| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
262054550 | pes2o/s2orc | v3-fos-license | Computer‐aided Design and 3D‐printed Personalized Stem‐plate Composite for Precision Revision of the Proximal Humerus Endoprosthetic Replacement: A Technique Note
Background Aseptic loosening is considered to be a rather uncommon complication in proximal humerus endoprosthetic replacement (PHER). However, patients with aseptic loosening often suffer severe bone loss, which poses a great challenge in the subsequent revision. Under this situation, a standard stemmed endoprosthesis is unavailable for revision limb salvage. Computer‐aided design and 3D‐printed personalized implants are an emerging solution for reconstructing complex bone defects. Case Presentation Here, we present a 67‐year‐old male who underwent PHER after tumor resection and developed aseptic loosening with severe periprosthetic osteolysis around the stem. A computer‐aided design and 3D‐printed personalized stem‐plate composite was used for the precision revision of this patient. During the follow‐up, encouraging results were observed, with good endoprosthetic stability and satisfactory limb function. Conclusion The computer‐aided design and 3D‐printed personalized stem‐plate composite used in the present case could help to achieve good endoprosthetic stability and satisfactory limb function. This 3D‐printed personalized stem‐plate composite seems to be an effective method for the precise revision of PHER in patients with severe periprosthetic osteolysis. In addition, it also provides a novel method for similar revision surgery of other joints or primary endoprosthetic replacement with severe bone defects.
Introduction
Advances in oncology, imaging, and surgical technologies have enabled limb salvage surgery to become the standard treatment. 4 Modular proximal humerus endoprosthetic replacement (PHER) has emerged as a preferred reconstruction approach after tumor resection due to its convenience, good cosmetic appearance, and acceptable limb function. 1,5,6 Soft tissue failure is the most common failure mode, while aseptic loosening is considered to be rather uncommon. 10,11 However, patients with aseptic loosening often suffer bone loss, which poses a great challenge in the subsequent revision. 12 Bone rarefaction or osteolysis around prosthetic stems results in a poor implantation environment for a revision endoprosthesis. In addition, the impairment of medullary cavity integrity around the stem may leave an ultra-short bone stock to accommodate a new stem. Under this situation, a standard stemmed endoprosthesis is unavailable for revision limb salvage. 13 Hence, we report a case using computer-aided design and a 3D-printed stem-plate composite for precision revision of the PHER after aseptic loosening. A long-term follow-up was performed to evaluate the surgical efficacy and clinical outcomes of this technique.
Clinical Data
A 67-year-old male underwent tumor resection of the left proximal humerus and subsequent reconstruction using an endoprosthesis 18 years ago. The patient complained of progressive pain for 30 months. He visited our institution and underwent a radiography examination in January 2019. Pain severity was assessed using the visual analogue scale (VAS), and the range of motion (ROM) of the shoulder and elbow was measured. The VAS score was 6 points. The ROM of the shoulder in six directions (forward flexion, backward extension, abduction, adduction, external rotation, and internal rotation) was 30°, 30°, 25°, 10°, 40°, and 40°, respectively. The ROM of the elbow in four directions (extension, flexion, pronation, and supination) was 0°, 110°, 70°, and 70°, respectively. The ROM results showed limited motion of the shoulder joint and mildly affected motion of the elbow joint. The X-ray showed significant loosening of the endoprosthesis, with severe periprosthetic osteolysis around the stem (Figure 1).
Computer-aided Design and 3D-Printed Stem-plate Composite
The computed tomography (CT) data of the left upper limb were collected and imported into Mimics software (Materialise Corp., Leuven, Belgium) for building 3D models of the affected limb and the primary endoprosthesis. The 3D models were saved in STL format and imported into Geomagic Studio software (Geomagic Inc., Morrisville, NC, USA) (Figure 2). Considering the severe osteolysis of the lateral bone, only the medial bone along with the short distal bone segment could be supportive for fixation of the revision endoprosthesis.
After informed written consent was obtained from the patient, a personalized stem-plate composite was planned. In detail, the main body of the stem-plate composite was a tapered straight stem, which was designed to provide intramedullary fixation and to be consistent with the limb alignment of the humerus. However, this patient had suffered significant bone loss in the lateral region of the residual humerus, resulting in inadequate fixation of the revision endoprosthesis using intramedullary fixation alone. Therefore, a side plate was designed on the lateral side for supplementary fixation of the poor lateral bone stock, which matched the lateral shape of the residual bone segment. The stem and the plate were designed as a monobloc, and the revision procedure was simulated preoperatively (Figure 3). To promote osseointegration, a porous structure layer (1.5 mm) was placed on the stem. The stem-plate composite was fabricated using electron beam melting technology (ARCAM Q10plus, Mölndal, Sweden), and the material was titanium (Ti-6Al-4V) powder. In addition, a modular proximal humerus endoprosthesis (PHE) was prepared, which could be assembled with the stem-plate composite.
Surgical Technique
The surgery was performed by the senior surgeon (CQ T). After general anesthesia, the patient was placed in a supine position and the left shoulder was slightly elevated with padding. A 20 cm long incision along the original incision in the front of the left shoulder was performed. After removing the hypertrophic scar tissue (Figure 4), the loosened PHE was extracted from the medullary cavity. The key points during surgery were removing as much of the bone cement as possible and inserting the 3D-printed stem-plate composite into the residual bone segment according to the preoperative planning. First, carefully removing the bone cement created a good implantation environment, which could ensure smooth insertion of the stem-plate composite and direct contact between the host bone and the porous interface for bone ingrowth. Second, a small amount of bone from the residual humerus was removed to create a channel to accommodate the passage of the connection between the plate and the stem. After ensuring the appropriate position of the 3D-printed stem-plate composite, screws were inserted and steel wire was strapped to increase initial stability. Then, the modular PHE was assembled with the stem-plate composite, with a twist angle of 30 degrees. Intraoperative bleeding was approximately 400 mL. Overall, intraoperative anesthesia was satisfactory, the patient's vital signs remained stable, and no special adverse events occurred. Postoperative anti-infection treatment, prevention of thrombosis, and close observation of drainage and vital signs were carried out. Postoperative imaging showed the endoprostheses in an accurate position. In addition, the osseointegration of the bone/implant interface was evaluated by Tomosynthesis Shimadzu Metal Artifact Reduction Technology (T-SMART), and the T-SMART images showed good osseointegration at the interface (Figure 5).
Discussion
This paper reported a patient who underwent PHER after tumor resection and suffered aseptic loosening. Radiographic examination results showed severe periprosthetic osteolysis around the stem. A modular stemmed endoprosthesis was unfeasible for the revision limb-salvage procedure in this patient. We designed a personalized stem-plate composite for revision, which combined intramedullary and extramedullary fixation. During the follow-up of 50 months, encouraging results were observed, with good endoprosthetic stability and satisfactory limb function.
Aseptic loosening is a common problem in the clinical application of endoprosthetic replacement to reconstruct bone defects after tumor resection. However, for PHER, aseptic loosening is uncommon, and there are only a few studies that investigate this failure mode. 10,11,17 Further analysis revealed that periprosthetic osteolysis due to stress shielding was an important reason for aseptic loosening.
This phenomenon was also observed in our case: severe periprosthetic osteolysis around the stem after a long follow-up of 18 years. In addition, osteolysis mainly occurred at the lateral humerus around the stem. In 2019, Wei et al. reported one case of aseptic loosening, in a 21-year-old female, among 20 patients who underwent proximal humerus endoprosthetic replacement. 17 Likewise, that patient showed significant osteolysis at the lateral humerus, combined with a periprosthetic fracture. Nevertheless, there is still a lack of a unified conclusion regarding the position of periprosthetic osteolysis. The poor implantation environment and limited bone stock following aseptic loosening often make the revision procedure difficult. In addition, revision for loosened PHER remains challenging due to the rarity of this complication in PHER. Until now, few publications have specifically addressed the aseptic loosening of PHER 11,17,18 (Table 1). Wei et al. described a simple revision technique in which a bone cement spacer was implanted to reconstruct the bone defect after taking out the loosened PHER. 17 Nevertheless, the mismatch between the shape of the bone cement spacer and the glenoid joint is a thorny problem. Also, total humerus replacement is an alternative revision strategy for patients like our case. However, sacrificing the two joints is unavoidable, which will impair limb function significantly compared with preservation of the elbow joint. According to a study by Schneider et al., the mean MSTS score using total humeral replacement after tumor resection was 26%. 19 In the present study, a stem-plate composite was designed for fixation of the revision PHE. Postoperatively, the ROM of the elbow in four directions (extension, flexion, pronation, and supination) was 0°, 125°, 80°, and 80°, respectively. At the last follow-up, the MSTS score was 27. Therefore, the stem-plate composite achieved precise revision of PHER, with preservation of the elbow joint, which could contribute to better limb function.
An extra-cortical plate has been proposed to supplement fixation in complex bone defect reconstruction; it is added onto the endoprosthesis to form a plate-endoprosthesis composite. 20 This technique has also been applied in PHER after large segmental resection of proximal humerus tumors. 17 Nevertheless, a plate-endoprosthesis composite requires an intact medullary cavity. For the present patient, lateral bone loss resulted in an incomplete medullary cavity. Therefore, the side plate was added to the stem. To the best of our knowledge, this is the first study to report this technique. The stem-plate composite had both intramedullary and extramedullary fixation effects. The tapered straight stem provided intramedullary fixation, which was consistent with the limb alignment of the humerus, and the side plate provided supplementary fixation to the poor lateral bone stock. Intramedullary and extramedullary fixation achieved initial stability of the revision PHE. In addition, the porous interface allowing bone in-growth can achieve biological and permanent fixation.
In conclusion, the computer-aided design and 3D-printed personalized stem-plate composite used in the present case could help to achieve good endoprosthetic stability and satisfactory limb function. This 3D-printed personalized stem-plate composite seems to be an effective method for the precise revision of PHER in patients with severe periprosthetic osteolysis. In addition, it also provides a novel method for similar revision surgery of other joints or for primary endoprosthetic replacement with severe bone defects. However, large cohort studies are needed to confirm these findings.
FIGURE 1 Anteroposterior (A) and lateral (B) X-rays show significant loosening of the endoprosthesis (red arrows), with severe periprosthetic osteolysis (blue arrows) around the stem.
FIGURE 3 Computer-aided design and 3D-printed personalized stem-plate composite. (A) 3D model of the bone stock that could be supportive for endoprosthetic fixation; (B) stem-plate composite combining intramedullary and extramedullary fixation; (C) simulation of implantation of the stem-plate composite; (D) stem-plate composite with porous structure layer (1.5 mm).
The VAS score improved to 1 point postoperatively. At the last follow-up, the ROM of the shoulder in six directions (forward flexion, backward extension, abduction, adduction, external rotation, and internal rotation) was 40°, 40°, 45°, 20°, 50°, and 50°, respectively. The ROM of the elbow in four directions (extension, flexion, pronation, and supination) was 0°, 125°, 80°, and 80°, respectively. In addition, no varus or valgus of the elbow joint was observed. Limb function was assessed by the Musculoskeletal Tumor Society (MSTS) score and the American Shoulder and Elbow Surgeons (ASES) shoulder score. The MSTS score was 27, and the ASES score was 87%. The left arm could meet the requirements of daily activities without limitation.
FIGURE 4 (A) Preoperative photos of the patient; (B) intraoperative photos of the hypertrophic scar tissue; (C) postoperative photo of the patient; (D) functional photos of the patient at the last follow-up. | 2023-09-20T06:17:59.417Z | 2023-09-18T00:00:00.000 | {
"year": 2023,
"sha1": "0f84949f0e248eed9c6c23a9492899e844b7730b",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/os.13857",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cd5751702544602c91e3ccb6d60fbfc7cb4a39bd",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252262877 | pes2o/s2orc | v3-fos-license | Developing a Sustainable Agricultural System in the Context of Sustainable Development Goals and Demands in Germany
Agriculture is one of the areas that is significantly contributing to deepening environmental problems and the environmental crisis itself. Therefore, the need to transform agriculture into a sustainable one is still very relevant. The international community has already confirmed this position, as well as the need to transform agricultural systems across the globe, by adopting the 2030 Agenda and the Sustainable Development Goals (SDGs). By this, countries have committed themselves to active solutions at national levels in order to come as close as possible to achieving this ambition. The aim of this paper is to examine in particular Goal 2 "Zero Hunger" and to look more closely at the commitments that countries in the global community have made. The main part of this paper is then to examine and analyse how these commitments to transform agriculture into a sustainable one have been reflected in the national policies of Germany as a country that is one of the most important agricultural countries in the world and thus potentially one of the biggest sources of environmental harm in this context. Our study will present particular steps and actions taken by the country since 2015 and will assess how the 2030 Agenda's agricultural intent has been fulfilled so far by this country in the almost 7 years since the adoption of the SDGs.
Introduction
Sustainable development (SD) is an increasingly relevant and important concept that is gaining a growing position throughout the international community. A number of problems have already taken on a global character and threaten the whole world, though to different degrees and with varying intensity. One of these problems is the growing population and the need to feed an increasing number of people, on the one hand, and the environmental crisis, on the other. This is why, particularly over the last two decades, there has been an increasing effort to develop and implement global rules and goals to promote and ensure sustainable development, combined with sustainable agricultural production and environmental protection. However, this is not an easy task. The aim of the international community is therefore to find a way to solve or at least mitigate the aforementioned problems, as well as to achieve sustainability in development and to preserve the world in an appropriate form for future generations. However, industrial agriculture is a significant factor in intensifying a number of environmental problems and it is a major contributor to pollution, which needs to be dramatically reduced in order to bring the world closer to sustainability. In this respect, the concept of sustainable agriculture and its promotion and implementation in particular countries is becoming increasingly important as "the growing sociocultural burden of nature connected mainly with the development of consumption economy seriously threatens lives of future generations" (Svitačová & Moravčíková, 2017, p.196).
Due to the urgency of the situation and the failure of previous sustainability strategies, the international community has collectively and unanimously adopted the 2030 Agenda for Sustainable Development (UN, 2015c) together with the new Sustainable Development Goals – SDGs (UN, 2015b) and set clear ambitions on how to achieve sustainability, not excluding the support of sustainable agriculture. The situation is more complex in less developed countries because, although they contribute much less to global problems than more developed countries, they suffer more from their consequences. They are also mostly more populated, so they are dependent on agriculture, and thus its transformation to a sustainable one is necessary to protect the environment. However, we also know many developed countries in the world that are among the global leading countries in agricultural production, and therefore, even in these countries, the transformation of agriculture is actually even more urgent. The reason is that we assume that these countries, despite being less populated than most developing countries, by their share of agricultural production and their use of various environmentally harmful techniques and facilities, pose a much greater threat to the environment. A responsible approach of governments and the application of effective solutions to adapt to the 2030 Agenda and SDGs are of crucial importance in this case. However, the dissemination of knowledge in the field of agriculture and sustainable development has also attained considerable importance, as has the promotion of young people in the field of agriculture and a real effort by all those involved, so that the objective can be effectively achieved.
The aim of this paper is to examine the term sustainable agriculture and in particular Goal 2 "Zero Hunger" (The Global Goals, 2022), as well as to look more closely at the commitments that countries in the global community have agreed on. The main part of this paper is then to examine and analyse how these commitments to transform agriculture into a sustainable one have been reflected in the national policies of Germany as a country that is one of the most important agricultural countries in the world and thus potentially one of the biggest sources of environmental harm in this context. Our study will present the development actions taken by the country since 2015 and will assess how the 2030 Agenda's agricultural intent has been fulfilled so far by this country in the almost 7 years since the adoption of the SDGs.
Sustainable agriculture and the concept of sustainable development
The concept of sustainable development – which means the "development that enables to meet the needs of the present without compromising the ability of future generations to meet their own needs" (World Commission on Environment and Development, 1987, p.43) – is becoming increasingly important and has led to ever more sophisticated strategies for achieving it at the level of the international community, but also of particular countries. The issue of sustainable development has undergone a considerable evolution since it was first defined. It is now represented by various international documents, and in particular by the 2030 Agenda for Sustainable Development (United Nations Knowledge Platform, 2015) and the Sustainable Development Goals (SDGs) as part of it, which draw together all the experiences from previous successful and unsuccessful efforts in achieving it. Those were adopted by the international community in September 2015. The 2030 Agenda and the 17 goals with 169 targets (see Figure 1) reflect the world community's efforts to achieve sustainable development and currently represent one of the highest priorities for the world. In their content, the SDGs are defined in a quite detailed way and each one is strongly linked to the biggest challenges, which influence (although differently) all the countries of the world. When we look at the issue of sustainable development and the particular goals, based on the current and most significant global problems of mankind, we can see that the environmental problems can be considered to be the most critical ones today. Many of these problems are greatly intensified by industrial agriculture. Therefore, there is a strong emphasis on transforming agriculture into a sustainable one, and this is specifically the content of SDG 2 (Zero Hunger): to end hunger, achieve food security, improve nutrition, and promote sustainable agriculture. Particularly important is target 2.4 – By 2030, ensure sustainable food production systems and adopt resilient agricultural practices that increase productivity and production, help sustain ecosystems, strengthen adaptive capacity to climate change, extreme weather, drought, floods and other disasters, and progressively improve soil and land quality (UNDP, 2015). Within this, Indicator 2.4.1 – Proportion of agricultural area under productive and sustainable agriculture – is also important. The basis of this indicator is to measure the progress in reaching more productive and sustainable agriculture. It is made up of relevant sub-indicators that should provide governments with strategic information for evidence-based policies. This indicator was developed through a multi-stakeholder process involving statisticians and technical experts from particular countries, international organisations, national statistical offices, civil society, and the private sector. It brings together the issues of productivity, profitability, resilience, land and water, decent work, and well-being to reflect the multidimensional nature of sustainable agriculture (FAO, 2022b).
The importance of sustainable agriculture
Agriculture changed its character especially after World War II. Modern technologies, mechanisation, the use of chemicals, specialisation, and a policy that favoured the maximisation of production emerged. Industrial agriculture produces huge quantities of food at low prices. However, this is only possible through the practices that endanger the environment, health, rural communities, animals, etc. We agree with the opinion that the global environmental crisis as a whole is a consequence of the human strategy of overproduction, accumulation and consumption, the implementation of which is now reaching the limits of natural resources and nature's ability to absorb the pollution created by this overproduction and consumption (Sťahel, 2016). Thus, despite the positives of industrial agriculture, there are significant associated costs that affect the possibility of reaching SD. The most serious impacts of industrial agriculture on the environment are: depletion of topsoil, contamination of groundwater, degradation of rural communities, worsened conditions for farm workers, increased production costs, etc. Sustainable agriculture not only addresses many environmental and social issues, but offers innovative and economically viable opportunities for farmers, workers, consumers, policy makers, and many others throughout the food system to grow their crops and produce (UC Davis, 2021). Therefore, today we can see the promotion of "apparent changes in land use and the impact of human activity on the planet's ecosystem and the limitations of human activity that result from the limits of the system" (Šeben-Zaťková, 2015, p.1144).
Especially in recent decades, the increase in world population and the consequent growth in demand for animal products has led to the intensification of farming systems, which leave a huge footprint and cause considerable environmental harm. Sustainable agriculture and food production systems that promote climate-resilient and environmentally friendly practices have significant potential to preserve our valuable natural resources. By following simple practices such as nutrient recycling and not using agricultural chemicals, sustainable farming systems can have a wide reach, allowing countries to feed a growing population without causing irreversible environmental change (Friend of the Earth, 2022b). The basic principle of sustainable agriculture is to maintain a balance between the demands of food production and the preservation of the environment. Sustainable agriculture is therefore a type of agriculture that focuses on the production of sustainable agricultural products without compromising the ability of present or future generations to meet their needs. Furthermore, the use of sustainable agriculture standards and certificates is important here as it is a way of communicating to customers that a product is sustainably produced or grown (Friend of the Earth, 2022a).
Thus, in the areas of food security, nutrition, land degradation, desertification, and drought, a strong SDG on food security and agriculture was considered to be crucial to poverty eradication and achieving sustainable development (UN Sustainable Development, 2015). In this respect, within SDG 2 we can also find a specifically described topic on "Food security and nutrition and sustainable agriculture". To achieve this, it is really important that agricultural systems globally become more productive and less wasteful. Land, healthy soil, water, and plant genetic resources are key inputs for food production, and their increasing shortage requires their sustainable use and management. For example, the restoration of degraded land through sustainable agricultural practices would reduce the pressure to cut down forests for agricultural production. Similarly, the potential benefits of soil restoration for food security and climate change mitigation are huge. Moreover, the traditional knowledge of farmers can support productive food systems through wise and sustainable management of soil, land, water, nutrients, and greater use of organic fertilizers (UN, n.d.). Reducing food waste is also key to ensuring food security and sustainable agriculture: the more food people waste, the more needs to be produced, which puts a burden on soil, water, and ecosystem resources (UN Sustainable Development, 2015). Last but not least, it is the high intention of countries and the whole global community to increase investments in research, development, and technology demonstration to improve the sustainability of food systems worldwide (UN, n.d.).
We agree that industrial agricultural production is highly unsustainable in the context of environmental impact. Thus, the above-mentioned problems in this area can be mitigated through the following principles to guide the strategic development of new approaches and the transition to sustainability: 1) Improve efficiency in the use of resources; 2) Direct action to conserve, protect and enhance natural resources; 3) Promote agriculture that protects and improves rural livelihoods and social well-being; 4) Promote agriculture that enhances the resilience of people, communities and ecosystems, especially to climate change and market volatility; 5) Good governance is essential for the sustainability of both the natural and human systems (FAO, 2022a).
In general, the concept of sustainable agriculture integrates several main objectives – environmental health, economic profitability, and social and economic justice. Achieving the goal of sustainable agriculture is the responsibility of all actors in the system. Every person involved in the food system can play a role in ensuring a sustainable agricultural system (UC Davis, 2021). In this context, sustainable agriculture in its simplest sense means the production of food, fibre, or other plant or animal products using agricultural techniques that protect the environment, people, and animals (Grace Communication Foundation, 2021).
Agricultural sustainability is a complex goal with all three dimensions of SD: environmental (good management of the natural systems and resources on which farms depend), economic (a sustainable farm should be a profitable enterprise that contributes to a strong economy), and social (it should treat its workers fairly and have a mutually beneficial relationship with the surrounding community). These include: building and maintaining healthy soils, wise water management, minimising air, water and climate pollution, promoting biodiversity, etc. By following these, farms can avoid harmful impacts without sacrificing productivity or profitability (Union of Concerned Scientists, 2021).
In the context of achieving sustainable agriculture, many new documents and standards have been adopted at the international or national level. An important one is, for example, the 2020 Sustainable Agriculture Standard: Farm Requirements. We can agree that the need for sustainable agriculture has never been greater. By providing a practical framework for sustainable agriculture and a dedicated set of innovations, the farm requirements can help farmers develop better crops, adapt to climate change, increase their productivity, set targets for sustainable outcomes, and focus investments to address the biggest threats of the current world (Rainforest Alliance, 2022).
Data and Methods
The present work is based on qualitative research that draws on a theoretical analysis of the current status and prospects for achieving sustainable agriculture in the world and the goals that the international community has set and unanimously adopted for this purpose.
The study was carried out within the framework of the Erasmus+ KA2 Strategic Partnership project SUSTA (2020-1-PL01-KA203-081980), which aims to create an engaging concept for teaching sustainability to students of business-related studies, thereby raising their awareness of and involvement in the problems of sustainability. The aim of the research in the present study is to theoretically examine the main purpose, particular plans, and the possible outcomes in the direction of achieving sustainable agriculture aimed at a significant reduction of the global environmental burden. Consequently, the study focuses on Germany as a highly developed country, which also belongs to the most important and largest agricultural entities in the global community. The next step is then to examine how this country has changed its agricultural practices since 2015 and the adoption of the 2030 Agenda, and how it is progressing towards SDG 2 Zero Hunger (End hunger, achieve food security and improved nutrition and promote sustainable agriculture).
For this purpose, we used several scientific methods. First, we aimed to map, describe, and identify the importance and essence of the sustainable agriculture concept generally and within the 2030 Agenda, as well as the global goals for achieving sustainable development adopted within this agenda. We then explored, analysed and identified specific mechanisms to promote sustainable agricultural practices in Germany, as one of the most important agricultural countries globally, as well as the mechanisms that the country has adopted and implemented since 2015 and the adoption of the SDGs.
The results allowed us to assess the current state of the analysed area towards a realistic and effective implementation of SDG 2 in particular and the achievement of sustainability in agriculture, which is still one of the most important priorities towards reducing global environmental burdens and pollution.
For our scientific interest, we chose to work with the most commonly used worldwide scientific information databases and search engines, such as Google Scholar, SCOPUS, Web of Science and ResearchGate, as well as other available resources, especially the websites and data of the United Nations and various other global organizations focused on sustainable development and the sustainable agriculture model, as well as databases and websites containing information and data on Germany, its political practices and regulations set up to achieve sustainability in agriculture related to the SDGs.
Results and Discussion
According to the BMEL (Federal Ministry of Food and Agriculture), Germany, while being a land of engineering ingenuity and industry, has always maintained a strong agricultural sector. Despite a high population density, half of the land is farmed. Almost a million workers produce goods worth more than 50 billion euros a year in around 275,400 agricultural enterprises (BMEL, 2022a; BMEL, 2020c). The way in which agriculture and forestry (on more than 80% of the land) are operated has a major impact on nature and the environment (BMEL, 2020b). Germany's farming sector is among the four largest producers in the EU, mainly due to animal husbandry. In order to feed the livestock (over 200 million animals), more than 60% of agriculturally used land is utilized for growing feed for them. Some of these and other crops are also dedicated to the production of renewable energy (BMEL, 2022a; BMEL, 2020c). Germany has for many years been the world's third largest exporter of agricultural goods; one third of its agricultural output goes into exports, and the food industry generates one third of its total revenue from export activities (BMEL, 2020a).
The national Sustainable Development Strategy of Germany (GSDS) created in 2002, with measures adopted in 2010 and regularly updated (indicators every two years and progress reports every four years), was radically revised in 2016 to align it with the 17 SDGs of the Agenda 2030, with additions and updates in 2018 and 2021 in response to the COVID-19 pandemic (The Federal Government, 2021b). Even before this agenda was adopted in 2015, the German Government was working on making the transformation of the agricultural and food sector more sustainable. Examples in agriculture include the development of strategies for arable and livestock farming, amendments to the Fertiliser Application Ordinance, the Strategy for the Future of Organic Farming, and the ongoing changes to the EU's Common Agricultural Policy (The Federal Government, 2021a, p.58).
The Federal Statistical Office (Destatis – Statistisches Bundesamt) evaluates the progress of GSDS national and international measures on the basis of 65 indicators, and the country's sustainable development policy is regularly monitored by an international group of experts by peer review (Zech, 2019). In March 2021, Destatis checked to what extent the Federal Government had achieved its goals for 2020. Of the 72 GSDS target areas, twelve goals were to be specifically achieved by 2020 (Destatis, 2021). In July 2021, Germany reported to the United Nations High-Level Political Forum on Sustainable Development (HLPF) on its national activities to implement the 2030 Agenda based on the GSDS (The Federal Government, 2021c). The new GSDS was refined with the assistance of all ministries, and the public was involved through an extensive dialogue process lasting several months. The updated strategy introduced six decisive transformation areas on which future sustainability politics will focus, including sustainable agri-food systems. Transformative measures have been established in this area, including soils and forests acting as carbon sinks, the 2035 arable farming strategy and the organic farming future strategy, among others (BMEL, 2022a).
Within the GSDS, SDG 2 is covered by three indicators in two categories (see Table 1 and details below; The Federal Government, 2021d). Organic farming, along with conventional farming, is considered an important pillar of the country's agricultural and food industries. The Federal Ministry of Food and Agriculture has therefore developed the Strategy for the Future of Organic Farming, which is to be used as a guideline to significantly improve the development opportunities for organic farming and food management and thus also enable the participation of domestic agriculture in market opportunities (BMEL, 2020c). Although over the past few years the share of organically farmed area has steadily increased, its rate has not been fast enough (in 2020, only 9.6% of utilised agricultural land was farmed organically). In this case, the target of increasing the share to 20% by 2030 might not be achieved (BMEL, 2020b); therefore, 24 measures along five pivotal lines of action are being implemented (designing a viable and coherent legislative framework; facilitating access to organic farming; fully utilizing the demand potential and expanding it further; improving the productivity of organic farming systems; and rewarding environmental services adequately) (BMEL, 2020c).
As for food security, funds disbursed for the application of the guidelines and recommendations of the UN Committee on World Food Security (CFS) are to be increased appropriately as a percentage of total spending on food security by 2030 (The Federal Government, 2021a).
Agricultural production is also reliant on the availability of land. The GSDS goal of reducing daily land-take to less than 30 hectares by 2020 has so far been missed by a wide margin (currently at more than 50 hectares a day). Soil regeneration should also be promoted by means of appropriate funding (ZKL – Commission on the Future of Agriculture, 2021). The European Commission's (2020) Farm to Fork Strategy set an ambitious target for 2030 of reducing nutrient losses by 50% and fertiliser quantities by 20% while maintaining soil fertility levels (ZKL, 2021). The aim of the Federal Conservation Act in conjunction with the GSDS is therefore to reflect the special importance of soil (BMEL, 2020b).
The biodiversity and quality of life indicator surveyed as part of the GSDS is still far from the targeted 100% for 2030 for agricultural landscapes and currently stands at 59.2% (ZKL, 2021). On the other hand, among the renewable energy resources, bioenergy continues to play an important role (in 2019, about 15% of the primary energy used in the country originated from renewable energy resources). Of this, bioenergy alone supplied around 58%. It is forecasted that bioenergy from domestic sources alone would have sustainable potential to provide 17% of Germany's primary energy in 2050 (BMEL, 2020c).
Conclusion
We can summarize that there is high pressure on countries all over the world to transform their agricultural practices into sustainable ones. The international community has agreed on 17 SDGs, and one of those is necessarily aimed at implementing sustainable agriculture in practice, with several particular targets to be achieved within this objective. Those are prepared and described in detail and represent a task and responsibility for each country. However, they have even greater importance when we are talking about the greatest agricultural countries in the world (including Germany), as these countries harm the environment the most through industrial agricultural practices.
As for lessons learned, areas requiring action and anticipated priority areas, the Arable Farming Strategy (for making arable farming sustainable) and intensified efforts to make the transition to organic farming are identified as key ones by the German Government. On several requirements for the accelerated achievement of targets, there is added urgency due to the COVID-19 pandemic to combine economic recovery measures with specific actions, in order to foster the development of multi-stakeholder partnerships as well as to promote organic farming worldwide. The GSDS is designed to be continuously revised and further developed.
The concept provides guidelines for viable policies for the future across the board. The ambitious update in 2021 adopted by the Federal Government was an important step for German sustainable development policy, as it clearly defines priority spheres of action in six areas of transformation. By late 2023/early 2024, it should be comprehensively updated in a process involving all society stakeholders.
The implementation of sustainable agriculture requires new efforts in development, research, and also implementation. One of the most important factors in this regard is specialized and wise management as well as commitment at the highest government levels. This must be connected with an action programme that addresses the needs of agricultural producers and farmers in the context of the environment and public awareness. There is a great need to promote sustainable agriculture, to create a market for sustainable food and to formulate demands for the reform of agricultural policy and regulation. Defenders of industrial agriculture claim that only this type of agriculture can feed such a huge world population, but this is not entirely true. According to the data and analyses, proper implementation of sustainable agriculture practices can be more effective in achieving this goal and can also protect and sustain the environment. It is therefore necessary to promote the dissemination of knowledge and information about this new strategy among people, groups, entire nations, and their decision-making bodies, and to adapt the national policies of particular countries to achieve this goal to the greatest extent possible. | 2022-09-15T17:03:44.554Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "867bef78631d3e3701e0a0682156cc110a4f5697",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.15414/isd2022.s2.02",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b9724d4c5aacd0bb15e6141e1dd25e277577fbc1",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": []
} |
119327067 | pes2o/s2orc | v3-fos-license | Precise large deviations for random walk in random environment
We study one-dimensional nearest neighbour random walk in site-random environment. We establish precise (sharp) large deviations in the so-called ballistic regime, when the random walk drifts to the right with linear speed. In the sub-ballistic regime, when the speed is sublinear, we describe the precise probability of slowdown.
1. Introduction
1.1. Random walk in random environment. Throughout this article we will be interested in some asymptotic properties of nearest neighbour random walk in site-dependent random medium. Starting from the early work of Solomon [24], this model has attracted a lot of attention over the past few years since, apart from motivations originating in physics, it exhibits a lot of features not observed in the classical random walk. We refer to the notes of Zeitouni [27] for an introduction to the topic.
The main contribution of this article is an extension of large deviation results obtained previously by Dembo, Peres and Zeitouni [8] to precise (rather than logarithmic) asymptotics of the deviations. We also establish the precise probability of slowdown, when the speed of the random walk is sublinear, improving thus the result of Fribergh, Gantert and Popov [11]. For a precise set-up, let Ω = (0, 1) Z be the set of all possible configurations of the environment and let F be the σ-algebra generated by the cylindrical subsets of the product space Ω. An environment is an element ω = (ω n ) n∈Z of the measurable space (Ω, F). By P we denote a probability distribution on (Ω, F). Once the environment ω is chosen with respect to P it remains fixed and determines the transition kernel of a random walk starting at point 0. Denote the set of trajectories by X = Z N and let G be the corresponding σ-algebra. A quenched (fixed) environment ω provides us with a random probability measure P ω on X , such that P ω (X 0 = 0) = 1 and P ω (X n+1 = x + 1 | X n = x) = ω x , P ω (X n+1 = x − 1 | X n = x) = 1 − ω x . Then X = (X n ) n≥0 is a Markov chain on Z (with respect to P ω ), called random walk in random environment ω (RWRE).
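As an aside for readers who wish to experiment, the model just described is easy to simulate. The sketch below is illustrative only and is not part of the paper: it takes the ω x to be iid Beta(3, 1) variables (any law on (0, 1) would do) and runs the walk under the quenched measure P ω ; since the environment is resampled on every call, repeated calls produce samples from the annealed law P.

```python
import numpy as np

rng = np.random.default_rng(0)

def quenched_walk(n_steps, a=3.0, b=1.0):
    """Sample an iid Beta(a, b) environment (illustrative choice) and run the
    nearest-neighbour walk started at 0 under the quenched law P_omega."""
    L = n_steps                                # the walk cannot leave [-L, L] in n_steps steps
    omega = rng.beta(a, b, size=2 * L + 1)     # omega[x + L] is the environment at site x
    x, path = 0, [0]
    for _ in range(n_steps):
        x += 1 if rng.random() < omega[x + L] else -1   # right w.p. omega_x, left otherwise
        path.append(x)
    return np.array(path)

path = quenched_walk(20_000)
print(path[-1] / 20_000)                       # empirical speed X_n / n
```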
In the context of RWRE one can distinguish two equally valid aspects, that is quenched and annealed behaviour. The former refers to phenomena encountered with respect to P ω for almost all (a.a.) ω. The latter, which is our main focus here, is with respect to the annealed probability, that is the average of P ω over ω. Formally, we define the annealed probability P as follows. By the monotone class theorem, one can verify the measurability of the map ω → P ω (G) for any G ∈ G. This allows us to define the mentioned annealed probability measure P on (Ω × X , F ⊗ G), which is a semi-direct product P = P ⋉ P ω given by P(F × G) = ∫ F P ω (G) P (dω), F ∈ F, G ∈ G. Note that X does not form a Markov chain under the annealed measure P since, loosely speaking, the process X "learns" the environment as it traverses Z. Throughout this article we will assume a particular structure of the environment, namely that the measure P on Ω is chosen in such a way that ω = (ω n ) n∈Z forms a sequence of independent identically distributed (iid) random variables.
One natural question regarding the behaviour of X concerns limit theorems analogous to those treating the classical random walk. Obviously one has to take the random environment into account. To quantify it, consider the random variables A n = (1 − ω n )/ω n , n ∈ Z. This sequence will play a crucial role in what follows, since the A n 's are the means of the reproduction laws of a branching process associated with X (see Section 2 for details). Solomon [24] proved that the process X is ω a.s. recurrent if and only if E log A = 0. Here we are interested in the transient case when (1.1) E log A < 0, and then, since the environment prefers a jump to the right, lim n→∞ X n = +∞ P a.s. Solomon [24] also proved the law of large numbers, that is, P a.s., (1.2) lim n→∞ X n /n = v.
It is known that the limit v is constant P a.s. and that one can distinguish two regimes: (1) the ballistic regime (EA < 1), when v = (1 − EA)/(1 + EA); (2) the sub-ballistic regime (EA ≥ 1), when v = 0. The first order asymptotic of X in the recurrent case was investigated by Sinai [23] with a weak limit identified by Kesten [18]. The central limit theorem corresponding to (1.2) was proved by Kesten, Kozlov and Spitzer [19], yielding weak convergence of (X n − vn)/a n (α).
The limiting distribution as well as the appropriate normalization a n (α) are related to the value of a parameter α > 0, for which (1.3) EA α = 1. Note that the above condition for α > 1 implies ballisticity.
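To make the quantities above concrete, here is a small worked computation (for the same illustrative Beta(3, 1) environment as in the sketch above, not a choice made in the paper). For ω ∼ Beta(a, b) and A = (1 − ω)/ω one has EA^s = B(a − s, b + s)/B(a, b) for s < a, so EA, the ballistic speed v = (1 − EA)/(1 + EA) and the exponent α solving (1.3) can all be evaluated directly.

```python
from scipy import special, optimize

a, b = 3.0, 1.0       # illustrative Beta(a, b) law for omega_x; not taken from the paper

def moment_A(s):
    # E[A^s] = B(a - s, b + s) / B(a, b) for A = (1 - omega)/omega, omega ~ Beta(a, b), s < a
    return special.beta(a - s, b + s) / special.beta(a, b)

EA = moment_A(1.0)
v = (1.0 - EA) / (1.0 + EA)                                       # ballistic speed when EA < 1
alpha = optimize.brentq(lambda s: moment_A(s) - 1.0, 0.5, 2.9)    # root of E[A^alpha] = 1
print(f"EA = {EA:.3f}, v = {v:.3f}, alpha = {alpha:.3f}")         # 0.500, 0.333, 2.000
```

For Beta(3, 1) this gives EA = 1/2, v = 1/3 and α = 2, so the toy environment falls into the ballistic regime relevant for Theorem 1.2 below.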
1.2. The ballistic regime. The aim of this article is to investigate large deviations corresponding to the convergence (1.2). This problem has already attracted some attention in the probabilistic community, resulting in works of Dembo et al. [8], Pisztora, Povel and Zeitouni [22] and Varadhan [25]. However, all mentioned articles deliver asymptotics of the logarithm of the probability of a large deviation. Our aim is to sharpen some of these results and deliver a (precise) asymptotic of the probability of a large deviation. The quenched behaviour, which is not of our interest here, has also accumulated a fair amount of literature devoted to it. This resulted in the works of Greven and den Hollander [14], Gantert and Zeitouni [12], Comets, Gantert and Zeitouni [7] and Zerner [28]. In spite of the time that has passed since the work of Solomon [24], RWRE still attracts a lot of attention in the literature, as seen from the research of Dolgopyat and Goldsheid [10], Peterson and Samorodnitsky [20], Bouchet, Sabot and dos Santos [1].
In this paper we consider large deviations of X n /n in the ballistic regime and aim to describe the asymptotic behaviour of P(X n − vn < −x) as n, x → ∞. We assume only that P[A > 1] > 0 which, excluding some degenerate cases, entails (1.3) for some α > 0. In regime (1) this problem was considered by Dembo et al. [8], where it was established that the probability of a deviation is subexponential.
We aim to prove a result describing the precise, rather than logarithmic, behaviour of the deviations of X.
Theorem 1.2. Suppose that (1.3) holds for some α > 1, P[A = 1] < 1 and that EA α+δ < ∞ for some δ > 0. Assume additionally that the law of log A is nonarithmetic. Then where C(α) > 0 and where M > 2, ε > 0 and b n , c n → ∞ such that c n ≤ n 1/2 log(n) −1 and b n < vn − n 1/α log(n) M if α ∈ (1, 2] and b n < vn − c n n 1/2 log(n) if α > 2. In particular, choosing The constant C(α) can be represented in terms of the branching process with immigration associated with X. We will provide more details in Section 2 and Section 3 after we present the construction of the process in question and deliver some tools.
In order to prove our main result, we will use the fact that the jumps of X have the structure of a branching process with immigration. The problem of large deviations of X will boil down to deviations of the total population size of the mentioned branching process. This approach was used previously by Dembo et al. [8] and Kesten et al. [19]. Next, since the branching process can be relatively well approximated by the environment, we will be able to determine the most probable moment when the deviation happens. A fortiori, the large deviations of X come from large deviations of the environment, which is a phenomenon used by Dembo et al. [8] and Kesten et al. [19]. The final arguments leading us to Theorem 1.2 are strongly based on the methods developed by Buraczewski et al. [4], who considered large deviations results for partial sums of some stochastic recurrence equation.
1.3. The sub-ballistic regime. If condition (1.3) holds for some α ≤ 1, then X n /n converges to 0 a.s. For α < 1 the process {X n } is typically at distance of order O(n α ) from the origin, as follows from [19]. The annealed probability of slowdown was described by Fribergh et al. [11], who proved that it decays polynomially.
Here we obtain a precise asymptotic.
Theorem 1.4. Assume additionally that the law of log A is nonarithmetic. Then where C(α) > 0 and Γ n = (c n log n, n α /(log n) M ) for M > 2α and c n → ∞.
1.4. The structure of the paper. The article is organized as follows. In Section 2 we present an associated branching process in random environment with immigration and translate the problem of large deviations of RWRE into those of BPRE with immigration. In Section 3 we present some intuitions related to our arguments. The last three sections are devoted to the proof of our results.
2. Branching process in random environment with immigration
From now on, we will suppose that the assumptions of Theorem 1.2 are in force.
2.1. Construction of associated branching process with immigration. We will begin by introducing a branching process in random environment with immigration associated with X. For this reason consider the first hitting time of X at level n, given by T n = inf{k ≥ 0 : X k = n}.
As shown in [19], one can express T n using a branching process. To see that, let U n i be the number of steps made by X from i to i − 1 during [0, T n ), that is U n i = #{0 ≤ k < T n : X k = i, X k+1 = i − 1}. Then, since X 0 = 0 and X Tn = n, we have T n = # of steps during [0, T n ) = # of steps to the right during [0, T n ) + # of steps to the left during [0, T n ) = n + 2 · # of steps to the left during [0, T n ) = n + 2 Σ i<n U n i . Note that the summation above extends over all integers i ∈ (−∞, n). As a conclusion, all the randomness of T n comes from the infinite sum Σ i<n U n i .

It turns out that (U n i ) i≤n exhibits a branching structure. To make it evident, fix an environment ω ∈ Ω, an integer n ≥ 0 and consider the sequence U n n , U n n−1 , . . .. Obviously U n n = 0 since X cannot reach n before the time T n . Firstly, we will inspect 0 ≤ i < n. Note that a jump i → i − 1 can occur either before the first jump i + 1 → i, between two jumps i + 1 → i or after the last jump i + 1 → i. Whence, we may express U n i in the following fashion: U n i = Σ k=0,...,U n i+1 V i k , where V i 0 denotes the number of jumps i → i − 1 before the first jump i + 1 → i, for U n i+1 > k > 0, V i k denotes the number of jumps i → i − 1 between the kth and (k + 1)th jump i + 1 → i, and for k = U n i+1 it is the number of jumps i → i − 1 after the last jump i + 1 → i. Note that since the underlying random walk is transient to the right under P ω , the V i k 's are iid with geometric distribution with parameter ω i , that is P ω (V i k = l) = ω i (1 − ω i ) l , and moreover they are independent of U n i+1 . For i < 0 the behaviour of U n i is different. Since X starts from 0, there will be no jumps from i → i − 1 before the first jump i + 1 → i. Apart from that, the relation between U n i and U n i+1 is the same as previously, more precisely U n i = Σ k=1,...,U n i+1 V i k , where V i k is distributed as indicated by (2.2).

In conclusion {U n n−j } j≥0 forms a sequence of generation sizes of an inhomogeneous branching process with immigration in which one immigrant enters the system only at the first n generations. The reproduction law is geometric with parameter ω n−j in the jth generation. We will ease the notation and consider a branching process in random environment Z = {Z n } n≥0 with evolution which can be described as follows. We start at time n = 0 with no particles, so that Z 0 = 0. Next the first immigrant enters the system and generates ξ 0 0 offspring with geometric distribution with parameter ω 0 , that is P ω (ξ 0 0 = l) = ω 0 (1 − ω 0 ) l ; these particles will form the first generation, i.e. Z 1 = ξ 0 0 . At time n for n ≥ 1, the (n + 1)th immigrant enters the system and reproduces independently from other particles (with respect to P ω ). Their offspring, together with the offspring of the particles of the nth generation, will form the (n + 1)th generation, that is Z n+1 = Σ k=0,...,Z n ξ n k , where {ξ n k } k≥0 are iid with geometric distribution P ω (ξ n 0 = l) = ω n (1 − ω n ) l and independent of Z n . Note that Z n+1 depends on the environment up to time n, that is it depends on ω 0 , . . . , ω n .

To analyse Z, it will be convenient to group the particles depending on which immigrant they originated from, so let Z i,n denote the number of progeny alive at time n of the ith immigrant. Note that then {Z i,n } n≥i forms a homogeneous branching process: Z i,n = 0 for n < i and, with respect to the quenched probability P ω for all ω ∈ Ω, for n ≥ i the particles counted by Z i,n reproduce according to the geometric laws described above. This process is subcritical, since the quenched mean of the geometric reproduction law at time n equals A n = (1 − ω n )/ω n and by our standing assumption E[log(A)] < 0.
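The construction just described can be simulated directly, which may help to build intuition before the formal analysis. The sketch below is illustrative (it reuses the Beta(3, 1) environment from the earlier sketches): at each step the new immigrant and each of the Z n current particles produce Geometric(ω n ) offspring, whose quenched mean is exactly (1 − ω n )/ω n = A n , the source of the subcriticality noted above.

```python
import numpy as np

rng = np.random.default_rng(1)

def branching_with_immigration(n_gen, a=3.0, b=1.0):
    """Z_0 = 0; given Z_n, the new immigrant and the Z_n current particles each produce
    offspring with P_omega(xi = l) = omega_n (1 - omega_n)^l, and these form Z_{n+1}."""
    omega = rng.beta(a, b, size=n_gen)          # illustrative iid environment
    Z = np.zeros(n_gen + 1, dtype=np.int64)
    for n in range(n_gen):
        parents = Z[n] + 1                      # current particles plus the new immigrant
        # Generator.geometric counts trials up to the first success (support {1, 2, ...});
        # offspring counts are the numbers of failures, hence the shift by `parents`
        Z[n + 1] = rng.geometric(omega[n], size=parents).sum() - parents
    return Z

Z = branching_with_immigration(200)
print(Z[:10], Z.sum())                          # early generations and the total progeny
```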
Whence, we are allowed to consider the total population size of the process initiated by the ith immigrant, as well as the total size of the population started by the first n immigrants. Now, since ω forms a sequence of iid random variables, these quantities can be averaged over P. Our strategy is to establish Theorem 2.1, stated below, from which we will infer Theorem 1.2 and Theorem 1.4.
where d n = EW n for α > 1, d n = 0 for α ≤ 1 and for M > 2, c n , s n → ∞ and s n = o(n).
Theorems 1.2 and 1.4 are relatively simple corollaries from Theorem 2.1. Therefore we first establish the implication, and in the remaining part of the paper we concentrate on the proof of the above result. Below we present how Theorem 1.2 can be deduced. We skip the details concerning our second result, Theorem 1.4. From the proof we can easily deduce that for the constant C(α) appearing in Theorem 1.2 one has

Proof of Theorem 1.2. Recall that with respect to the annealed probability P

Step 1. Lower estimates. Write, for x ∈ Γ n ,
Step 2. Upper estimates. We will apply an argument similar to the one presented in [8]. Denote by L j the longest excursion of X to the left of j, after the first hitting time at j. By virtue of Lemma 2.2 in [8], P(L j > k) ≤ Cρ k .
Take k = D log n for some large D which we will specify later. Note that The second term is smaller than n −εD , which with a proper choice of D is negligible. To estimate the first term we write
2.2. Quantification of the environment. We will start with a few useful formulas for the process with immigration {Z n } n≥0 and the process initiated by the ith immigrant (2.4) and an appeal to independence of ξ i k 's and Z i,n with respect to P ω , we get Whence, we infer that For the recursive formula for the quenched moments of Z n , we go back to (2.3) and deduce that E ω [Z 0 ] = 0 and for n ≥ 0, So that after a simple inductive argument and the corresponding quenched mean, for n ≥ k > i Finally, denote for simplicity Notice that Y 1 k,n and Y n−k have the same distribution.
We have defined two processes $\{Y_n\}_{n\ge0}$ and $\{\tilde Y_n\}_{n\ge0}$. The first one admits the recursive formula $Y_n = A_n Y_{n-1} + A_n$, which is one of the most recognized Markov chains and is a particular example of the stochastic affine recursion, also called in the literature the random difference equation, or just the 'ax + b' recursion. The last name reflects the fact that if we consider the pair $(A_n, A_n)$ as an element of the affine 'ax + b' group, then $Y_n$ is just the result of the action of this element on $Y_{n-1}$. In general, $Y_n$ is the second coordinate of a left random walk on the 'ax + b' group. The study of the process $\{Y_n\}_{n\ge0}$ (usually in a more general setting, with a random pair $(A,B)$ instead of the vector $(A,A)$) has a long history going back to Kesten [17], Grincevicius [15], Vervaat [26] and others. We refer the reader to the recent monographs [3,16] containing a comprehensive bibliography.
The process $\{\tilde Y_n\}_{n\ge0}$ can also be represented in terms of the affine group; a simple calculation leads to the corresponding formula. Thus $\{\tilde Y_n\}_{n\ge0}$ is given as the action of the random elements $(A_j, A_j)$, but composed in reversed order. This explains why $\{\tilde Y_n\}_{n\ge0}$ is called the backward process (in contrast to $\{Y_n\}_{n\ge0}$, which is sometimes referred to as the forward process). Apart from the affine group, $\{\tilde Y_n\}_{n\ge0}$ also has an interpretation in terms of financial mathematics, and for that reason it is very often called the perpetuity sequence.
Formulas (2.5) and (2.6) justify that, for fixed $n$, the random variables $Y_n$ and $\tilde Y_n$ have the same distribution. It follows from the Cauchy ratio test that if $\mathbb{E}\log A < 0$, then $\tilde Y_n$ converges a.s. to a limit $Y_\infty$. Moreover, $\mathbb{E}\tilde Y_n^\beta \to \mathbb{E} Y_\infty^\beta$ for any $\beta < \alpha$; for details see Section 2.3 of [3]. Of course this entails convergence in distribution of $Y_n$ to $Y_\infty$.
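The following minimal numerical sketch (not from the paper) illustrates the two recursions; the initial value $Y_0 = 0$ and the log-normal law for $A$ with $\mathbb{E}[\log A] < 0$ are illustrative assumptions. For each fixed $n$ the two constructions have the same law, while the backward (perpetuity) partial sums additionally converge almost surely.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_A(size):
    # Illustrative choice: A = exp(N(-0.5, 0.5^2)), so E[log A] = -0.5 < 0.
    return np.exp(rng.normal(-0.5, 0.5, size))

def forward(n):
    """Forward recursion Y_k = A_k * Y_{k-1} + A_k, started (by assumption) from Y_0 = 0."""
    y = 0.0
    for a in draw_A(n):
        y = a * y + a
    return y

def backward(n):
    """Backward (perpetuity) partial sums: A_1 + A_1*A_2 + ... + A_1*...*A_n."""
    return np.cumprod(draw_A(n)).sum()

fwd = np.array([forward(200) for _ in range(2000)])
bwd = np.array([backward(200) for _ in range(2000)])
print(np.median(fwd), np.median(bwd))  # same law for fixed n, so the medians should be close
```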
The celebrated result by Kesten [17] (see also Goldie [13]) states that $Y_\infty$ has a heavy tail.
Lemma 2.2. If the hypotheses of Theorem 1.2 are satisfied, then
This result was the main ingredient in [19]. For our purposes, we need to enter deeper into the structure of both processes. Namely, we need to understand not only the probability that the perpetuity exceeds large values, but also when this is most likely to happen. This problem was studied in [2,5].
3. The approach
Before proceeding to the proof, we would like to give a reader-friendly discussion of our approach. We will state some Lemmas below and, if we do not use them in the sequel, we refrain from presenting the proofs in order to keep this section as brief as possible. Define the stopping time $\nu = \min\{n \ge 1 : Z_n = 0\}$. After time $\nu$ the process regenerates, that is $\{Z_{\nu+n}\}_{n\ge0} \overset{d}{=} \{Z_n\}_{n\ge0}$. Due to Kesten et al. [19], it is known that the process regenerates exponentially fast.
Define the first passage time of Z viz.
The tail asymptotic of total population size, given in the next Lemma, was proved by Kesten et al. [19] in the case α < 2. The result can be easily extended to cover α ≥ 2. We provide a sketch of the argument in the next Section.
where $C_3(\alpha)$ is given as the finite limit of the conditional expectation appearing there. One way to approach $\{Z_n\}_{n\ge0}$ is via the renewal times $\nu_0 = 0$, $\nu_1 = \nu$, and so on. Let $N(n) = \#\{k \mid \nu_k < n\}$. One has a natural way to decompose $W_n$. By an appeal to Lemma 3.2 we see that the first term on the right-hand side is a sum of iid terms with $\alpha$-regularly varying tails. Whence, one can expect that $W_n$ exhibits the same tail behaviour. This heuristic argument gives the correct order, as verified by Theorem 2.1. However, due to the fluctuations of the $\nu_i$'s, a rigorous argument is more complicated than expected. For this reason, we will proceed in a slightly different fashion.
Large deviations of $W_n$ are caused by deviations of the environment, whence we need a good understanding of the latter. We will start with deviations of the multiplicative random walk $\{\Pi_n\}_{n\ge0}$. Here the answer is given by the Bahadur-Rao theorem [9]: for some constant $c_\rho$, where $\mathbb{E}\log(A) < \rho < \rho_\infty = \sup_{0<s<\alpha_\infty} \Lambda'(s)$. Moreover, the convergence is almost uniform in $\rho$.
If we note that $\min_{\rho}\Lambda^*(\rho)/\rho = \Lambda^*(\rho_0)/\rho_0 = \alpha$, where $\rho_0 = \Lambda'(\alpha)$, the result above suggests that, for given $x$, the probability of the event $\{\Pi_n > x\}$ is the largest for $n$ close to $n_0$. Moreover, the probability that a large deviation happens outside some neighbourhood of $n_0$ is negligible. To be precise, let $m = (\log x)^{1/2+\delta}$ for small $\delta > 0$ and consider the following Lemma.
The first statement can be deduced from the arguments leading up to Lemma 3.9 in [6]. The second statement follows directly from Lemma 3.5 stated below.
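A short heuristic computation behind the choice of $n_0$ above can be sketched as follows (this is our own sketch under the standing assumptions, in particular Kesten's condition $\mathbb{E}A^\alpha = 1$, i.e. $\Lambda(\alpha) = 0$, and the Legendre duality $\Lambda^*(\Lambda'(s)) = s\Lambda'(s) - \Lambda(s)$; it is not quoted verbatim from the paper):
$$P(\Pi_n > x)\ \approx\ \exp\Big(-n\,\Lambda^*\Big(\tfrac{\log x}{n}\Big)\Big),\qquad \min_{n}\, n\,\Lambda^*\Big(\tfrac{\log x}{n}\Big)\;=\;\log x\cdot\min_{\rho}\frac{\Lambda^*(\rho)}{\rho}\;=\;\alpha\log x,$$
with the minimum attained at $\rho = \rho_0 = \Lambda'(\alpha)$, i.e. for $n$ near $n_0 = \log x/\rho_0$, which gives $P(\Pi_{n_0} > x) \approx x^{-\alpha}$.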
Deviations of {Π n } n≥0 and {Y n } n≥0 are closely related. The former is most likely to deviate at n ≈ n 0 and so is the latter. More precisely, as proven in Section 4 of [2] a large deviation on Y n is most likely to happen for n in some neighbourhood of n 0 . Lemma 3.5. Let n 1 = n 0 − m and n 2 = n 0 + m for m = (log x) 1/2+δ and any small δ > 0. Then Since the deviations of { Z k } k≥0 are mostly caused by the environment, one expects an analogue of Lemma 3.5 for the total population size of a branching process in random environment. This is in fact the case as we have proven in Lemma 5.3 in [6]. The following Lemma is a direct consequence of Lemmas 4.3 and 4.4 given in the next section. Lemma 3.6. Let n 1 = n 0 − m and n 2 = n 0 + m for m = (log x) 1/2+δ and any small δ > 0. Then As a consequence, the significant part of Z k k,∞ , the total progeny of the population initiated by the kth immigrant conditioned on { Z k k,∞ > x}, is Z k n 1 +k,n 2 +k . Whence, the dominant part of W n is expected to be n k=1 Z k n 1 +k,n 2 +k .
The key feature that we will exploit is that for n 2 < |i − j|, Z i n 1 +i,n 2 +i and Z j n 1 +j,n 2 +j are independent with respect to the annealed probability P. The strategy is to group Z i n 1 +i,n 2 +i 's into blocks of length n 1 , Z k n 1 +k,n 2 +k .
We will benefit from the fact that {W k } 1≤k≤p+1 forms a two-dependent sequence, i.e. for any 1 ≤ i ≤ p − 1, {W k } 1≤k≤i and {W k } i+3≤k≤p+1 are independent. Furthermore, {W k } 1≤k≤p have the same distribution. With this set-up, after the investigation of the asymptotic behaviours of W 1 and the random vector (W i , W i±1 ) we will be able to prove Theorem 2.1.
Preliminaries
One of the reasons $\{Z_k\}_{k\ge0}$ has the same asymptotic behaviour as $\{Y_k\}_{k\ge0}$ is that, in some regimes, one can successfully approximate one by the other. Throughout the article we will benefit from this phenomenon via the next two Lemmas, the first of which was proved in [6] as Proposition 3.1 and Corollary 3.2.
Moreover, if α > 1, then for some γ < 1 and a positive, finite constant C.
Using this Lemma, we can provide a sketch of the proof of Lemma 3.2.
Sketch of the proof of Lemma 3.2 for α > 2. The argument goes along the exact same lines as the one presented in [19] with the only difference that for α ≥ 2 one needs to refer to Lemma 4.1 whenever a bound for E Z 1,n − A n−1 Z 1,n−1 α is needed.
Lemma 4.2. For any k < n we have This constitutes the desired formula.
The next two Lemmas improve on the statement of Lemma 3.6.
Proof of Theorem 2.1
The main idea is to decompose $W_n$ into three terms, $W_n = W^0_n + W^{\downarrow}_n + W^{\uparrow}_n$, corresponding to the ranges where it is most likely, too early, and too late to deviate, respectively. As we will see below, $W^0_n$ determines the asymptotics, while the other sums are negligible and do not contribute to our final result. Denote $d^0_n = \mathbb{E}W^0_n$ if $\alpha > 1$ and $d^0_n = 0$ otherwise. Define $d^{\uparrow}_n$ and $d^{\downarrow}_n$ in the same fashion.
Proposition 5.1. Under the assumptions and notation of Theorem 2.1, for C 1 (α) = C 3 (α)/Eν one has The above Proposition provides crucial estimates of large deviations of W n . Its statement is an analogue of Proposition 3.9 in [4]. We will prove it in Section 7. Below we clarify how the above statement implies the main result.
6. Some properties of $W^0_n$
In this Section we will present two results essential in the proof of Proposition 5.1. Having in mind the remark concerning the dependence structure of $\{W_k\}_{1\le k\le p+1}$, we will begin with an investigation of the asymptotic behaviour of $W_1$, followed by a discussion of the behaviour of $(W_1, W_2, W_3)$.
6.1. Behaviour of W 1 . Our aim is to establish the following statement.
We will achieve that using next two Lemmas. Denote Lemma 6.2. Suppose that the assumptions of Theorem 2.1 are in force. We have Proof. We will use a very similar argument as the one presented in the proof of Lemma 3 in [19]. Note that Z j ∞ is independent (with respect to the annealed probability P) of the event {ν ≥ j} since the former depends on ω j , ω j+1 , . . . while the latter depends on ω 0 , . . . , ω j−1 and Z 1 , . . . Z j−1 . We can write The second inequality is a consequence of Lemma 3.2 and the fact that Z 1 1,∞ ≤ ν−1 k=0 Z k . Lemma 6.3.
Proof. We can infer the statement of the Lemma by invoking Lemmas 3.2, 6.2 and 4.4.
Proof of Proposition 6.1. We have, by the merit of Lemma 4.4, Observe that for k chosen as in the last event since ν k−1 is an extinction time smaller than n 1 , and whence Z j j,ν k−1 = Z j j+n 1 ,∞ = 0 for j < ν k−1 . Moreover such a k must be unique. Denote by V the random set of extinction times, i.e. V = {ν k } k≥0 . As a consequence of these remarks, we get Given i, the events {i = ν k−1 ∈ V} and { n 1 −1 j=i Z j j+n 1 ,∞ > x, ν k − i ≥ n 1 } are independent. Therefore, applying consecutively Lemmas 6.2, 4.3, 6.3 and finally the (weak) renewal theorem, we have This completes the proof.
6.2. Asymptotic behaviour of $(W_i, W_{i\pm1})$. Recall that the $W_i$'s depend on $x$ through their definition.
Proposition 6.4. One can find a constant C, such that for any i, j such that |i − j| ≤ 2, any x > 0 and any a > 0 Proof. We will present a proof for i = 1 and j = 2. The case i = 1 and j = 3 can be dealt in a similar fashion. We will proceed in the following fashion. Note that In the first step we will prove that After that it will become evident that for our purposes it will be sufficient to estimate (in step 2) Step 1. To prove (6.1), applying Lemma 4.2 we estimate i,∞ are independent, applying Lemma 2.2 and the second part of Lemma 4.1, we have for some γ ∈ (0, 1) If on the other hand α < 1, we need to proceed in a slightly different way and borrow some arguments from Kesten at al. [19]. Namely, applying the Jensen inequality, we estimate Note that with respect to P ω , Z j,i − A i−1 Z j,i−1 is a sum of Z j,i−1 independent zero mean random variables distributed as ξ i−1 Finally, invoke Lemma 4.1 and take θ ∈ (α 1 ∨ α 2 , α), From here, we can apply the same arguments with γ replaced by λ(θ) < 1. Applying the first part of Lemma 4.1 we conclude, as above, inequality (6.1).
Step 2. We will start with a bound for the moments of $Z_k$ of order $\beta < \alpha$, i.e. we intend to prove the following. For $\alpha \le 1$ we just apply Lemma 4.1 and write the corresponding estimate. If $\alpha > 1$, then $1 = \lambda(\alpha) > \lambda(\beta) > \lambda(1)$ uniformly with respect to $k$. By virtue of the Minkowski inequality we obtain the desired bound. Finally, for any given $\varepsilon$ take $\beta = \beta(\varepsilon) < \alpha$ close enough to $\alpha$.
Proof of Proposition 5.1
The arguments used in the proof are similar to those in the proof of Proposition 3.9 in [4]. However, for the reader's convenience, we present here the main steps of the proof, focusing on the arguments leading to the precise asymptotic results. We present the proofs for $\alpha \in (1,2]$; for the other values of $\alpha$ the same scheme works, with only slight changes (see [4] for details). Proof of Proposition 5.1, formula (5.1). The proof strongly relies on the observation that the sum $\sum_{j=1}^{p}(W_j - \mathbb{E}W_j)$ is large when exactly one of the terms reaches values close to $x$, whereas the contribution of all other factors is negligible. Below we first describe the dominant event and then justify that its complement is of smaller order. Define $y = x/(\log n)^{2\xi}$ and $z = x/(\log n)^{\xi}$ for $\xi$ such that $\xi < \frac{1}{4\alpha}$ and $2 + 4\xi < M$.
Step 1. We prove that for every $\varepsilon > 0$ there is $N$ such that, uniformly for all $n > N$ and $x \in \Lambda_n$, the inequality (7.1) holds.
Obviously it is sufficient to prove (7.2) for each fixed $1 \le k \le p$.
Denote the probability above by $V_k$. We begin with the upper estimates. To begin, note that one has $\mathbb{E}W_k \le n_1\lambda(1)/(1-\lambda(1))$. Indeed, since the mean of the reproduction law is $\lambda(1)$, the claim follows, and thus Proposition 6.1 yields the required upper bound. The lower estimates are more tedious. Firstly, define the sum of all $W_j$'s that are independent of $W_k$; it is then itself independent of $W_k$. Proposition 6.1 provides us with the lower bound for the first term. Assuming we can justify that the second term is negligible, i.e. that (7.4) below holds,
we obtain the desired lower estimate. To prove (7.4) we need to bound two factors separately and establish the bounds (7.6) and (7.7) below. To estimate I, we apply Propositions 6.1 and 6.4 with $\sigma > 0$ sufficiently small and $a = (\log n)^{-2\xi}$, and use the independence of $W_i$ and $W_k$ for $|i - k| > 2$. Now it is sufficient to justify that the expression in the brackets tends to zero, but this follows directly from our assumptions on $\xi$ and the definition of the domain $\Lambda_n$.
To bound II we first use the independence of $W_k$ and the remaining sum and write the corresponding estimate. In view of Proposition 6.1 it is sufficient to prove (7.7). For this purpose we need the Prokhorov inequality (see Petrov [21], p. 77): let $(X_n)$ be a sequence of independent random variables and denote their partial sums by $R_n = X_1 + \cdots + X_n$; write $B_n = \operatorname{var}(R_n)$; assume that the $X_n$'s are centered and $|X_n| \le y$ for all $n \ge 1$ and some $y > 0$. The Prokhorov inequality requires the random variables to be bounded and independent. To reduce our problem to this setting we use the 2-dependence of the sequence $\{W_i\}_{1\le i\le p+1}$ and decompose the sum into a sum of three blocks, each consisting of i.i.d. random variables with $W_j \le y$.
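For the reader's convenience, we recall this bound in the form we believe is intended (quoted from memory; the exact formulation should be checked against Petrov [21]): if the $X_i$ are independent, centered and bounded by $y$, then for $t > 0$
$$P(R_n \ge t)\ \le\ \exp\Big(-\frac{t}{2y}\,\operatorname{arcsinh}\frac{t\,y}{2B_n}\Big).$$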
Next we reduce the problem to bounded random variables by introducing the truncations We prove that the remaining part, that is W j − W y j is negligible. Applying twice the Minkowski inequality, we estimate the α norm of W j with the help of Lemma 4.1 Therefore, by the Hölder inequality where the last inequality follows for our assumptions on ξ and Λ n . To see that, consider two possibilities, first of which is x > n. Then, if n is large enough x > log(x) M n 1/α and as a consequence x −α ≤ log(x) −αM n −1 and so due to constraints imposed on ξ. In the second case, i.e. x < n we have (log x) We use the Prokhorov inequality (7.9) with X i = W y i − EW y i B p = pvarW y i ≤ py 2−α EW α 1 ≤ Cpy 2−α n α 1 m α and considering two possibilities x < n and x ≥ n in combination with the fact that x ∈ Λ n we obtain This completes the proof of (7.7), which together with (7.6) entails (7.4). Combining (7.3) with (7.5) we obtain (7.2) and hence (7.1).
Step 2. Now we consider the remaining cases, not treated in the first step, which are of smaller order. We begin with the event on which all $W_i$, except $W_k$, are small but, despite this, their sum is large. That is, we intend to show
$$P\Big(U \cap \Big\{W_k > y \text{ for some } k,\ W_i \le y \text{ for } i \ne k,\ \sum_{j\ne k}(W_j - \mathbb{E}W_j) > z\Big\}\Big) = o(nx^{-\alpha}). \quad (7.10)$$
As previously, it is sufficient to prove, for fixed $k$,
$$P\Big(U \cap \Big\{W_k > y,\ W_i \le y \text{ for } i \ne k,\ \sum_{j\ne k}(W_j - \mathbb{E}W_j) > z\Big\}\Big) = o(n_1 x^{-\alpha}). \quad (7.11)$$
We estimate this probability by the product of $P(W_k > y)$ and the probability that the remaining sum deviates from its mean by more than $z - 8y$ while $W_i \le y$ for $i \ne k$, and then we proceed exactly as in the first step, that is, we apply Proposition 6.1 and, to bound the second term, the Prokhorov inequality (7.9). We omit the details.
Step 3. Next we consider the event on which all the $W_j$'s are smaller than $y$; then again the Prokhorov inequality (7.9) yields
$$P\big(U \cap \{W_i \le y \text{ for all } i\}\big) = o(nx^{-\alpha}). \quad (7.12)$$
Step 4. Finally, when at least two $W_j$'s are larger than $y$, the same arguments as in the proof of (7.6) entail
$$P\big(U \cap \{W_i > y,\ W_j > y \text{ for some } i \ne j\}\big) = o(nx^{-\alpha}). \quad (7.13)$$
We refer the reader to the proof of Proposition 3.9 in [4] for more details.
Proof of Proposition 5.1, formula (5.2). We proceed as in the proof of formula (5.1). Recall Then W ↓ k are identically distributed and one dependent, i.e. if |i − j| > 1, then W ↓ i and W ↓ j are independent. We have P W ↓ n − EW ↓ n > x ≤ P W ↓ k > y for some k + P W ↓ n − EW ↓ n > x and W ↓ k ≤ y for all k To bound the first term we just use Lemma 4.3 P W ↓ k > y for some k ≤ p+1 k=1 P W ↓ 1 > y ≤ pe −C(log y) δ y −α ≤ nx −α · n −1 1 (log x) 2ξ e −C(log x) δ = o(nx −α ).
And for the second term we use the Prokhorov inequality (7.9).
Proof of Proposition 5.1, formula (5.3). We would like to repeat the procedure from previous proofs of (5.1) and (5.2). However this time we need to proceed more carefully, because all the factors in the sum defining W ↑ n are dependent and we cannot use directly the block decomposition into sum of i.i.d. terms.
To overcome this difficulty we cut the factors $Z^j_{j+n_2,\infty}$ at some place. Let $n_3 = D\log x$, where $D$ is a large constant satisfying $D > \frac{\alpha-1}{|\log \mathbb{E}A|}$. We are going to prove that
$$P\Big(\sum_{j=1}^{n-n_3} Z^j_{j+n_3+1,n} - z_n > x\Big) \le c\,n\,x^{-\alpha-\delta} \quad (7.14)$$
for some $\delta > 0$, where $z_n = \mathbb{E}\sum_{j=1}^{n-n_3} Z^j_{j+n_3+1,n}$.
We have Then W ↑ k have the same distribution and W ↑ i , W ↑ j are independent if |i − j| > ρD + 1. We can repeat previous arguments. | 2018-01-04T20:33:12.000Z | 2017-10-03T00:00:00.000 | {
"year": 2017,
"sha1": "40438806b727c2af1fcfe65ea1557e8d920ddb81",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1214/18-ejp239",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "40438806b727c2af1fcfe65ea1557e8d920ddb81",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
236154901 | pes2o/s2orc | v3-fos-license | How Do Pedophiles Tweet? Investigating the Writing Styles and Online Personas of Child Cybersex Traffickers in the Philippines
One of the most important humanitarian responsibilities of every individual is to protect the future of our children. This entails not only the protection of physical welfare but also protection from ill events that can potentially affect the mental well-being of a child, such as sexual coercion and abuse, which in worst-case scenarios can result in lifelong trauma. In this study, we perform a preliminary investigation of how child sex peddlers spread illegal pornographic content and target minors for sexual activities on Twitter in the Philippines using Natural Language Processing techniques. The results of our study show frequently used and co-occurring words that traffickers use to spread content, as well as four main roles played by these entities that contribute to the proliferation of child pornography in the country.
I. INTRODUCTION
"Child abuse casts a shadow the length of a lifetime." -Herbert Wood Cybersex or computer sex is an activity where two or more people, anonymous in some cases, connect over the Internet to engage sexually gratifying performances [1]. Activities such as sharing, watching, downloading and trading explicit online content across websites and social media platforms such as Facebook, Twitter, and Instagram are all under the umbrella term of cybersex [2]. Behaviors exhibited in cybersex activities include solitary acts of self pleasure, consensual interactions, and to coercive and forceful activities which often considered as rape [3].
In its essence, cybersex allows the exploration of sexual urges and private fantasies while maintaining anonymity [4], as well as providing a safe space for physically separated partners to connect over the Web and continue to be sexually intimate [1]. However, from a moral and ethical point of view, the conduct of cybersex activities should only be between consenting and legal-aged partners. Non-consensual cybersex often targets extremely underprivileged women and minors, and the produced media are peddled, trafficked, and sold worldwide. Although most justice and intelligence agencies in countries around the world enforce strict laws on minors involved in cybersex activities, the problem still poses one of the major challenges for poor, developing areas in Southeast Asia, Africa, and South America, which were often labelled as hotspots of child sex tourism from 2014 to 2016 [5].
A. Proliferation of Child Pornography in Twitter
There are multiple environments where cybersex, both consensual and non-consensual, is often mediated and spread. Internet chat rooms and instant messaging applications are common grounds for these activities. However, in recent years, social media platforms such as Twitter have been used more and more by illegal cybersex peddlers and traffickers, since they offer anonymity under the guise of fake accounts [6], [7]. In addition, Twitter allows these accounts to share images and videos seamlessly, as well as the option to make accounts private. Pedophiles, or a group of people who are sexually attracted to children, use these features to maintain a close circle of similar-minded individuals and to stay hidden from the public eye. Although Twitter follows strict policies 1 for maintaining a safe environment by banning users for any type of abuse, child sexual exploitation, and sexual assault, accounts of pedophiles and illegal cybersex peddlers continue to surge in number [8].
In the Philippines, the Cybercrime Prevention Law was signed in 2012, aiming to reduce computer-related crimes including child pornography and other illegal cybersex activities. However, in recent years, the Cybercrime Law has done little to nothing to alleviate the proliferation of child pornography, as the country topped the latest survey by the United Nations Children's Fund on global sources of child sex abuse materials in 2018 [9]. According to the report, the proportion of internet addresses hosting child pornographic materials in the Philippines tripled in scale starting from 2017. Twitter has become one of the most used platforms in the Philippines, serving as a breeding ground and medium for pedophiles to spread child pornographic content. These individuals hide their identity using multiple fake accounts colloquially known as alter or alternate accounts. In the same manner, the term Alter Twitter has become popularly known in the country as a Twitter community of Filipino individuals using anonymous accounts to conduct, share, and exploit sexual content and activities [10].
Recognizing the need for further research efforts in mitigating the spread of child pornographic content, this paper investigates the general writing styles of pedophiles and cybersex traffickers on Twitter, and the roles that they often assume on the platform. We perform natural language processing techniques over a dataset composed of a year's worth of child pornographic tweets collected from the Twitter accounts of pimps, peddlers, and traffickers in the Philippines.
II. RELATED WORKS
A. Writing Styles in Twitter
The challenge of analyzing writing styles, such as authorship attribution, in social media platforms is one of the most interesting tasks in natural language processing. [11] defines writing style as a grammatical choice that writers make which adheres to norms and social identity. An individual's writing style is composed of the choice of words, sentence and paragraph structure, and symbols that are used to convey a message effectively [12].
Existing writing styles on the web vary on a large scale, since users are free to express themselves and there are no formal rules to follow. In addition, other elements of writing in social media platforms, such as the use of emoticons, add complexity to the task [13]. The use of social media platforms like Twitter allows researchers in various fields to perform deeper analyses of factors that can affect writing, such as gender [14], user personality [15], and mental illness [16]. The inclusion of these factors paved the way for more research efforts in understanding negative social media interactions, such as forms of abuse like racism and sexism [17] and bullying [18].
B. Themes in Twitter
Works on identifying salient and underlying themes conveyed in large volumes of social media data have also intrigued researchers in the field. In contrast to writing styles, which focus on how each tweet is constructed using elements such as hashtags, emoticons, and symbols, thematic analysis captures the representations of the texts by uncovering topics, commonly extracted using unsupervised machine learning algorithms to generate topic models [19], [20]. These topic models give an overview of important topics (or themes) and supporting topic words present in a document [21]. In the Philippine local setting, the works of [22], [23], and [24] all focused on the use of topic models to extract themes present in collected typhoon- and earthquake-related Twitter data, which can be used to improve the disaster risk reduction landscape and response of the country.
III. CHILD PORNOGRAPHY-RELATED TWEETS
For this study, we collected over 69,675 raw tweets related to child sex trafficking and peddling in Twitter from October 2019 to July 2020, over a year's worth of data. We used a bounding-box feature from the Twitter API to capture tweets only published within the area of the Philippines. In addition, we used hashtags such as #bagets (colloquial term for the word 'children') and #sarapngbagets (conveys sexual desire for children) which were reported to be commonly used by child sex traffickers as bookmarks or subject tags for their tweets [25]. After cleaning and removal of retweets and duplication, only 32,899 unique tweets were left for the analysis proper.
IV. WRITING STYLE ANALYSIS
For the writing style analysis, we use two methods: word cloud visualization, to get a bird's-eye view of the most frequent words present within the collected data, and trigram co-occurrence network mapping, to understand the series of word connections used to spread child pornographic content.
The word cloud visualization in Figure 2 showcases the most frequently used terms in the tweets, with each word sized according to its frequency. The words jakol (masturbation), tamod (semen), and boso (voyeurism) appear to be three of the most used words in the context of child pornography. In addition, the hashtag #alterph is often appended to tweets to signal that the account used for uploading content is an alter account, with the suffix ph indicating that the user is in the Philippines and prefers interaction with users from the same country. Action words such as chupa (fellatio) and salsal (masturbation of a man) are frequently used in context, as well as words used for targeting children such as bagets for hire (children for hire) and altergc (alter group chat), the latter indicating that there are also other platforms, not just Twitter, where videos and content are shared.
Fig. 3. Trigram network indicating terms that frequently co-occur in child pornographic tweets.
Figure 3 describes the chain-reaction-like structure of words that co-occur or are seen together in semantically similar tweets. From a corpus containing 2,498 unique words, only three subgraphs are formed, which signifies that the overall lexicon used by pedophiles and child traffickers is somewhat limited, in the sense that terms are often reused repetitively. From the figure, the first subgraph on the upper left contains only two connected words, ctto (credits to the owner) and trade. These two words describe user accounts that share tweets by giving unofficial credit to the source of the content, as well as the notion of exchanging resources by trading links to online repositories where videos are stored, as seen in Figure 1. The largest subgraph in the middle, on the other hand, contains terms forming sequences that solicit proliferation and attention, such as follow and rt (retweet), dm me (message me), and follow me. Lastly, the third subgraph on the lower left, with the word sequence open thread to, denotes that content is spread in the form of threads or series of posts. Overall, these subgraphs model how posts containing child pornographic content such as lewd photos and videos are structured. They also describe how users behind alter accounts sway other users to spread their malicious content by convincing them to use Twitter's interaction features, such as (a) retweets for sharing and (b) likes for increasing the exposure of the content to a wider audience. Aside from analyzing the overall stylistic writing patterns of potential pedophiles and child sex traffickers on Twitter, we want to understand the deeper roles played by these entities on the platform. To do this, we trained a short-text clustering model using the Gibbs Sampling Dirichlet Multinomial Mixture (GSDMM) [26] on the preprocessed tweet corpus. The GSDMM model aggregates words into clusters or groups that are similar to each other in terms of usage and meaning. As seen in Table I, we obtained four main homogeneous clusters symbolizing four different online personas or roles played by users behind alter accounts tied to child pornography.
From the table, each persona has its own unique set of thematic words forming an underlying vocabulary used for specific purposes. First, we have the Propagator which is mainly responsible for spreading child pornographic content in the platform. Keywords often used by this type of user are similar to the ones highlighted in Figure 3 such as rt or retweet, follow, like, post, and comment. Next, the Peddler which is responsible for the hidden market or trading, buying, and selling of child pornographic content. Keywords often used by peddlers are dm or direct message, price, php, avail for their business transactions and looking and willing for enticing possible victims who are willing to trade sexual content such as photos and videos for money. Third, the Social acts as someone who encourages users to meet physically or digitally for activities such as jakol or masturbation. This persona often uses descriptive words such as pogi or handsome as well as semi-coercive words such as tara or let's go and sino pwede jan? or who is available? to convince potential users having the same interests. Lastly, we have the Voyeur which often targets minors for their voyeuristic content. The main keyword used by this persona is boso or the act of spying undressed or naked people for sexual pleasure and often used in tweets with targets such as bagets or children, baby, and boys. This persona also frequently makes use of the word cr or comfort room where hidden camera are often installed.
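As an illustration of the kind of co-occurrence network mapping described above, the following is a minimal, generic sketch (not the authors' code; the simple whitespace tokenization, the min_count threshold, and the placeholder cleaned_tweets variable are all assumptions) that builds a weighted word-adjacency graph from consecutive word pairs inside trigrams and extracts its connected subgraphs.

```python
from collections import Counter
from itertools import islice
import networkx as nx

def trigrams(tokens):
    return zip(tokens, islice(tokens, 1, None), islice(tokens, 2, None))

def cooccurrence_network(documents, min_count=5):
    """Build a word co-occurrence graph from consecutive word pairs inside trigrams."""
    edge_counts = Counter()
    for doc in documents:
        tokens = doc.lower().split()            # assumes text is already cleaned
        for w1, w2, w3 in trigrams(tokens):
            edge_counts[(w1, w2)] += 1
            edge_counts[(w2, w3)] += 1
    graph = nx.Graph()
    for (a, b), c in edge_counts.items():
        if c >= min_count and a != b:
            graph.add_edge(a, b, weight=c)
    return graph

# Usage on a generic, pre-cleaned corpus (placeholder variable):
# G = cooccurrence_network(cleaned_tweets)
# subgraphs = list(nx.connected_components(G))
```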
VI. ETHICAL CONSIDERATIONS
This study makes use of extremely sensitive data involving sexual words that are often used to target minors. However, the proponents felt compelled to carry out this type of study, as something has to be done in order to understand and be able to alleviate the problem of the child pornography landscape in the Philippines. In addition, for the safety of minors, no personal information is revealed in any part of this document.
VII. CONCLUSION
In the Philippines, child pornography and other illegal cybersex activities are widespread, especially on social platforms like Twitter where users can hide behind anonymous accounts. In order to gain further understanding and deeper insights into the reasons behind the rapid proliferation of child pornographic content online, we used three types of analysis, namely word cloud visualization, trigram co-occurrence analysis, and persona analysis. The results show basic terms often used by child traffickers and peddlers that frequently co-occur with each other. In addition, these entities can be classified into four possible roles or online personas based on their vocabulary use. The continuation of this study involves partnership with local government units concerned with cybercrime prevention and child protection to track down active child pornography peddlers and traffickers.
"year": 2021,
"sha1": "d50e4ca64e0c0e4400f2c5f1a5d2b25d33ab3cae",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d50e4ca64e0c0e4400f2c5f1a5d2b25d33ab3cae",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
211524843 | pes2o/s2orc | v3-fos-license | ABCA7—A Member of the ABC Transporter Family in Healthy and Ailing Brain
Identification of genetic markers of a human disease, which is generally sporadic, may become an essential tool for the investigation of its molecular mechanisms. The role of ABCA7 in Alzheimer's disease (AD) was discovered less than ten years ago, when meta-analyses provided evidence that rs3764650 is a new AD susceptibility locus. Recent research advances at this locus and new evidence regarding the contribution of ABCA7 to AD pathogenesis have brought a new understanding of the underlying mechanisms of this disorder. An interesting, up-to-date review article, "ABCA7 and Pathogenic Pathways of Alzheimer's Disease" by Aikawa et al. (2018), outlines the role of ABCA7 in AD and summarizes new findings in this exciting area. ABC transporters, or ATP-binding cassette transporters, are a superfamily of proteins belonging to a cell transport system. Currently, members of the family are the focus of attention because of their central role in drug pharmacokinetics. Two recent findings are the reason why much attention is drawn to ABCA7. First is the biochemical data showing a role of ABCA7 in amyloid pathology. Second is the genetic data identifying ABCA7 gene variants as loci responsible for late-onset AD. These results point to ABCA7 as a significant new contributor to the pathogenesis of AD.
Alzheimer's Disease (AD) is a Devastating Neurodegenerative Disease
Currently, an estimated 5.8 million Americans have AD, and this number is growing quickly. Two hundred thousand individuals under age 65 have younger-onset AD [1]. A key feature of AD is a buildup of extracellular amyloid-β (Aβ) and corresponding plaques within the brain. This accumulation occurs because of the altered balance between Aβ production and clearance. Another important characteristic of AD is the accumulation of hyperphosphorylated Tau protein. This buildup leads to an increase in neurofibrillary tangles (NFTs). Accumulation of NFTs represented by bundles of filamentous protein occurs most often in the cytoplasm of neurons. Elevated levels of NFTs, neurotoxic Aβ peptides, and loss of neurons and synapses, result in brain atrophy. These are the main factors in the progression of AD, which can be classified as a conformational disease [2] because of the roles of naturally unfolded prone to aggregation proteins in its development [3,4]. Currently, basic researchers and pharmaceutical companies have put an enormous amount of effort and funds toward finding novel targets for pharmacological interventions for AD.
Contribution of Genetic Factors to AD Pathogenesis
The majority of AD cases are late-onset and sporadic. However, investigation of genetic forms of AD and identification of susceptibility loci for late-onset Alzheimer's disease (LOAD) is an important approach raising hope for finding new mechanisms and new targets in AD pathogenesis. Although genetic forms of AD are relatively rare, their investigation helps discover detailed mechanisms of this disorder [5]. Researchers use various approaches to detect the genetic variants contributing to disease traits with complex inheritance. The ε4 allele of apolipoprotein E (ApoE), remains the most significant sequence variant affecting the risk of late-onset AD [6]. Genetic studies revealed other major risk factors that markedly affect the risk of developing AD. The majority of them are rare variants in the following genes: Amyloid precursor protein (APP), presenilin 1 (PSEN1), and presenilin 2 (PSEN2) [5]. Recently, a rare susceptibility variant in TREM2 was discovered [7,8]. Lambert et al. (2013) [9] performed a meta-analysis of GWAS in European ancestry and discovered a novel susceptibility variant rs4147929 in an intron of the ABCA7 gene. However, the existing genetic data could not explain all phenotypic forms of AD, and a large portion of the genetic risk for AD remains unexplained.
ABCA7 and AD
The recent biochemical and genetic data point to ABCA7 (ATP-binding cassette sub-family A member 7) as a new contributor to AD pathogenesis [10]. Common variants of this gene have been associated with the risk for LOAD [11,12]. Recent evidence describing the risk conferred by ABCA7 gene variants for AD development and the role of ABCA7 in AD pathogenesis is described in the Aikawa et al. review [10]. Genetic data pointing to a contribution of the ABCA7 gene to AD began to appear about ten years ago. An important finding indicating a contributing role of ABCA7 in AD development was the identification of the common single nucleotide polymorphism (SNP) variant rs3764650 in an ABCA7 intron. This marker is a susceptibility locus for LOAD [11] in Caucasian cohorts. Next, a missense variant associated with the risk for LOAD due to the G1527A substitution in ABCA7 was described [13]. Analysis of ABCA7 gene variants in different populations by exome sequencing, whole-genome sequencing, and targeted resequencing confirmed the conclusion about the role of ABCA7 in AD. These studies showed that some of the low-frequency variants (1%-5%) and rare variants (less than 1%) have significant associations with the risk for AD [14]. Based on the data about the loss of function of ABCA7, three most probable mechanisms of its involvement in AD pathology have been proposed. The first is the disturbance of microglial Aβ clearance, the second is accelerated APP processing, and the third is interference with the elimination of various brain debris during AD progression.
ABCA7 Structure and Functions
Members of the family are built of multiple subunits, and one or two of them are usually associated with membrane ATPases. ABCA7 is a large protein containing 2146 amino acids with a molecular weight of 220 kDa. Its closest homolog is ABCA1, with 54% sequence identity. These two proteins also share some functional similarity [10], but their transcription is regulated differently. ABCA7 is a ubiquitous protein with high expression in microglia. It is expressed in a tissue-specific manner as two variants: Type I cDNA is full-length, whereas Type II is a shorter splicing variant. ABCA7 maintains intracellular lipid metabolism and regulates cellular homeostasis. It is involved in the efflux of cellular phospholipids, cholesterol, phosphatidylcholine, and other lipids. The family includes about fifty members subdivided into seven subfamilies. The functions of the members of this family are to perform and control the efflux of intracellular cholesterol and phospholipids. ABCA7, in addition to its function as a mediator of lipid metabolism, also participates in the generation of immune responses and the regulation of microglial responses to acute inflammatory challenges.
A high level of ABCA7 expression in cell culture increases the amounts of intracellular/cell-surface ceramide and intracellular phosphatidylserine, causing cell cycle arrest. The variety of processes in which ABCA7 is involved is remarkable. In addition to being a modulator of several biochemical pathways, ABCA7 regulates phagocytic activity in microglia. This activity may play an important role in AD pathogenesis. In particular, ABCA7 is involved in phagocytosis of Aβ aggregates and, therefore, participates in Aβ elimination in the brain. Furthermore, ABCA7 is an essential player in the regulation of APP processing. Several mechanisms point to this role of ABCA7, the most important of which is via an altered lipid profile. Indirect evidence supporting this point of view suggests that elevated levels of cholesterol and phospholipids regulate APP processing. Due to the involvement of ABCA7 in many processes critically involved in AD pathogenesis, further investigation is necessary to uncover their relationship in more detail.
Funding: Some of the work by A.S. was conducted at the Kansas City VA Medical Center, Kansas City, MO, USA, with support from the VA Merit Review, grant number 1I01BX000361 and the Glaucoma Foundation, grant number QB42308. A.A.S. is partially supported by YALE ENT Research grant number YD000220. | 2020-02-27T09:30:29.626Z | 2020-02-01T00:00:00.000 | {
"year": 2020,
"sha1": "9386f2a3dc2ba46a2fa2e033ad02f98d9db9d006",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3425/10/2/121/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "838762f5d95c07a2a4c8203fa4d5ccbe54032083",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
234569081 | pes2o/s2orc | v3-fos-license | Decoration of Graphene Oxide with Cobalt(II) Coordinated Silica and its Catalytic Activity for the Synthesis of Functionalized Indenopyrazolones
We synthesized a functionalized f-SiO2@GO@Co catalyst by decorating the graphene oxide surface with SiO2 spheres with the help of an ethylenediamine ligand and chelation with CoCl2·6H2O, in order to increase the catalytic activity and produce a heterogeneous catalyst. The heterogeneous catalyst was characterized by FT-IR, XRD, SEM, Raman spectra, and TGA. We assessed the activity of the catalyst in the synthesis of indenopyrazolones, and the results demonstrated high activity for the catalyst. The ability of the catalyst to increase the yield and reduce the reaction time, as well as its high catalytic activity and recyclability, are prominent advantages of the catalyst.
Introduction
In the last few years, the application of carbon-based materials as catalysts has received considerable attention [1].
Graphene is a form of carbon consisting of a single layer whose crystalline structure is two-dimensional. Graphene was first discovered in 2004 by Geim and Novoselov [2]. Graphene oxide is an oxidized form of graphene with a two-dimensional (2D) honeycomb structure. Monolayer graphene oxide, which is a single layer of graphite, bears various oxygen-containing groups such as hydroxyl groups, epoxides, and carboxyl groups, introduced via oxidation of graphite crystals. The presence of oxygen functional groups on the surface of graphene oxide increases chemical interactions, so graphene oxide can participate as a desirable support or catalyst in chemical reactions [3][4][5].
Functionalization of graphene oxide is beneficial for biomedical, electrochemical, and chemical applications [1]. It can be carried out by functionalizing the oxygen-containing groups on the basal plane of graphene oxide with different electroactive species [6,7]. Different methods have been developed for the synthesis of graphene oxide, but the most common is the Hummers method, in which graphite is oxidized using KMnO4 under acidic conditions [8]. Fillers like SiO2 are often used in corrosion-resistant coatings.
Thus, the proper dispersion of SiO2 in a graphene oxide/epoxy coating can improve corrosion resistance [9,10]. Also, metal-based catalysts are used to improve the activity and stability of catalysts. Due to the high cost of noble metals, cheaper metals such as Fe, Ni, and Co are used instead for this purpose [11].
Nitrogen heterocyclic compounds have attracted researchers' interest because they have many applications in biological and other sciences [12]. Pyrazoles are a well-known series of five-membered nitrogen heterocycles containing two adjacent nitrogen atoms [13]. Compounds containing the pyrazole ring and its derivatives often exhibit various physiological and pharmacological properties, such as anticonvulsant [14], antioxidant [15], anticancer [16], and fungicidal [17] activities. Moreover, some compounds with a pyrazole ring are used as ligands in transition-metal-catalyzed cross-coupling reactions [18,19].
In the current work, we focus on the preparation, characterization, and application of an efficient heterogeneous catalyst based on cobalt(II) coordination on f-SiO2-functionalized graphene oxide (f-SiO2@GO@Co) (Scheme 1) for the synthesis of cis-3-aryl-3a,8b-dihydro-3a,8b-dihydroxy-1-phenylindeno[1,2-c]pyrazol-4(1H)-ones.
Synthesis of graphene oxide
Graphite powder (1 g) and sodium nitrate (0.5 g) were added to 25 ml of sulfuric acid and stirred for 10 minutes. Under magnetic stirring, potassium permanganate (3 g) was slowly added to the mixture. The mixture was then heated to 35 °C and stirred for a further 30 min. After that, 45 ml of deionized water was added to the mixture, the temperature was raised to 95 °C, and stirring was continued for 15 min. Next, 150 ml of deionized water and 10 ml of 30% hydrogen peroxide were added to the solution. The resulting solid phase was filtered and repeatedly washed with hydrochloric acid and deionized water several times. The obtained solid, graphite oxide, was dried at 60 °C for 12 h. The resulting solid was dispersed in deionized water by ultrasonication to produce graphene oxide. Finally, the solid was recovered by centrifugation and dried for 24 h at 60 °C. The final brown solid is graphene oxide.
Synthesis of spherical SiO 2 nanoparticles
A mixture of distilled water (20 ml) and ethanol (50 ml) was sonicated for 30 min. Then, 3 ml of TEOS was added dropwise within 5 min, followed by the addition of 0.1 mmol of PVP to the mixture under stirring. Thereafter, 0.1 ml of ethylenediamine was added dropwise to the mixture as a precipitating agent under ultrasonication. After 30 min, the SiO2 product was isolated by centrifugation and washed with ethanol and water three times. The final product was dried at 80 °C overnight.
Synthesis of SiO2@CPTES
3-Chloropropyltriethoxysilane (CPTES, 0.5 ml, 5 mmol) was added dropwise to a stirred suspension of SiO2 (1 g) in dry toluene (30 ml) and refluxed for 24 h. After completion of the reaction, the crude product was separated, washed three times with toluene, and dried at 120 °C in a vacuum oven for 8 h to obtain a white powder, SiO2@CPTES.
Synthesis of SiO 2 @ Ethylenediamine (f-SiO 2 )
In a 50 ml round-bottomed flask, ethylenediamine (0.3 g, 1 mmol) was added to a suspension of SiO2@CPTES (1 g) in absolute ethanol (30 ml) and heated under reflux for 24 h. The resulting solid was collected by filtration, washed successively with ethanol several times, and dried at 90 °C overnight.
Synthesis of f-SiO2@GO
Graphene oxide powder (0.04 g) was dispersed in 20 ml of deionized water by sonication, then SiO2@ethylenediamine (f-SiO2, 0.16 g) was added to the mixture and sonicated for 20 min. The solution was stirred at 85 °C in an oil bath for 12 h. Lastly, the resulting product was collected by centrifugation, washed with deionized water and ethanol three times, and then dried at 60 °C.
Synthesis of f-SiO 2 @GO@Co
As-prepared f-SiO2@GO (0.1 g) and 0.01 wt% CoCl2 were dispersed in absolute ethanol under ultrasound irradiation for 5 min and then reacted for 24 h at room temperature. The final catalyst was collected and washed with ethanol and deionized water. The product was dried at room temperature for several hours to obtain the f-SiO2@GO@Co catalyst.
General procedure for the synthesis of cis-3-aryl-3a,8b-dihydro-3a,8b-dihydroxy-1-phenylindeno[1,2-c]pyrazol-4(1H)-ones
A mixture of the aldehyde (1 mmol), phenylhydrazine (1 mmol), and 15 mol% f-SiO2@GO@Co as a catalyst in ethanol (5 ml) was stirred at 60 °C until an intermediate was formed. Next, ninhydrin (1 mmol) was added to the reaction mixture, which was allowed to stir until the completion of the reaction (monitored by TLC). After that, the heterogeneous catalyst was separated, and the crude product was collected and washed with n-hexane and ethyl acetate to give the pure final product.
Results And Discussion
Characterization of the catalyst
FT-IR spectra for the catalyst preparation steps are reported in Fig. 1; a band corresponding to hydroxyl group vibrations is observed. In the SiO2@Cl and SiO2@ethylenediamine spectra, the bands at around 950 cm-1 are related to ethoxy moiety vibrations. Functionalized graphene oxide shows a characteristic peak at 1102 cm-1, which reveals that functionalized SiO2 was successfully grafted onto the graphene oxide.
XRD patterns of the catalyst at different steps are depicted in Fig. 2. The GO sheets show a characteristic peak around 12°, which proves the synthesis of graphene oxide. Comparing graphene oxide and functionalized graphene oxide, the new broad peak at 2θ = 25° is related to amorphous SiO2, which shows the surface functionalization of graphene oxide. The small peak at 2θ = 44° corresponds to cobalt; its presence in the XRD pattern of the final catalyst confirms the successful modification of the GO surface.
The SEM images of GO (a) and GO@f-SiO2@Co (b) are presented in Fig. 3. The SEM image of graphene oxide clearly shows the layered sheet structure of graphene oxide, and that of GO@f-SiO2@Co exhibits the surface modification of graphene oxide with functionalized silica nanoparticles.
The presence of Si, Co, C, and N in the EDS spectrum of the catalyst (Fig. 4) confirms the decoration of the graphene oxide surface with functionalized SiO2.
The Raman spectra of GO and GO@f-SiO2 are shown in Fig. 5. The characteristic peaks of GO at 1362 and 1595 cm-1 are attributed to the D and G bands, respectively. The spectrum of GO@f-SiO2 also shows these peaks, which confirms the presence of graphene oxide in the structure. In addition, after functionalization of GO, a slight increase in the ID/IG ratio is observed, indicating more transition from sp2 to sp3 due to the grafting of f-SiO2 on the graphene oxide.
According to the differential thermal analysis (DTA)/thermogravimetric analysis (TGA) of the final catalyst (Fig. 6), the primary stage of decomposition occurred at 220 °C and continued to 800 °C with an 18% weight loss under endothermic conditions according to the DTA curve, which is attributed to the decomposition of the organic functional groups on the graphene oxide surface.
Catalyst reusability
An important point for a proper catalyst is its recovery and recycling. The reusability of f-SiO2@GO@Co was investigated in the model reaction between phenylhydrazine, ninhydrin, and 2-nitrobenzaldehyde. To check the reusability, after completion of the reaction the catalyst was recovered by filtration, washed with ethanol to remove impurities, and then dried. The recovery of the catalyst was excellent, with an average yield of 96% over five subsequent uses. As shown in Fig. 7, the catalyst activity remained approximately the same for five cycles without considerable loss.
Analysis and characterization of synthesized compound
In order to optimize the reaction conditions, different parameters such as temperature, solvent, and catalyst loading were assessed in the model reaction between phenylhydrazine, ninhydrin, and 2-nitrobenzaldehyde. Firstly, the reaction was conducted in MeCN without a catalyst at room temperature, and no product was formed after 24 h. When the reaction was carried out in the presence of f-SiO2@GO@Co in MeCN at 60 °C, no significant yield was obtained after 24 h (50%). Also, the reaction was carried out in the presence of f-SiO2@GO in ethanol, and the product was obtained in 75% yield after 24 h (Table 1, entry 4). The reaction was then tested in ethanol at different temperatures in the presence of f-SiO2@GO@Co as a catalyst (Table 1). The results revealed that the yield of the product increased at 60 °C (Table 1, entry 7). According to the results shown in Table 1, the best performance of the catalyst was obtained with ethanol as the solvent. Furthermore, we investigated the effect of catalyst loading (Table 2), and an improvement in yield was found upon increasing the catalyst loading from 3 to 15 wt%. It should be mentioned that increasing the amount of catalyst further did not affect the yield. The performance of 15 wt% f-SiO2@GO@Co with ethanol as the efficient solvent and various aldehydes is given in Table 3.
To confirm the structures of the desired products (4a-k), we used FT-IR, 1H NMR, and 13C NMR. The IR spectrum of compound 4i exhibits a peak at 3461 cm-1 that is attributed to the stretching vibrations of the hydroxyl groups. The strong peak at around 1710 cm-1 indicates the presence of a carbonyl group. The 1H NMR spectrum shows singlet peaks at δ = 7.93 ppm and δ = 7.31 ppm due to the hydroxyl groups. The protons on the aromatic rings appear between δ = 8.37 and 7 ppm. In addition, the peak for the carbonyl group appears in the 13C NMR spectrum.
Conclusion
In summary, we have synthesized GO@f-SiO2@Co as a heterogeneous and recoverable catalyst, which was efficient for the synthesis of indenopyrazolone derivatives. The results showed that the catalyst, with its high catalytic activity, provided excellent yields in shorter reaction times under mild conditions.
Declarations
Ethics approval and consent to participate Not applicable
Consent for publication
The authors declare that the copyright belongs to the journal.
Availability of data and materials
"year": 2020,
"sha1": "ecd74cce6af19aa22b21f09b91fb1a373832bf0b",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-127252/v1.pdf?c=1631881995000",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "52291c27f0e89cdfcadd700f0ce00295e47f9374",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
5706970 | pes2o/s2orc | v3-fos-license | Preprint typeset in JHEP style- HYPER VERSION HUTP-06/A0036
Based on holographic arguments Tanaka and Emparan et al have claimed that large localized static black holes do not exist in the one-brane Randall-Sundrum model. If such black holes are time-dependent as they propose, there are potentially significant phenomenological and theoretical consequences. We revisit the issue, arguing that their reasoning does not take into account the strongly coupled nature of the holographic theory. We claim that static black holes with smooth metrics should indeed exist in these theories, and give a simple example. However, although the existence of such solutions is relevant to exact and numerical solution searches, such static solutions might be dynamically unstable, again leading to time dependence with phenomenological consequences. We explore a plausible instability, suggested by Tanaka, analogous to that of Gregory and Laflamme, but argue that there is no reliable reason at this point to assume it must exist.
Introduction and summary
In the Randall-Sundrum one-brane model (RS2), a five-dimensional warped spacetime with a single Minkowski brane and brane-localized matter, linear perturbations of the Minkowski brane and AdS bulk appear to a brane observer to be those of a four-dimensional gravity theory up to energies set by the AdS curvature scale. Hence this model provides a low-energy dimensional reduction for brane observers, even though the extra dimension is not compact. However, it is not clear that the RS2 model exactly reproduces four-dimensional gravity at the nonlinear level. In this regard, the study of black holes is very interesting. Tanaka and Emparan et al. [1,2,3] have argued using holography that large black holes localized on the brane might behave very differently from four-dimensional ones, decaying classically much faster than the conventional quantum Hawking evaporation rate. This would allow this five-dimensional theory to be distinguished from four-dimensional gravity at low energies, even though linear perturbations about a flat brane do not distinguish them until energies comparable to the AdS scale.
This possibility might seem puzzling from the perspective of a four-dimensional effective theory, which contains a four-dimensional graviton mode with the correct four-dimensional gravitational interactions. However, since the extra dimension is not compact, the spectrum of Kaluza-Klein (KK) modes is continuous down to zero energy. While for linear fluctuations of the flat brane and bulk, one may still construct a four-dimensional effective theory, the validity of this theory beyond the linear level is unclear. In particular we will see that for a non-linear background, such as a black hole, the mass spectrum of the KK modes becomes significantly altered for modes with mass around the black hole temperature, thereby distinguishing the effective theory from pure four-dimensional gravity. The nature of these strong effects will be critical to determining if a static stable solution exists.
Tanaka and Emparan et al. [1,2] provided evidence that classical bulk geometries correspond in the holographic dual to quantum-corrected black holes. This dual theory is 4-d gravity coupled to a gauge theory and matter fields with N colors, taken in the large N 't Hooft limit, with large 't Hooft coupling [6,7,8,9,10,11]. They noted that an interesting question arises in the 4-d holographic theory when one considers Hawking radiation from the black hole into gauge theory fields. If radiation from a free field leads to an evaporation rate ∼ ℏ, then radiation from the O(N^2) fields of the holographic dual theory may yield an effect going as ∼ ℏN^2, which would then persist in the large N limit even as ℏ → 0. Tanaka and Emparan et al. pointed out that such spontaneous quantum radiation in the 4-d holographic theory would correspond to a classical 5-d process in the bulk, which would imply that a 5-d black hole localized to the brane should always classically radiate, and hence cannot be static. The rate of classical "evaporation" or decay of the time-dependent brane black hole could then be used to place relatively strong bounds on the effective compactification scale, the AdS curvature length L [3]. Later work supporting these ideas appeared in [4,5].
The key subtlety in the argument of Tanaka and Emparan et al is that the gauge theory is strongly coupled, with large 't Hooft coupling, and hence it is unclear whether simply multiplying the free field result by N 2 is valid when considering Hawking radiation. The aim of this paper is to revisit this issue with this subtlety in mind.
We find two consistent options for large localized black holes. The first is that static, stable solutions exist. The second, which we believe is the less likely option, is that static but dynamically unstable solutions exist. The first is obviously in stark contrast to the arguments of Tanaka and Emparan et al. The second differs in detail, particularly in the existence of a static solution and in the interpretation of the instability as being unrelated to Hawking radiation, but the qualitative result is similar: from the classical 5-d point of view, black holes would shrink at a rate fast enough to be phenomenologically interesting.
We begin by discussing the existence of static black holes in the 5-d classical theory. We use a concrete counter-example to the claim of Tanaka and Emparan et al. to argue that in the 4-d holographic dual, Hawking evaporation is not enhanced by a factor of O(N²), and is therefore absent in the ℏ → 0 limit. Hence we expect static black holes localized to the brane should exist. We review the likely "pancake" geometry of these localized black holes and interpret it in the 4-d holographic dual as a black hole surrounded by a thermal halo of strongly coupled CFT matter.
We then discuss the issue of dynamical stability of such static solutions. We review using entropy arguments a possible instability suggested by Tanaka that the end of the black hole in the bulk may be unstable to breaking off. Since we expect the localized black holes to have a geometry near the brane which is similar to that of a warped uniform black string, this instability should be similar to the type found by Gregory and Laflamme for uniform strings [12]. We explore this mechanism using linear theory to model the possible localized black hole horizon, concluding that this instability is unlikely to occur.
We note that there should be no deep mystery regarding the existence of localized static black holes. In principle their existence is a problem in partial differential equations. Whereas analytically there has been little progress on this problem [14,15,16], numerical methods have been used to solve the full Einstein equations for localized static objects on the brane [17,18,19]. The most advanced work is that of Kudoh [21], who indeed finds static localized black holes, but with radii only up to a few AdS lengths. It remains unclear whether large localized black holes exist. We note, however, that the numerical methods used to find them are presumably difficult to implement for large black holes due to the scale separation between the radius of the horizon and the AdS scale, which must still be resolved. Hence it is unsurprising that very large black holes have not been constructed numerically, and this certainly cannot be taken as evidence against their existence. Which of our above options is realized (stable or unstable static black hole solutions) can only be decisively determined in the non-perturbative gravity theory. It remains interesting to check, and we expect to confirm, the stability of black hole solutions by dynamical simulation of the 5-d classical bulk, using methods similar to those of [22].
Existence of localized static black holes
In this section we discuss the validity of the argument of Tanaka and Emparan et al. that no static solution can exist. We use a concrete example of a static black hole in the five-dimensional theory, to which their arguments apply, as a counter-example to their claim. We then argue in the holographic dual theory that Hawking radiation is not enhanced by a factor of O(N²), since the number of asymptotic states that may be radiated is not enhanced by such a factor. Hence we expect a static solution should exist. However, whereas spontaneous Hawking emission to asymptotic states does not occur, there may still be interesting non-spontaneous processes that are seen at the planar level. These might lead to instabilities of this static solution, which we will consider in the section that follows.
A counter-example to the claim of no static black holes
We now present a simple static black hole that falls under the arguments of Tanaka and Emparan et al. Their arguments imply that in the holographic theory a black hole must be spontaneously emitting O(N 2 ) degrees of freedom, resulting both in the lack of a static solution in the bulk, as one cannot turn off a spontaneous process, and also a deformation of the brane geometry from the usual 4-d Schwarzschild due to the backreaction of this radiation. This example exhibits neither.
The example we consider is the well-known uniform black string. We note that without an IR brane, the bulk geometry is singular, and we therefore include one to avoid this subtlety [23]. Since the metric

ds² = (L²/z²) [ g_μν(x) dx^μ dx^ν + dz² ]   (2.1)

solves the Einstein equations provided g_μν(x) is Ricci flat, we can take g_μν(x) to be the Schwarzschild solution, with horizon radius R_S, to construct a five-dimensional solution. This solution is the uniformly warped black string. The UV vacuum brane resides at z = L, and we introduce a vacuum IR brane at z = z_IR. While we have included an IR brane, classically we may make z_IR as large as we wish. In particular we will take L ≪ R_S ≪ z_IR. In this limit the arguments of Tanaka and Emparan et al. would be expected to apply. However, clearly from the form of (2.1), the geometry comprises only a non-trivial zero mode, with no KK modes excited. In particular the UV brane geometry is exactly that of four-dimensional Schwarzschild. Hence from the 4-d dual perspective, the solution is a static Schwarzschild black hole with no backreaction from the CFT. However, although the static solution exists, this black hole-CFT state is in fact unstable due to the presence of the CFT. For now we continue to focus on the existence of the static black hole solution. We will return to the question of stability in the following section.
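For concreteness, a minimal worked form of this solution (our notation; we simply assume the standard 4-d Schwarzschild line element, which the text refers to only implicitly) is obtained by inserting Schwarzschild with horizon radius R_S into (2.1):

\[
ds^2 \;=\; \frac{L^2}{z^2}\left[-\left(1-\frac{R_S}{r}\right)dt^2
+\left(1-\frac{R_S}{r}\right)^{-1}dr^2 + r^2\,d\Omega_2^2 + dz^2\right],
\qquad L \le z \le z_{IR}.
\]

Every constant-z slice carries the same 4-d Schwarzschild geometry up to the overall warp factor, which is why the induced metric on the UV brane at z = L is exactly four-dimensional Schwarzschild.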
Reduction of low energy degrees of freedom from strong coupling
Let us now consider why Tanaka and Emparan et al.'s calculation of Hawking radiation fails. Tanaka and Emparan et al. [1,2] proposed a deviation from conventional four-dimensional behavior by considering the four-dimensional holographic interpretation of braneworld black holes. The theory dual to the classical 5-d bulk should be classical 4-d gravity coupled to a large N gauge theory with matter in the 't Hooft planar limit, with 't Hooft coupling λ = N g_YM² large, where the gauge theory is conformal in the IR, although the brane acts as a UV cut-off. We will simply refer to the dual theory as a CFT, with this UV cut-off implied. These authors provided evidence that classical bulk geometries correspond in the holographic dual to quantum-corrected black holes. Just as the observer on the brane in the bulk picture sees small deviations from 4-d behavior due to classical 5-d gravity, an observer in the 4-d holographic theory sees the same deviations from classical 4-d behavior, now due to the quantum corrections from planar contributions of the gauge theory.
The 4-d quantum corrections in the holographic dual could survive when we take ℏ → 0 (note that ℏ is the same for both theories) if quantum effects coherent amongst the O(N²) color degrees of freedom amount to a total effect going as ∼ ℏN². This should be generally true, and can be seen explicitly for the N = 4 case.
Recall that in the large N limit the 't Hooft coupling λ ∼ g_s N ∼ (L/l_s)⁴ remains fixed, where g_s and l_s are the bulk string coupling and string length. N is related to the 4-d Planck length by N = L/l_4, so that in this limit l_4 = g_s l_s⁴/L³ → 0. Since ℏ = l_4²/G_4, with G_4 the 4-d Newton constant, the combined quantity ℏN² remains fixed while ℏ → 0, as we keep L and G_4 fixed in RS2. Consider a 4-d black hole evaporating, with initial radius R_S. Assume for the moment that, as Tanaka and Emparan et al. claim, the power emitted in Hawking quanta is dM/dt ∼ ℏN²/R_S², i.e. N² times the free field result. Then the evaporation time T remains finite in this ℏ → 0 limit (a parametric estimate is sketched below). We emphasize that this argument holds provided the Hawking radiation rate does indeed go as N² times the usual free field result. We now argue that this is not the case.
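As a rough check of this claim (our own parametric estimate, dropping all numerical factors and using only the relations quoted above together with M ∼ R_S/G_4):

\[
\hbar N^2 \;=\; \frac{l_4^2}{G_4}\cdot\frac{L^2}{l_4^2} \;=\; \frac{L^2}{G_4},
\qquad
T \;\sim\; \frac{M}{dM/dt} \;\sim\; \frac{R_S/G_4}{\hbar N^2/R_S^2}
\;\sim\; \frac{R_S^3}{G_4\,\hbar N^2} \;\sim\; \frac{R_S^3}{L^2},
\]

which indeed remains finite, and short, as ℏ → 0 with L and G_4 held fixed.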
The problem with the argument is that in the holographic dual theory, there do not necessarily exist O(N²) dynamical asymptotic degrees of freedom accessible to radiation from a localized finite temperature object. This reduction in accessible degrees of freedom derives from the large 't Hooft coupling when the theory is dual to gravity. We now briefly review the dynamical degrees of freedom for the usual case of AdS-CFT, where the field theory is N = 4 super Yang-Mills, and in particular, how the number of colors N manifests itself in the closed string gravitational dual. Note that although we are using the results for this particular conformal theory, the results should be general whenever the closed string dual has a gravity limit. For details of the review below, the reader is referred to [25,26,27,28,29,30,31,32] and references therein.
The situation is best understood for the N = 4 super Yang-Mills theory on a sphere. Let us take the sphere radius to be R_sph. The states in the field theory fall into weakly interacting states with energies E R_sph ≪ N², and strongly interacting states with energies E R_sph ≫ N².
The weakly interacting states are built from a free particle Fock basis of traces of products of local adjoint fields and their derivatives. These behave as free particles since, in the large N limit, such trace operators commute provided the total number of local fields in the product is ≪ O(N²), implying an energy E R_sph ≪ N². These states are best thought of as glueballs, since they represent excitations of the low temperature confining vacuum for the theory on a sphere.
The strongly interacting states arise when the number of local fields in an operator becomes O(N²), for example for products of long trace operators or determinants, and hence its energy becomes E R_sph ∼ O(N²). They can no longer be thought of as a set of weakly interacting particles since the commutation with other operators is lost. These states have large energies, but due to the length of the operators, their density of states is very high, going as e^{O(N²)}. We term these plasma states, since they describe the plasma of the high temperature deconfined phases of the theory on the sphere.
At large 't Hooft coupling the theory is dual to closed string theory in asymptotic AdS in the gravity limit, λ^{1/4} = L/l_s → ∞. The glueball states should then correspond to perturbative string excitations about the vacuum target spacetime. While in the free theory the separation of energies of the glueball states is ∼ 1/R_sph, at large 't Hooft coupling the energy separation scales as ∼ λ^{1/4}/R_sph, since the spacing of the closed string dual spectrum is ∼ 1/l_s rather than ∼ 1/L. Hence as we approach the gravity limit the entire glueball spectrum is lifted to infinite energy, apart from the glueballs dual to the supergravity modes of the string. Then the gravitational perturbations are dual to only O(1) of the O(N²) free particle basis states. In particular the graviton is dual to the field theory stress tensor. We cannot see 'N' for perturbative fluctuations in the gravity limit, the other O(N²) perturbative states only becoming visible when we look at string scale physics.
In our situation we do not restrict the field theory to be N = 4 super Yang-Mills, the case discussed above. However, in other field theories known to be dual to string theories the case is qualitatively similar [30]. Since in these cases the reduction in light glueball degrees of freedom is exactly dual to the decoupling of string oscillator modes in the string dual when truncating to the gravity limit, it seems reasonable that this will occur whenever the string dual to the field theory has such a gravity truncation.
At large 't Hooft coupling the plasma states correspond to the non-perturbative black hole excitations in the closed string dual, the small and large AdS Schwarzschild black holes with radii small or large compared to the AdS scale L. The energies of these black holes are E R_sph ∼ O(N²) translated to the field theory. The large number density of the plasma states allows them to account for the O(N²) entropy of these black holes. At finite temperature, whilst at large N these states are very massive, their enormous number density, e^{O(N²)}, may allow them to overcome their exponential Boltzmann suppression. This occurs for the large black holes, which are rapidly and semiclassically spontaneously nucleated in hot AdS above a critical temperature in the field theory going as ∼ 1/R_sph, and this is reflected in the gravity dual by the negative free energy of the large black holes. The small black holes, similar to asymptotically flat space black holes, have positive free energy, and hence are composed of states that are not numerous enough to spontaneously overcome their Boltzmann suppression.
Having briefly reviewed how field theory states correspond to the dual gravity physics, and in particular the role that the number of colors N plays, we now consider radiation from a localized finite temperature object coupled to the CFT at strong coupling in asymptotic flat space, such as for our 4-d black hole localized to the brane. We again are interested in the λ → ∞ limit to ensure the gravity dual description, and may still consider the theory on the sphere although we must take the sphere radius R sph to be much larger than the size of our localized thermal source, the black hole.
Firstly consider the glueball excitations. The O(1) free particle states dual to the graviton perturbation modes, which have energies ∼ 1/R sph , can certainly be radiated. However as described above the remainder of the O(N 2 ) glueball states cannot, due to their enormous energy ∼ λ 1/4 /R sph , and hence enormous Boltzmann suppression. In the gravity dual, this corresponds to our localized black hole thermally radiating gravitons, but not effectively radiating string oscillator modes due to their enormous mass.
Secondly we must consider the plasma states. The only way these can be emitted spontaneously is if they have a sufficiently large degeneracy to overcome their large O(N 2 ) energy, and hence large Boltzmann suppression. In principle this might allow an emission rate that could account for a classical process in the dual gravity.
Whilst a thermal emission of such massive objects naively seems unlikely (see [33] for a discussion of Hawking radiation of 'macroscopic' objects), and our previous counter-example of the warped string demonstrates explicitly that this does not occur, we now attempt to give a rough argument why this is so.
We expect that the plasma states dual to small black holes cannot be spontaneously nucleated by our localized black hole, since they cannot even be nucleated when the entire theory is put at finite temperature. So we only consider the possible nucleation of plasma states dual to large black holes with radius of order L or greater.
Consider now in the bulk the large black holes that might be 'classically emitted' from the localized brane black hole or the black string of the counter-example. Since we are interested in the Poincaré slicing of AdS, there are no finite size static black holes (not attached to the brane). The only static black holes are infinite in extent in the brane directions, and hence have infinite energy, and correspond to a horizon at finite radial position in the bulk. Obviously such infinite energy objects could not be radiated by a finite size hot source such as our localized black hole or string.
However, we must also be concerned with black holes of large but finite radius that are classically emitted near the brane black hole horizon, and then subsequently fall away from the brane. Such black holes have a temperature measured on the brane given by their local horizon temperature, redshifted by the warp factor due to their distance from the brane. Hence the temperature of a black hole decreases as it falls away from the brane, which is dual to the temperature of the thermal plasma decreasing as it expands under its internal pressure.
Static large black holes in global AdS have a temperature that increases with their energy. There is a minimum black hole temperature, attained by a black hole of approximately L in radius. Such a black hole of radius L, within a few AdS lengths of the brane will then have a temperature measured on the brane given as ∼ 1/L. Taking this minimum temperature large black hole and treating it as a probe in the AdS metric written as (2.1), we can estimate its temperature reduced by the redshifting at its coordinate distance z from the brane as T ∼ 1/z.
The important point is that the brane black holes (localized or string) have lower brane temperature the larger their radius R S , going as T ∼ 1/R S for R S >> L. Since a thermal object cannot emit objects hotter than itself, a large 4-d black hole evidently cannot emit plasma states near its horizon dual to this minimum temperature black hole within the region z < R S near the brane. This remains true for plasma states dual to even larger black holes, which have even larger temperatures. Now we should consider the plasma states dual to the minimum temperature large black hole far from the brane. For z > R S the brane temperature of these black holes becomes small enough that the dual plasma states might be emitted. However, for the string the warp factor means the local horizon radius of the string far from the brane will be much smaller than L, and hence there is little overlap of the string and the large black hole of radius ∼ L to be emitted, and hence such a process would be suppressed. For the localized brane black hole, as we shall see later the horizon doesn't extend further than z ∼ R S into the bulk, and hence again there is a lack of overlap which would disallow the process. We therefore conclude that the only plasma states that have any chance to be emitted are those that are dual to black holes with radius ∼ L, at a position z ∼ R S in the bulk. We have no argument to rule these out, but our counter-example implies that the effects ruling out the emission of both the z < R S and z > R S black holes are still sufficiently effective at z ∼ R S to stop emission there too.
Since we cannot solve the CFT at large 't Hooft coupling, the above arguments are necessarily heuristic. However, they do show how the simple thinking that one computes the Hawking radiation rate by multiplying the free field result by O(N²) breaks down in the strongly coupled field theory dual to gravity. We therefore conclude there is no convincing holographic argument obstructing the existence of 5-d static black holes localized on the brane due to spontaneous radiation in the field theory. Furthermore we do not expect any backreaction from this field theory Hawking radiation to be seen in the classical gravity dual either. This is in perfect accord with the static warped uniform string example given above, where neither classical radiation nor its backreaction is seen. However the situation is interesting when one considers dynamical stability of static solutions. In the following section we will consider possible classical instabilities and why instabilities, but not spontaneous Hawking radiation, might survive in the large N planar limit.
Notice that we have done the analysis in the AdS background that is relevant to phenomenological applications. It is interesting to note that whereas the localized black hole appears to exist without spontaneously radiating in a cold vacuum, if we heat the theory up to temperatures of order the localized black hole temperature or higher, this metastability is likely to be affected. In this case, the holographic picture indicates that the localized black hole should radiate strongly, at a rate O(N²), since the relevant degrees of freedom at this high temperature are deconfined with all O(N²) gluons contributing [28], so it may evaporate in line with the original arguments of Tanaka and Emparan et al. This radiation would be due to the emission of the non-perturbative plasma states of the thermal bath, whose high temperature 'evaporates' the localized hot object. From the 5-d perspective, we note that the gravity dual description now includes an IR horizon in the vicinity of the brane that likely disallows the static localized 5-d horizon on the brane. In this case, the brane black hole horizon ends on the bulk black hole and thus, instead of rounding off at the tip, looks stringlike everywhere. This eliminates the need for the CFT modes dual to the "rounding-off" behavior, since the black string is described purely by gravity in the CFT. Of course, putting the 4-d theory in such a high temperature thermal plasma bath is interesting, but it is not a case relevant for phenomenology and we will not discuss it further here.
Geometry of localized black holes
As we now know of no argument, holographic or otherwise, that a static localized black hole should not exist, we review what form they are expected to take. Following Giddings et al [8], we may estimate the shape of a black hole from the linear equation governing the field φ = 1 + (z²/L²) g_00, i.e. the AdS scalar Laplacian. We estimate the intrinsic horizon spatial geometry as that induced on the isosurface where φ = 1. Of course it is unclear how accurate the linear approximation will be to the full non-linear solutions, but it is reasonable to expect qualitative agreement.
Taking the AdS coordinates, the AdS Laplacian is homogeneous in r and z,

∂²φ/∂r² + (2/r) ∂φ/∂r + ∂²φ/∂z² − (3/z) ∂φ/∂z = 0.

Taking the brane to be at z = L, we must solve the Laplace equation with Neumann boundary conditions at the brane, but with a static delta function source at r = 0. Following Giddings et al, one then constructs the metric perturbation from φ. The strength of the source determines the size of the black holes, and hence the position of the locus φ = 1. The brane and delta function position being at z = L breaks the homogeneous scaling symmetry of the equation and other boundary conditions, r, z → λr, λz, under a change in strength λ² of the delta function source. However, far from the brane compared to the AdS scale, the solution does regain this scaling symmetry. Hence for large black holes, so that the majority of the isosurface φ = 1 is many AdS lengths from the brane, the horizon isosurfaces have the same shape in the r, z plane, up to the global scaling of r and z. This is illustrated in figure 1, where we plot isosurfaces for a variety of black hole sizes and see that the larger black holes all have the same shape.

Figure 1: Surfaces in AdS in r, z coordinates, computed from linear theory, whose intrinsic geometry approximates the horizon geometry of black holes of 3 sizes: 4L, 8L, and 12L. We see these large localized black holes have horizon geometries simply related by a global scaling, and extend a coordinate distance ∆z ∼ 2R_S into the bulk.

The horizon isosurface extends a coordinate distance approximately ∆z ∼ 2R_S into the bulk. The shape of the isosurface in the r, z plane implies that for large black holes with horizon radius R_S ≫ L on the brane, the horizon geometry near the brane will be approximately that of a warped uniform string extending into the bulk. Around a proper distance ≃ L log(R_S/L) this warped uniform string ends, being capped off in an additional proper distance ∼ L by a horizon with characteristic curvature radius ∼ L. Since this capping off necessarily involves nonzero mass Kaluza-Klein modes, in the 4-d holographic theory the black hole horizon will be surrounded by a strongly coupled halo of gauge theory matter bound to it. We expect that only glueballs dual to gravitons may be spontaneously radiated from its surface in the planar limit. The interesting question is then whether this black hole and halo state is dynamically stable.
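The following is a minimal numerical sketch of the linear-theory estimate just described (our own illustration, not the authors' code): it relaxes the static, spherically symmetric AdS Laplace equation above with a Neumann (zero-flux) condition at the brane and a point-like source on the brane, then reads off the extent of the φ = 1 surface. The grid size, source amplitude and iteration count are arbitrary illustrative choices and should be tuned.

```python
import numpy as np

# Relax (1/r^2) d/dr( r^2 dphi/dr ) + z^3 d/dz( z^-3 dphi/dz ) = -source
# on z >= L, Neumann at the brane z = L, point-like source at r ~ 0 on the brane.
# The phi = 1 surface is taken (as in the text) to approximate the horizon shape.

L = 1.0
h = 0.5 * L
nr, nz = 160, 160
r = (np.arange(nr) + 0.5) * h          # cell centres, symmetry axis at r = 0
z = L + (np.arange(nz) + 0.5) * h      # cell centres, brane face at z = L

src = np.zeros((nr, nz))
src[0, 0] = 2.0e3 / h**2               # crude delta-function source; tune to change the hole size

# flux-form stencil weights (all non-negative)
rp, rm = r + 0.5 * h, r - 0.5 * h
zp, zm = z + 0.5 * h, z - 0.5 * h
ar_p = (rp**2 / r**2)[:, None] * np.ones(nz)
ar_m = (rm**2 / r**2)[:, None] * np.ones(nz)      # vanishes at the axis cell
az_p = (z**3 / zp**3)[None, :] * np.ones((nr, 1))
az_m = (z**3 / zm**3)[None, :] * np.ones((nr, 1))
az_m[:, 0] = 0.0                                   # Neumann: no flux through the brane
denom = ar_p + ar_m + az_p + az_m

phi = np.zeros((nr, nz))
for _ in range(20000):                 # plain Jacobi; increase for better convergence
    q = np.pad(phi, 1)                 # zero (decaying) outer boundaries
    phi = (ar_p * q[2:, 1:-1] + ar_m * q[:-2, 1:-1]
           + az_p * q[1:-1, 2:] + az_m * q[1:-1, :-2] + h**2 * src) / denom

inside = phi >= 1.0
R_brane = r[inside[:, 0]].max() if inside[:, 0].any() else 0.0
dz_bulk = (z[inside.any(axis=0)].max() - L) if inside.any() else 0.0
print(f"brane radius of the phi = 1 surface : {R_brane:.1f} L")
print(f"extent into the bulk                : {dz_bulk:.1f} L "
      "(the text argues dz ~ 2 R_S for large holes)")
```

Because the equation is linear, changing the source amplitude simply moves the φ = 1 locus outward or inward, mimicking black holes of different sizes as in figure 1.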
The possible existence of black hole solutions with bound CFT matter might initially seem surprising. Nonetheless, we know that such bound states exist in other situations, the simplest example being localized matter on the brane, such as stars and planets. Similarly, extremal black holes should take this form. Such solutions perturb the bulk geometry by sourcing Kaluza-Klein modes, and therefore have a bound state CFT component from a holographic perspective.
Stability of localized static black holes
We have shown a concrete example of a static black hole in a 4-d theory of gravity plus CFT where the Tanaka and Emparan et al arguments would predict none could exist. However in this theory, as we discuss below, this exactly Schwarzschild black hole plus trivial CFT state is actually unstable, and thus may have interesting dynamics in the spirit of their arguments. Having concluded that static black holes exist, we now discuss their dynamical stability.
There are two possibilities. The first is that there is a consistent stable black hole bound state with the CFT. The second is that a solution exists but is unstable. A third possibility, that no static localized black hole solution exists, is not ruled out by our counter-example to the arguments of Emparan et al., though it does remove the argument against their existence. However, we re-emphasize that static black hole solutions up to a few AdS lengths have been numerically constructed, so that static (though perhaps unstable) solutions should exist. Interestingly, if an instability were present for the localized black hole, the end result of such a process may be rather similar to Tanaka and Emparan et al's picture, namely a rapid loss of energy to infinity, analogous to the result of the Gregory-Laflamme instability. We discuss the form such an instability might take for the brane black hole shortly. We now briefly discuss the time-dependent evolution that would occur in the presence of such an instability.
Emparan et al. suggested that the bulk interpretation of black hole decay would be classical gravitational radiation near the brane, through which the black hole slides off the brane into the bulk. However, this interpretation is clearly problematic in that the only light mode localized near the brane is the zero mode, and that mode (in RS2) is not a CFT bound state, but a fundamental normalizable mode that exists in the presence of the brane. Since any potential instability should be a consequence of the CFT dynamics, the bulk holographic interpretation must lie elsewhere.
Tanaka [1] made a different suggestion, which is more likely. The black hole could decay classically through emission of higher-dimensional black holes at the tip, where the curvature is large - of size set by the AdS scale, the size of the tip region. Specifically, the tip of the black hole, where it extends farthest away from the brane, is unstable to breaking off. The remaining brane black hole would become slightly smaller, and the tip would become a small higher-dimensional black hole which would then decay or fall through the horizon of the Poincaré patch. From the CFT point of view, this would correspond to an instability through which the black hole could decay much more rapidly than implied by Hawking radiation.
Tanaka estimated the rate at which the brane black hole would lose energy to the small black droplets. The total rate of energy released is the mass of the black droplets multiplied by the rate of droplet production. Although we cannot give a precise rate, from the perspective of the local 5-d geometry there is only one scale in the problem, L. The only other scale, the horizon radius R_S, does not appear locally. It seems reasonable to assume that the rate of black droplet production is ∼ L⁻¹. The total rate of energy production is then dM/dt ∼ M_5³L² × L⁻¹ ∼ M_5³L, with M_5 the 5-d Planck mass and M_5³L² the mass of a droplet of radius ∼ L. An observer on the brane will see this value redshifted by the factor (L/R_S)², so the observed evaporation rate will be dM/dt ∼ (L/R_S)² M_5³L ∼ N²/R_S² ∼ N² A T⁴, where we have used the holographic relation N² = M_5³L³. We have written the last expression in terms of the 4-d parameters, the area A of the 4-d black hole and its temperature T = 1/R_S. This gives parametrically the same rate of energy loss in the CFT as Tanaka and Emparan et al.'s proposal of spontaneous thermal emission of O(N²) degrees of freedom. However we note that, in light of the arguments in the previous section, this is not a spontaneous process, but rather an instability, and hence can be turned off by fine tuning.
Gregory-Laflamme instability
We now consider the warped black string instability. Afterward we will consider a possible analogous instability for the black hole. The known instability (for the black string) is the Gregory-Laflamme (GL) instability [12,35,13], which we now review. Consider disturbing the warped uniform string metric (2.1) by a 4-d tensor perturbation. Gregory showed [35] that the usual vacuum GL instability of uniform strings generalizes simply to the warped case. This was not at all obvious, as the warped string background is not translationally invariant. Writing the perturbation as a product χ_μν(x) f(z), we take the 4-d tensor χ_μν to be transverse and traceless with respect to the 4-d metric g_μν(x). Taking f(z) to be an eigenmode of the operator ∂_z² − (3/z)∂_z, with eigenvalue k², we have

f(z) = A J_2(kz) + B N_2(kz),   (3.5)

and we must choose the coefficients A, B to satisfy the appropriate boundary conditions. The 4-d tensor perturbation χ_μν then satisfies an equation involving the Lichnerowicz operator △_L of the 4-d metric g_μν, built from the 4-d scalar Laplacian ∇²_(4,S) and the 4-d curvature R_αβμν, with indices raised and lowered with respect to the 4-d metric g_μν. This is exactly the equation one obtains for a non-warped string. Hence, as in that case, for k < k_c = 0.45/R_S one finds modes with an exponentially growing t dependence, leading to the familiar horizon instability. If we consider an IR brane, then the spectrum of allowed eigenvalues becomes quantized, although provided that z_IR ≫ R_S these include unstable modes.
Holographic interpretation of instabilities
We see from the warped uniform string example that the CFT state may be dynamically unstable. There is no contradiction with our statements about Hawking radiation, since this instability is not a spontaneous radiative process, but rather is simply a result of having an energetic instability. The GL instability of gravity implies the existence of certain tachyonic modes in the gauge theory description. In the gravity these modes are perturbative graviton states, and hence in the gauge theory correspond to tachyonic glueballs. Note that these tachyonic glueball states are quite specific in form, being spherically symmetric, and exponentially localized near the horizon. This instability is not spontaneous, as by suitable fine tuning the system can be prepared to stay static for as long as we wish. However, when perturbed the occupation number in these tachyonic glueball modes will grow simply because it is energetically favorable for this to happen. This will continue until a large collective behavior is produced. Thus we conclude that dynamical instabilities of the CFT vacuum may lead to interesting dynamics in the planar limit, but spontaneous radiation is not seen in this limit. The Emparan et al argument fails for spontaneous emission of radiation, but interesting effects analogous to their original claims might be possible for non-spontaneous energetic reasons.
A possible instability
So let us now consider the possibility of a GL-like instability for the localized brane black hole in the bulk. The majority of the proper distance of a large localized black hole with brane radius R S >> L appears like the warped uniform string, extending roughly L log R S /L into the bulk, and it is only in the last L proper distance that the geometry deviates from the uniform string and caps off.
The GL instability is the only dynamical instability known for static black holes, and certainly would account for the decay of the black hole that was considered above. Gregory and Laflamme argued that for a uniform string the natural end state of the instability in vacuum is an array of black holes. This remains an issue of controversy due to the result of Horowitz and Maeda [36]. However, following numerical simulation of the instability by Choptuik et al [22], it has recently been argued that Gregory and Laflamme's original picture is likely to be correct [37,38,39]. If the uniform string region of a localized black hole were long enough, one would expect a similar instability to exist on it, whose dynamics would result in the uniform neck breaking up, and the segments not connected to the brane falling into the bulk.
In the CFT such an instability would manifest itself through the existence of tachyonic glueball states. Perturbing the static black hole and its halo would result in a condensation into these tachyonic states. The result would be violent emission of glueballs (corresponding to gravity waves in the 5-d gravity), and expanding cooling shells of thermal gluon plasma (corresponding to the small black holes falling away from the brane).
For a large black hole, the profile of the GL instability, given by f(z) in equation (3.4), goes as

f(z) ∼ J_2(kz),   (3.8)

where we recall that the marginally unstable mode has eigenvalue k_c = 0.45/R_S. We plot this profile in figure 2. Earlier we estimated the shape of a large black hole using the linear theory, finding that the coordinate extent, ∆z, of the potential isosurface we take to approximate the horizon into the bulk is ∆z = 2R_S. We also plot this in figure 2. We might take the region of the isosurface extending a half or quarter of this distance to approximate a uniform warped string. However, from figure 2 it is then clear that only a tiny fraction of a wavelength of the marginal mode could fit into this region. Hence we find it very unlikely that any potential Gregory-Laflamme instability could be localized in the uniform string region of the localized black hole.
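A quick numerical illustration of this point (ours, not the paper's; the profile J_2(k_c z) and the threshold k_c = 0.45/R_S are simply taken from the text above) compares the argument k_c z reached over the horizon's bulk extent with the first zero of J_2:

```python
import numpy as np
from scipy.special import jv, jn_zeros

# How much of the marginal GL mode J_2(k_c z) fits inside the bulk extent
# Delta_z ~ 2 R_S of a large localized black hole?  Note that the product
# k_c * Delta_z = 0.9 is independent of R_S.

R_S = 20.0                              # horizon radius in units of L (illustrative)
k_c = 0.45 / R_S
z = np.linspace(1.0, 2.0 * R_S, 400)    # from the brane (z = L = 1) out to ~2 R_S

profile = jv(2, k_c * z)
first_zero = jn_zeros(2, 1)[0]          # first zero of J_2, ~5.14

print(f"k_c * Delta_z             = {k_c * 2 * R_S:.2f}")
print(f"first zero of J_2         = {first_zero:.2f}")
print(f"max |J_2| over the region = {abs(profile).max():.3f} "
      "(J_2 only reaches ~0.49 at its first maximum, near k_c z ~ 3.1)")
```

With k_c ∆z ≈ 0.9 against a first zero of J_2 at ≈ 5.14, the profile barely begins its first rise over the string-like region, which is the sense in which only a small fraction of a wavelength fits.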
Entropy Balance
Even without finding the explicit black string instability, one could have argued via entropy considerations that the black string is expected to be unstable. We now apply this reasoning to the brane black hole, comparing the entropy gained when the localized black hole "drips" off a small black hole at the tip with the entropy of the static solution. We find that from the perspective of entropy considerations the decay is parametrically marginal, so it cannot be decided by entropy considerations alone. Consider a brane black hole that extends into the bulk out to some value z in z-coordinates which is close to R_S. If the tip were to drip off, it would form a small black droplet of radius ∼ L, and the brane black hole would shrink. The black droplet will have mass and entropy given approximately by M_drop ∼ M_5³L² and δS_drop ∼ M_5³L³ ∼ N². The brane black hole will shrink by an amount given by energy conservation. The effective mass lost to the black droplet is redshifted by the warp factor: δM_bbh ∼ −(L/z) M_drop ∼ −M_5³L³/z ∼ −N²/z. The radius of the brane black hole then shifts by an amount δR_S = δM_bbh/M_4², and the entropy shifts by δS_bbh ∼ M_4² R_S δR_S ∼ R_S δM_bbh ∼ −N² R_S/z. For z ≈ R_S, this is parametrically the same as the entropy of the black droplet. However, if the brane black hole sticks farther out into the bulk, then the total entropy change δS_drop + δS_bbh grows and becomes positive, and formation of a black droplet is favored. Notice that for z < R_S this analysis wouldn't apply, since the black holes that would be spit off in that case would be larger than the AdS length scale, so the simple interpretation as pure five-dimensional flat space black holes would certainly not apply. The analysis is only valid up to the point where the entropy argument is indeterminate.
The formula for the entropy S_bbh of the brane black hole deserves a quick comment. The black hole extends a proper distance ∼ L log(R_S/L) into the bulk, so a naive estimate of the area would give S_bbh ∼ R_S² L log(R_S/L). However, as the authors of [40] discuss, the contribution to the area is suppressed by the warp factor away from the brane, and almost all of the area is near the brane itself. A better estimate for the area is

dA = 4πR_S² (L/z)³ dz   (3.13)

on each spatial slice of constant z. Integrating dA from z = L to z = R_S accounts for the formula for S_bbh. As there is no parametric argument for, or against, this instability we regard it as marginal. We expect that any modification of the bulk physics might therefore affect this stability. For example, adding charges to a uniform black string may render the GL instability absent [41]. Certainly extremally charged localized black holes will be stable.
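Carrying out the integral quoted above (our own evaluation of (3.13), keeping only parametric factors and assuming the standard relations M_4² = M_5³L and N² = M_5³L³):

\[
A \;\approx\; \int_L^{R_S} 4\pi R_S^2\left(\frac{L}{z}\right)^3 dz
\;=\; 2\pi R_S^2 L\left(1-\frac{L^2}{R_S^2}\right)\;\approx\; 2\pi R_S^2 L ,
\qquad
S_{bbh}\;\sim\; M_5^3 A \;\sim\; M_4^2 R_S^2 \;\sim\; N^2\,\frac{R_S^2}{L^2},
\]

so the logarithm of the naive estimate drops out, and the result matches the 4-d Bekenstein-Hawking entropy.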
Another simple modification is to add extra compact dimensions. In this case, the area of the small black droplet can start to probe the extra dimensions, whereas the large brane black hole will fill them up. Consider for example a brane black hole in 5 + D dimensions, where D of the dimensions are compact and of size L. The surface area of a unit (3 + D)-sphere is smaller than that of a 3-sphere by the factor given in (3.14). Since the black droplet in this scenario is roughly the same size as the compact dimensions, it will interpolate between a 4 + 1-dimensional black hole and a 4 + D + 1-dimensional black hole, and its entropy will pick up some fraction of the above ratio. By making D large, it becomes more likely that the brane black hole is stable.
Effective Theory Interpretation
We now return to the issue mentioned in the introduction of why the effective theory for linear fluctuations about a flat brane can be given by pure 4-d gravity, while even for large radius black holes this 4-d gravity description can break down. We note that this is true whether or not the black hole is unstable. In either case, the effective theory at the nonlinear level in the presence of a black hole breaks down. This might seem surprising, since for the linear theory about the flat brane we recover simple 4-d gravity for perturbations with wavelength larger than L, the AdS length. Thus one might naively assume that 4-d gravity is the correct effective theory with a cut-off requiring all curvature radii to be larger than L. However, this is not the case. Since the Kaluza-Klein spectrum is gapless, when we consider a background containing a black hole with radius R_S we should worry about integrating out the modes with masses m less than ∼ 1/R_S. Usually theories that admit 4-d effective descriptions have a mass gap in their spectrum, and hence for large enough black holes there will be no modes with such masses. Here however there is a continuum of such light modes, down to zero mass. While the spectrum of large mass modes, with masses m ≫ 1/R_S, will be unchanged by the presence of the black hole in the background, the spectrum of the modes with masses approximately m ∼ 1/R_S will generically be strongly affected.
This modification of the effective theory is important and perhaps more obvious when considering the quantum radiation rate of the 5-d black hole. From a purely 5-d perspective, there is only a single graviton with O(1) degrees of freedom leading to a relatively small radiation rate [42]. However, if one were to calculate the decay rate using the 4d theory with the original modes, one would find that the answer would depend on an IR and a UV cut-off. The resolution of this apparent discrepancy is that the black hole has drastically changed the spectrum of KK modes. The picture is qualitatively the same as the usual calculation for Schwarzschild black holes. The phase space of graviton excitations in flat space gets replaced in the presence of the black hole by spherical harmonics, which are effectively quantized at the temperature of the black hole, and oscillations in the radial direction. In the case of the localized black hole, the KK modes no longer have a continuous spectrum, but effectively are quantized also at the temperature of the black hole, leaving only O(1) accessible modes. So in general, an effective theory with a KK spectrum continuous down to zero can change in the presence of large geometric perturbations such as a black hole. Note, however, that in the case of the brane black hole, modes with mass lighter than the temperature have small overlap with the higher-curvature region of the brane black hole. Therefore only those modes with mass of order the black hole temperature would be strongly coupled.
Consider again our simple example of the warped uniform string from section 2.1. In the dual 4-d theory this does appear to be exactly a 4-d Schwarzschild solution. However, as we have discussed above, the Kaluza-Klein spectrum about this solution has tachyonic modes, due to the GL instability. These are not present in the spectrum of fluctuations in the absence of the black hole, and arise only through the non-linear interaction of the Kaluza-Klein modes with the 4-d graviton. We see this non-linear interaction explicitly in equation (3.3), where the perturbation governed by χ_μν f(z) is a Kaluza-Klein mode, and the Lichnerowicz operator includes the non-linear coupling to the 4-d zero mode, g_μν(x), through R^{(4)}_{αμβν} χ^{αβ} acting as a potential. Since the mass squared, −k², may become arbitrarily small, when it becomes of order the potential ∼ 1/R_S² the 4-d mode behavior is strongly modified from that of a usual 4-d field with the same mass.
Note that in addition to the 4d coupling of the graviton to the tower of KK modes, higher dimensional operators become more important because of the large curvature components at the tip. For both these reasons, conventional no-hair theorems do not apply. Hence there can be two different types of black hole solutions - one involving KK modes (the brane black hole we have been discussing) and one without them (the black string).
Neutron-scattering studies of arsenic sulphide glasses
High-resolution neutron-scattering measurements have been performed on bulk glasses of As2S3 and As2S3I1.65 at a spallation neutron source. For the case of As2S3, an isotopic-substitution experiment involving the 33 S isotope has allowed some of the various pair correlations contributing to the second peak in the radial distribution function to be determined by the method of first differences. For the ternary glass, it has been confirmed that iodine bonds preferentially to As atoms, and that on average each As atom is coordinated to one iodine and two sulphur atoms in the first coordination shell.
Introduction
Chalcogenide glasses, compounds of Group VI elements (S, Se, Te) with elements such as As, Ge, B etc., have been of great interest for a considerable time, due to their unusual properties (e.g. photo-induced metastability 1 ) and many actual and potential technological applications, including low-loss optic fibres, non-linear optical elements etc. 2 . The canonical chalcogenide glass is perhaps the stoichiometric compound arsenic trisulphide, As 2 S 3 , whose crystalline counterpart is orpiment (or auripigment, so-called after its golden-yellow colour). There have been many structural studies of this important glassy material since, in order properly to understand its physical behaviour, a thorough knowledge of its atomic structure is a prerequisite.
A number of diffraction studies of glassy As 2 S 3 have been performed, using both neutrons 3 and X-rays 4,5 in conventional scattering experiments. Differential anomalous X-ray scattering measurements have also been performed 6 in the vicinity of the As K-edge in order to extract information about the local structure around As atoms. These measurements are closely related to another atom-specific technique that has also been used to probe the local structure in glassy As 2 S 3 , namely X-ray absorption spectroscopy, also performed at the As K-edge [7][8][9][10][11] . Other techniques, e.g. Raman scattering 3,12 , also shed some light on the local structure of this glass.
The general consensus is that the structure of glassy As 2 S 3 is for the most part chemically ordered, with each As atom bonded to three S atoms, and each S to two As atoms, in the first coordination shell, as in the structure of the layered crystal, orpiment 13 . The situation regarding the pair-correlation constituents of the second peak in the radial distribution function is less clear. Extended X-ray absorption fine structure (EXAFS) measurements [8][9][10][11] indicate that only As-As, and not As-S, correlations contribute (as well as S-S correlations, which cannot be detected by As K-edge EXAFS). However, the differential anomalous X-ray scattering data seem to imply that As-S correlations do contribute to the second coordination shell (as they do also in the orpiment structure 13 ).
The incorporation of halogens, e.g. iodine, into glassy arsenic sulphides, produces a dramatic decrease in the glass-transition temperature to below room temperature for the resulting ternary As x S y I 1-x-y glasses with iodine contents up to 30 at %. The structure of iodine-containing arsenic sulphide glasses, however, is unclear.
An early, low-resolution X-ray diffraction experiment 14 concluded that iodine acts as a chain terminator, bonding preferentially to arsenic atoms in place of sulphur, thereby producing a twisted-chain structure built from [-S-As-S-] chain units with pendant iodine atoms bonded to the arsenic. Raman scattering spectra, on the other hand, have been interpreted 3,12 as providing evidence for the existence of discrete AsI 3 molecular species dissolved in a glassy As-S matrix. Time-of-flight neutron diffraction data 3 on (As 2 S 3 ) 1-x (AsI 3 ) x ternary glasses have also been used to support this latter assertion.
In this paper, we report on high-resolution time-of-flight neutron-diffraction experiments carried out on bulk glassy As 2 S 3 employing, for the first time, isotopic substitution involving the 33 S isotope in order to obtain additional information on partial atom-atom correlations. In addition, we describe the results of conventional neutron-diffraction experiments on an iodine-containing arsenic sulphide glass, As 2 S 3 I 1.65 .
Experimental details and data analysis
The samples of glassy arsenic sulphide, and the iodine-containing ternary glass, were prepared from the elements, and the isotopically-enriched sample of As 2 S 3 was prepared from 99 % isotopically-pure 15 33 S. The constituents, in the form of metal pieces, flakes, powder and chips for As, nat S, 33 S and I, respectively, were weighed and placed in silica ampoules that were sealed under vacuum. The charges were then slowly raised in temperature to 800 °C in a rocking tube furnace, left at this temperature overnight, and quenched in air to form the glass samples. The mass and atomic densities of the three samples studied are given in Table 1, together with the bound coherent neutron-scattering lengths for the elements and isotopes involved, and the square of the compositionally-weighted scattering lengths for the sample compounds.
The composition of the iodine-containing ternary glass chosen for study was As 2 S 3 I 1.65 (or equivalently As 0.3 S 0.45 I 0.25 ), the same as one of the samples studied in an early X-ray diffraction experiment 14 . This particular composition is almost the same as, but slightly sulphur-rich compared with, the member of the family (As 2 S 3 ) 1-x (AsI 3 ) x having the same atomic fraction of iodine, viz. As 0.35 S 0.4 I 0.25 with x=0.38, and is deep within the glass-forming region of this ternary system 12 .
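As a check of the quoted equivalence (our own arithmetic, using only the stoichiometry and the atomic-fraction conversion quoted later in the text), the atomic fractions of (As 2 S 3 ) 1-x (AsI 3 ) x follow from counting atoms per formula unit:

\[
x_{\mathrm{As}}=\frac{2(1-x)+x}{5(1-x)+4x}=\frac{2-x}{5-x},\qquad
x_{\mathrm{S}}=\frac{3(1-x)}{5-x},\qquad
x_{\mathrm{I}}=\frac{3x}{5-x},
\]

so that x = 0.38 gives As 0.35 S 0.40 I 0.25, to be compared with the slightly sulphur-rich As 0.30 S 0.45 I 0.25 (i.e. As 2 S 3 I 1.65) actually studied here.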
Time-of-flight neutron-diffraction experiments were carried out on the three samples, held at room temperature, using the LAD diffractometer at the ISIS facility, Rutherford Appleton Laboratory. In order to achieve good counting statistics, particularly important for the data subtractions carried out with the isotopically-enriched and natural isotopic abundance As 2 S 3 glass samples, the measurement runs were very long (44 h each for the two As 2 S 3 samples, 28 h for the As 2 S 3 I 1.65 sample, together with 23 h for the empty vanadium can, 15 h for a vanadium rod for normalization purposes and 1 h for the empty spectrometer).
Data were corrected for background, absorption, multiple scattering and inelasticity using the ATLAS suite 16 . After the appropriate corrections for the differential cross section measured at each angle detector bank and subtractions of the self-scattering cross section, the distinct scattering cross sections from all detector banks were obtained and then combined, resulting in the final i(Q). When combining data from different detector banks, particular attention was made to select exactly the same Q-range from each angle for both natural and 33 S isotopically-enriched samples of As 2 S 3 .
The function that is measured in a neutron-scattering experiment is the differential cross-section, given by

dσ/dΩ = I_S(Q) + i(Q),   (1)

where i(Q) is the distinct scattering function of interest, resulting from interference of neutrons scattered from different pairs of atoms in the structure, I_S(Q) is the atomic self-scattering function, which provides a background signal to dσ/dΩ and which is subtracted to give i(Q), and Q is the momentum transfer (= (4πsinθ)/λ, where 2θ is the scattering angle and λ is the neutron wavelength).
The distinct scattering function is related to the real-space total correlation function T(r) by means of a Fourier transform:

T(r) = T⁰(r) + (2/π) ∫₀^Qmax Q i(Q) M(Q) sin(rQ) dQ,   (2)

where M(Q) is a modification function accounting for the fact that the scattering measurements cannot be obtained over an infinite range of momentum transfers, as required by the Fourier transform. In this experiment, the maximum value of momentum transfer used was Q max = 32.6 Å -1 for all three samples, and the modification function used was the Lorch function 17 .

In eqn. (2), the quantity T⁰(r) = 4πrρ₀⟨b⟩² is the average-density contribution, where ρ 0 is the average atomic density and ⟨b⟩ = Σ_i x_i b_i, with x i and b i the atomic fraction and coherent scattering length for element i, respectively. The total correlation function is a weighted sum of the partial pair correlation functions T ij (r) for pairs of atoms i and j:

T(r) = Σ_i Σ_j x_i b_i b_j T_ij(r).
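A minimal sketch of this transform follows (our own illustration, not the authors' analysis code; the Lorch function is assumed in its standard form M(Q) = sin(πQ/Q max)/(πQ/Q max), and the density and ⟨b⟩ values below are order-of-magnitude placeholders to be replaced by those in Table 1):

```python
import numpy as np

# Transform of eqn. (2):
#   T(r) = 4*pi*r*rho0*<b>^2 + (2/pi) * int_0^Qmax  Q*i(Q)*M(Q)*sin(Q*r) dQ

QMAX = 32.6  # A^-1, the value used for all three samples

def lorch(Q, qmax=QMAX):
    x = np.pi * Q / qmax
    return np.sin(x) / x

def total_correlation(Q, iQ, r, rho0, b_avg):
    """Fourier transform the distinct scattering i(Q) into T(r)."""
    dq = Q[1] - Q[0]                       # assumes a uniform Q grid
    kernel = Q * iQ * lorch(Q)             # Q * i(Q) * M(Q)
    dr = (2.0 / np.pi) * np.array([np.sum(kernel * np.sin(Q * rr)) * dq for rr in r])
    return 4.0 * np.pi * r * rho0 * b_avg**2 + dr

# Toy example: a single As-S-like shell at 2.27 A (synthetic i(Q), not real data).
Q = np.linspace(0.05, QMAX, 2000)
iQ = np.sin(Q * 2.27) / Q * np.exp(-0.01 * Q**2)
r = np.linspace(0.5, 5.0, 450)
rho0, b_avg = 0.039, 4.3e-5   # placeholder density (A^-3) and <b> (A); see Table 1
T = total_correlation(Q, iQ, r, rho0, b_avg)
print("toy T(r) peaks at r = %.2f A" % r[np.argmax(T)])
```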
Results
The distinct scattering functions for the natural isotopic abundance and the isotopically-enriched samples of glassy As 2 S 3 measured in this study are shown in fig. 1. It can be seen that meaningful oscillations persist up to Q~30 Å -1 , and a value for Q max =32.6Å -1 was chosen for all samples to facilitate comparison between the various data sets. There are large differences evident in i(Q) between natural and isotopically-enriched samples, particularly at smaller values of momentum transfer Q<15 Å -1 , as seen in the inset in fig. 1.
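The large differences between the two As 2 S 3 data sets are what make the first-difference method possible. The following sketch (ours, not the paper's; the Faber-Ziman form of i(Q) is assumed, and the scattering lengths are approximate literature values that should be replaced by those of Table 1) shows how the As-As contribution cancels in the difference while the S-S and As-S weights survive:

```python
# Assuming i(Q) = sum_ij x_i x_j b_i b_j [S_ij(Q) - 1], subtracting the
# natural-S and 33S-enriched As2S3 data sets cancels the As-As partial.

b_As, b_S_nat, b_S_33 = 6.58, 2.847, 4.74   # coherent scattering lengths, fm (approximate)
x_As, x_S = 0.4, 0.6                        # atomic fractions in As2S3

w_AsAs = x_As**2 * (b_As**2 - b_As**2)               # = 0: cancels exactly
w_AsS = 2.0 * x_As * x_S * b_As * (b_S_nat - b_S_33)
w_SS = x_S**2 * (b_S_nat**2 - b_S_33**2)

print("Delta i(Q) weights (fm^2):")
print(f"  As-As: {w_AsAs:+.2f}   As-S: {w_AsS:+.2f}   S-S: {w_SS:+.2f}")
```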
There are very pronounced differences in the peak positions and intensities, especially at low Q-values, between the iodine-containing and pure (natural abundance) arsenic sulphide glasses (see fig. 2). Note particularly the near-disappearance of the first sharp diffraction peak (FSDP) at Q ~ 1.2 Å -1 . These marked differences are a signature of the structural disruption caused by the incorporation of the chain-terminating iodine atoms into glassy As 2 S 3 .
The real-space correlation functions, T(r), obtained by Fourier transformation of the i(Q) data (eqn. (2)), are shown for the natural and 33 S-enriched samples of As 2 S 3 in figs. 3 (a,b), respectively, for the range of r encompassing the third peak in T(r). It can be seen that the first peak, corresponding to the first coordination shell, is completely separated from the other peaks in both cases. A measure of how large are the systematic errors in the isotopic-substitution experiments on glassy As 2 S 3 can be gleaned by comparing the calculated quantity T(r)/<b> 2 in each case. From eqn. (2), it can be seen that this quantity is simply related to the atomic density, which should be the same in both cases. The comparison is given in fig. 4(a), whence it can be seen that the two curves of T(r)/<b> 2 are practically indistinguishable.
The T(r) function obtained for the As 2 S 3 I 1.65 glass is shown in fig. 5. It can be seen that the first coordination shell now consists of two clearly resolved peaks. The second peak is also markedly more asymmetric than that characteristic of pure glassy As 2 S 3 .
The structural parameters relating to the first coordination shell can be obtained unambiguously by means of curve fitting, in this case using a peak-shape function involving the Lorch modification function and the Q max value of 32.6 Å -1 in the Fourier transformation of the i(Q) data. The results of this peak-fitting procedure are shown in figs. 3(a,b) for the case of the two As 2 S 3 glass samples, and in fig. 5 for the case of glassy As 2 S 3 I 1.65 .
Values of peak positions and coordination numbers resulting from such peak fits are given in Table 2.
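A sketch of how such a fit can be set up is given below (our own illustration, not the authors' code; the peak shape is generated numerically from a single sharp coordination shell using the same Q max and Lorch function quoted in the text, the smooth density term is ignored, and converting the fitted area into a coordination number requires the compositional weighting factors, which are not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

QMAX = 32.6
Q = np.linspace(1e-3, QMAX, 3000)
M = np.sin(np.pi * Q / QMAX) / (np.pi * Q / QMAX)   # Lorch modification function
DQ = Q[1] - Q[0]

def shell_peak(r, r0, area):
    """T(r) contribution of a shell of integrated weight `area` at distance r0,
    broadened by the finite Q_max and the Lorch window (density term ignored)."""
    qi = area * np.sin(Q * r0)                       # Q*i(Q) of a sharp shell
    s = np.sin(np.outer(np.atleast_1d(r), Q))
    return (2.0 / np.pi) * (s * (qi * M)).sum(axis=1) * DQ

# toy "data": an As-S-like first peak at 2.27 A with a little noise
r = np.linspace(1.6, 3.0, 280)
rng = np.random.default_rng(1)
data = shell_peak(r, 2.27, 1.0) + rng.normal(0.0, 0.02, r.size)

popt, pcov = curve_fit(shell_peak, r, data, p0=[2.2, 0.8])
print("fitted r0 = %.3f A, fitted area = %.2f" % tuple(popt))
```

Fitting the toy data recovers the input shell distance, which illustrates why the first-shell parameters can be extracted unambiguously even though the peak is termination-broadened.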
Discussion

a) First coordination shell
The results of fitting the first peak in T(r) for the two As 2 S 3 glasses, given in Table 2, are generally consistent with what is expected for the nature of the first coordination shell in this case, viz, in the case of chemical ordering, a first shell comprising As-S correlations, each As atom being surrounded by N AsS = 3 S nearest neighbours and each S atom surrounded by N SAs = 2 As nearest neighbours. The value of the As-S bond length found in this study (r AsS ~ 2.27 Å) is the same as that found in an earlier time-of-flight neutron diffraction study 3 and close to that (r AsS = 2.28 Å) obtained by X-ray diffraction 4,5 and EXAFS 10 experiments.
Although the peak fits to the first peak for the two As 2 S 3 glasses shown in figs 3(a,b) are generally excellent, there is a small but discernible discrepancy at the base on either side of the peak in both cases. These shoulders to the base of the first peak are revealed as small peaks, located at r ~ 2 Å and ~ 2.5 Å(see fig. 4(b)), when the T(r) curves, reduced by the factor of 1/<b> 2 , have subtracted from them the fitted first peak (also reduced by the same factor). The fact that these two subsidiary peaks are practically identical for both As 2 S 3 glasses when plotted in this way (a measure of the atomic-density fluctuation), whereas the spurious termination ripples evident at smaller values of r are not similarly coincident, lends credence to the supposition that these peaks are real and are not artefacts. A small shoulder on the low-r side of the base of the first peak in T(r) has also been seen earlier in a reasonably high-resolution (Q max = 21.3 Å -1 ) X-ray diffraction study 5 of glassy As 2 S 3 .
Similar small shoulders on either side of the base of the first peak in T(r) have also been observed 18 for the case of Ge 25 (As 1-x Ga x ) 10 S 65 glasses measured using high-resolution time-of-flight neutron diffraction.
We propose that the origin of the small subsidiary peaks on either side of the principal first peak in T(r) for glassy As 2 S 3 lies in chemical disorder, i.e. the presence of homopolar S-S and As-As 'wrong' bonds in addition to the heteropolar As-S bonds expected in the case of a perfectly chemically-ordered glass. The glasses examined in this study were quenched from the high temperature of T ~ 800 °C. It has been established from EXAFS studies 8 that, with increasing quench temperature in the range 300-800 °C, the nearest-neighbour coordination shell becomes increasingly disordered, as monitored by the static structural contribution to the Debye-Waller factor. The most ordered glass structure was found for samples quenched from T ~ 300 °C (near the crystal melting point). A Raman-scattering study 19 of glassy As 2 S 3 quenched from various temperatures below 1100 °C revealed evidence for a subsidiary peak at ~220 cm -1 , ascribed to vibrations of As-As bonds, whose intensity increased with quench temperature. (A complementary S-S vibrational band, expected to lie at 450-500 cm -1 , could not be observed.) However, another Raman-scattering study 12 did find evidence for small peaks at 230 and 490 cm -1 in some samples of glassy As 2 S 3 , consistent with the presence of As-As and S-S bonds, respectively.
In crystalline forms of elemental sulphur, the nearest-neighbour S-S bond length is r SS ~ 2.05 Å [20], and thus we identify the small peak in T(r) observed at r ~ 2 Å with the sulphur-sulphur distance in persulphide bridging units (As-S-S-As), in agreement with an earlier X-ray study 5 . Likewise, we associate the small peak at r ~ 2.5 Å with the presence of conjugate As-As bonds, since the As-As bond length in crystalline and amorphous forms of elemental arsenic is r As-As ~ 2.5 Å 21 . The fact that the two peaks in fig. 4(b) have comparable areas in atomic-number density terms is understandable from the chemical-disorder picture: every sulphur atom removed from an As-S-As unit and inserted into an As-S bond, thereby forming an S-S bond, leaves behind a (reconstructed) As-As bond. The subsidiary shoulders on either side of the base of the first peak of T(r) observed in the case of Ge-As-Ga-S glasses have also been interpreted in terms of homopolar-bond chemical disorder 18 .
We turn now to a discussion of the first coordination shell in the structure of glassy As 2 S 3 I 1.65 (As 0.3 S 0.45 I 0.25 ) as revealed by these high-resolution neutron-scattering measurements. As seen in fig. 5, the first coordination shell for this glass clearly consists of two components, and a fit using a peak-shape function as for As 2 S 3 reveals the existence of two peaks, one located at r ~2.26 Å and the other at r ~2.59 Å. A much earlier, very-low-resolution X-ray diffraction study 14 , using Cu K α X-radiation, could not resolve these two constituent peaks but instead found a single broad first peak in the radial distribution function at r ~2.45 Å, which is obviously an overlapped combination of the two peaks. A more recent high-resolution neutron-scattering study 3 of the glassy system (As 2 S 3 ) 1-x (AsI 3 ) x (or equivalently As (2-x)/(5-x) S (3-3x)/(5-x) I 3x/(5-x) ) also reveals a split first coordination shell, with peaks at r ~2.27 Å and r ~2.62 Å, the latter peak increasing in intensity at the expense of the other with increasing iodine content (x). The reduced radial distribution function for the glass in the (As 2 S 3 ) 1-x (AsI 3 ) x system with the composition closest to that studied here, viz. As 0.34 S 0.36 I 0.3 with x = 0.455, appears to be qualitatively the same as the total correlation function shown in fig. 5.
Note also that, as for the pure As 2 S 3 glass (figs. 3 and 4), there is a rather pronounced shoulder to the base of the first peak on the low-r side, at r~2 Å. As before, we ascribe this shoulder to the presence of S-S bonds, occurring in this case, not just because of chemical disorder, but also as a result of the stoichiometry.
We associate the peak in T(r) at r ~2.26 Å with nearest-neighbour As-S bonds, since the peak position is very close to that found for glassy As 2 S 3 (see Table 2). We ascribe the second peak in the first coordination shell, at r ~2.59 Å, to nearest-neighbour As-I bonds, since the As-I bond length in crystalline AsI 3 (consisting of a packing of AsI 3 molecules) is r As-I = 2.591 Å 22 .
The coordination numbers inferred from the areas of these two peaks are given in Table 2. It can be seen that the coordination number for As atoms bonded to I is N IAs = 1.1, consistent with the idea that indeed iodine acts as a chain terminator, bonding preferentially to arsenic. (Sulphur-iodine bonds have a very low degree of stability and are therefore not expected to be formed 14 .) The coordination numbers for the nearest-neighbour environment of As atoms are N AsS = 2.1 and N AsI = 0.9, consistent with the total average coordination number of arsenic being three, as usual. These results can be interpreted in two ways. Either the iodine substitutes randomly for sulphur atoms in bonding to arsenic, thereby forming a twisted chain structure consisting of fragments in which arsenic atoms, each carrying a terminal iodine atom, are linked by bridging S or S-S units, as proposed by Hopkins et al 14 , with arsenic being coordinated to two sulphur atoms and one iodine atom on average. Alternatively, as proposed by Koudelka and Pisarcik 12,13 from a Raman-scattering study, the structure of As-S-I glasses could be composed of discrete AsI 3 molecules dissolved in an As-S glass matrix. In this case, the composition of the glass examined in this study, viz. As 0.301 S 0.451 I 0.248 , can be rewritten as (As 0.218 S 0.451 )(AsI 3 ) 0.0827 .
Thus, the overall coordination number for S atoms bonded to As is expected to be three, scaled by the proportion of As atoms in the As-S matrix to the total, i.e. N AsS = 0.218 × 3/0.301 = 2.17, and that for I atoms is similarly N AsI = 0.0827 × 3/0.301 = 0.82. These are the same as the values found experimentally (N AsS = 2.1, N AsI = 0.9) to within the experimental error.
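The bookkeeping behind these numbers can be written out explicitly; the short calculation below is a minimal sketch, in Python, using only the composition quoted above and the assumption of three-fold coordinated arsenic, and is included purely as an illustration of the arithmetic.

# Minimal sketch of the AsI3-molecule bookkeeping for As0.301 S0.451 I0.248,
# assuming all iodine is locked up in AsI3 units and three-fold coordinated As.
c_As, c_S, c_I = 0.301, 0.451, 0.248

n_AsI3 = c_I / 3.0            # fraction of As atoms consumed as AsI3 molecules (~0.0827)
n_As_matrix = c_As - n_AsI3   # As atoms remaining in the As-S matrix (~0.218)

N_AsS = n_As_matrix * 3.0 / c_As   # S neighbours per As, averaged over all As atoms
N_AsI = n_AsI3 * 3.0 / c_As        # I neighbours per As, averaged over all As atoms

print(f"N_AsS = {N_AsS:.2f}")      # ~2.17-2.18 (experiment: 2.1)
print(f"N_AsI = {N_AsI:.2f}")      # ~0.82      (experiment: 0.9)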
Although analysis of the short-range order (first coordination shell) in As 2 S 3 I 1.65 apparently cannot decide between the twisted-chain and AsI 3 -molecule models for the structural incorporation of iodine, perhaps the i(Q) data at small Q, specifically the FSDP, may provide a clue. The near-disappearance of the FSDP for the iodine-containing glass means that the medium-range order that gives rise to this feature, namely quasi-periodic correlations between cation-centred coordination polyhedra (in this case, AsS 3 trigonal pyramidal units), must be very nearly completely destroyed in the iodine-containing glass. In the structural model involving discrete AsI 3 molecules, there still remains a very sizeable proportion of a highly cross-linked As-S glassy matrix that, presumably, would still give rise to an appreciable FSDP. In the other model, however, where iodine acts as a chain terminator, [-S-As(I)-S-] chains would be produced, the structural flexibility of which (associated with chain rotation) would ensure that quasi-periodic As-As correlations would be destroyed. There is thus some evidence that perhaps the random-chain model is the more appropriate.
b) Second coordination shell
The second peak in T(r) for glassy As 2 S 3 , appearing at r ~3.5 Å (figs. 3(a,b)), will be due to second-neighbour As-As and S-S correlations for the chemically-ordered majority part of the structure (together with second-neighbour As-S correlations associated with the chemically-disordered part). However, there will also be substantial contributions from non-directly-bonded, "interlayer" atomic correlations of all three types if the local structure of the glass is anything like that of the crystal orpiment 13 . In principle, an isotopic-substitution neutron-diffraction study, such as we have performed for this system, should be able to distinguish between the various contributions to the second peak.
The total distinct scattering function, i(Q) (or, equivalently, the total real-space correlation function T(r), related to it by a Fourier transform (eqn. (2))), can be written as a weighted sum of partial atom-atom functions, e.g.
i(Q) = α i AsAs (Q) + β i AsS (Q) + γ i SS (Q),   (4)
and for isotopic substitution of sulphur:
i*(Q) = α i AsAs (Q) + β* i AsS (Q) + γ* i SS (Q).   (5)
Thus, the simple first difference of these two quantities is
∆i(Q) = i(Q) − i*(Q) = (β − β*) i AsS (Q) + (γ − γ*) i SS (Q),   (6)
where the coefficients are given by
α = (c As b As )², β = 2 c As c S b As b S , γ = (c S b S )².   (7)
Thus, this first difference contains structural information only on the As-S and S-S pair correlation functions.
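A minimal numerical sketch of this weighting is given below; the coherent scattering lengths in it are approximate, illustrative values assumed only for the example (they are not the values used in the analysis), but the sketch makes explicit that the As-As coefficient cancels in the first difference while the As-S and S-S coefficients survive.

# Sketch of the first-difference weighting for As2S3 with 33S substitution,
# using the coefficient definitions of eqn. (7).  The scattering lengths are
# approximate, illustrative values (fm), assumed for this example only.
c_As, c_S = 0.4, 0.6      # atomic fractions in As2S3
b_As = 6.58               # As coherent scattering length (approx.)
b_S_nat = 2.85            # natural-abundance S (approx.)
b_S_33 = 4.7              # 33S-enriched sample (approx.)

def coeffs(b_S):
    alpha = (c_As * b_As) ** 2
    beta = 2.0 * c_As * c_S * b_As * b_S
    gamma = (c_S * b_S) ** 2
    return alpha, beta, gamma

a_nat, b_nat, g_nat = coeffs(b_S_nat)
a_sub, b_sub, g_sub = coeffs(b_S_33)

# In the first difference the As-As weight cancels exactly, so only the
# As-S and S-S partial correlation functions contribute, as stated above.
print("delta(As-As) =", a_nat - a_sub)   # 0.0
print("delta(As-S)  =", b_nat - b_sub)
print("delta(S-S)   =", g_nat - g_sub)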
The Fourier transform, ∆T(r), of this first difference is shown in fig. 6(a). It can be seen that the first (As-S) peak has the same shape as in the total T(r) (fig. 3), as expected, but that the second peak is rather different, a pronounced shoulder having appeared on the high-r side. This second peak in the difference spectrum has been fitted with two Gaussian functions (rather than peak-shape functions for ease of fitting), having peak positions at 3.47 and 3.87Å. The dominant contribution to the second peak in the difference function at 3.47 Å will be due to sulphur-sulphur second-neighbour correlations, defining the bond angle subtended at the arsenic atoms, as well as presumably due to non-bonded As-S and S-S correlations with the same range of separations, as found in the orpiment crystal structure 13 .
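The Gaussian decomposition described here is a routine least-squares fit; the sketch below is a generic example of such a fit in Python (the data array and starting values are synthetic stand-ins, not the measured ∆T(r)).

# Generic sketch of fitting two Gaussians to the second peak of a difference
# function such as dT(r).  The data and initial guesses are synthetic stand-ins.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(r, A1, r1, s1, A2, r2, s2):
    return (A1 * np.exp(-0.5 * ((r - r1) / s1) ** 2)
            + A2 * np.exp(-0.5 * ((r - r2) / s2) ** 2))

r = np.linspace(3.0, 4.4, 200)                              # second-peak region (Angstrom)
dT = two_gaussians(r, 1.0, 3.47, 0.12, 0.4, 3.87, 0.15)     # synthetic stand-in "data"
dT += np.random.default_rng(0).normal(0.0, 0.01, r.size)    # mock noise

p0 = [1.0, 3.5, 0.1, 0.3, 3.9, 0.1]                         # initial guesses
popt, pcov = curve_fit(two_gaussians, r, dT, p0=p0)
print("fitted peak positions:", round(popt[1], 2), round(popt[4], 2))   # ~3.47 and ~3.87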
Another first difference can be taken of the i(Q) (or T(r)) functions for the natural-abundance and isotopically-substituted samples that instead involves As-As correlations. Modified functions can be defined as
i m (Q) = i(Q)/γ and i m *(Q) = i*(Q)/γ*,   (8)
where the asterisk superscript denotes, once more, quantities relating to the isotopically-substituted sample, and the coefficients α, β and γ are as in eqn. (7). Taking the first difference gives
∆i m (Q) = i m (Q) − i m *(Q) = α(1/γ − 1/γ*) i AsAs (Q) + (β/γ − β*/γ*) i AsS (Q),   (9)
in which the S-S contribution cancels. The Fourier transform, ∆T m (r), of the difference function ∆i m (Q) is shown in fig. 6(b). As for the other difference function, ∆T(r), shown in fig. 6(a), the first peak is due to As-S correlations, but the shoulder at r ∼2.5 Å on the high-r side of the base of the first peak in ∆T m (r) is more pronounced than in fig. 6(a), lending support to the assertion made earlier that it is due to As-As 'wrong' bonds. The second peak in ∆T m (r) is at almost exactly the same position as that in ∆T(r), but the pronounced shoulder on the high-r side in ∆T(r), evident in fig. 6(a), is missing in ∆T m (r), leaving only a small peak separated from the main second peak. This second peak in ∆T m (r) can be fitted by three Gaussian peaks, as shown in fig. 6(b), positioned at 3.16, 3.50 and 3.98 Å.
However, the position of the component peak at highest r is somewhat imprecise; variations in the upper cut-off for fitting produced variations in the peak position of ∆r ~±0.1 Å. The small first peak at r ~3.16 Å can possibly be ascribed to As-As 'interlayer' non-bonded correlations; such atomic correlations occur in orpiment at 3.19 Å 13 . The main contribution at 3.50 Å is practically the same as for ∆T(r) (3.47 Å), and this similarity reflects the fact that, at least in the orpiment crystal structure, the average bond angles subtended at sulphur and at arsenic atoms are very similar (97.5° and 99°, respectively), and hence the average second-neighbour As-As and S-S distances should be almost indistinguishable.
It is a little disappointing that isotopic substitution has not been successful in differentiating all the various second-neighbour contributions to the second peak in T(r) for glassy As 2 S 3 and hence cannot be used to determine separately the bond angles subtended at As and S atoms. However, the fact that the pronounced shoulder observed on the high-r side of the second peak in ∆T(r) is revealed as a small separate peak in ∆T m (r), means that this feature, also responsible for the high-r asymmetry of the second peak in T(r) (e.g. see fig. 4), is probably due primarily to non-bonded As-S correlations, since it appears in both difference functions.
We turn now to an examination of the second main peak in T(r) for the As 2 S 3 I 1.65 glass (see fig. 5).
Comparison with fig. 3(a), showing T(r) for the natural-abundance As 2 S 3 glass sample, shows that the peak maximum has shifted to r ~3.65 Å and that the high-r side of the peak has become markedly asymmetric.
Similar behaviour was found by Kameda et al 3 in a neutron-diffraction study of a series of (As 2 S 3 ) 1-x (AsI 3 ) x glasses with 0<x<0.65, who found evidence for the growth of an additional peak at 3.98 Å with increasing iodine content. The I-I second-neighbour distance in crystalline AsI 3 is r=3.959 Å [22] and so it is tempting to associate the high-r asymmetry of the second peak in T(r) for the As-S-I glass with the formation of discrete AsI 3 molecules dissolved in a glassy As-S matrix. However, Hopkins et al 14 have also ascribed the appearance of correlations at r ~3.9 Å to I-I distances in a twisted chain model where I atoms, bonded to different As atoms, are separated by an As-S-S-As chain. Diffraction data alone are not really capable of distinguishing between these two alternative structural models.
Conclusions
A high-resolution pulsed neutron-diffraction study has been performed to investigate the atomic structure of glassy As 2 S 3 and As 2 S 3 I 1.65 (As 0.3 S 0.45 I 0.25 ). In addition, an isotopic-substitution experiment using the 33 S isotope has been carried out for glassy As 2 S 3 . Even with isotopic substitution, the main pair correlations contributing to the second peak in the radial distribution function for glassy As 2 S 3 could not be resolved, and hence the sulphur and arsenic bond angles could not separately be determined, but they must be very similar in value. However, the high-r asymmetry in the second peak can probably be ascribed, using this method, to the contribution of non-bonded As-S correlations occurring at r ~3.9 Å. Small shoulders to the low- and high-r sides of the base of the first (As-S) peak in the radial distribution function are ascribed to S-S and As-As 'wrong' bonds, respectively, a manifestation of the chemical disorder characteristic of this glassy sample quenched from a high temperature (800 °C). A recent density-functional-based tight-binding molecular-dynamics simulation 24 of a-As 2 S 3 produced results very similar to those reported here. In particular, the small shoulders lying on the low-r and high-r sides of the first As-S peak in the RDF were definitely identified as being due to S-S and As-As bonds respectively, in a model containing such homopolar bonds.
In the As-S-I glass, the first coordination shell has been found to be clearly split into two components, one due to As-S and one due to As-I nearest-neighbour contributions. A marked high-r asymmetry of the second peak in the radial distribution function was found for the I-containing glass compared to pure glassy As 2 S 3 due to the presence of I-I correlations. However, analysis of these diffraction data to give short-range structural information is alone incapable of being used to distinguish between two competing structural models for the incorporation of iodine into glassy arsenic sulphide, namely the formation of discrete AsI 3 molecules dissolved in an As-S glassy matrix, or alternatively the formation of [-S-As(I)-S-] chains with the iodine atoms acting as chain terminators. However, the fact that the intense first sharp diffraction peak (FSDP) characteristic of glassy As 2 S 3 is practically absent in the iodine-containing glass perhaps favours the random-chain model, where there is no cross linking and hence little structural frustration forcing the creation of the medium-range order required to produce the FSDP. (Figure-caption fragments: in both cases the peak-shape function fit to the first peak is shown as the dashed curve; the fit made to the second peak using three Gaussian functions is shown by the dash-dotted curve, with the three individual peaks shown as dashed curves.) | 2019-04-14T02:04:48.928Z | 2004-02-24T00:00:00.000 | {
"year": 2004,
"sha1": "6d8f8d2ee167ec74956f4536925dcc8a785a78b6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "31bf966d52c806fb5a6ad0a7079a3f33e39f4341",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
245352019 | pes2o/s2orc | v3-fos-license | Epigallocatechin Gallate (EGCG), a Green Tea Polyphenol, Reduces Coronavirus Replication in a Mouse Model
The COVID-19 pandemic has resulted in a huge number of deaths from 2020 to 2021; however, effective antiviral drugs against SARS-CoV-2 are currently under development. Recent studies have demonstrated that green tea polyphenols, particularly EGCG, inhibit coronavirus enzymes as well as coronavirus replication in vitro. Herein, we examined the inhibitory effect of green tea polyphenols on coronavirus replication in a mouse model. We used epigallocatechin gallate (EGCG) and green tea polyphenols containing more than 60% catechin (GTP60) and human coronavirus OC43 (HCoV-OC43) as a surrogate for SARS-CoV-2. Scanning electron microscopy analysis results showed that HCoV-OC43 infection resulted in virion particle production in infected cells. EGCG and GTP60 treatment reduced coronavirus protein and virus production in the cells. Finally, EGCG- and GTP60-fed mice exhibited reduced levels of coronavirus RNA in mouse lungs. These results demonstrate that green tea polyphenol treatment is effective in decreasing the level of coronavirus in vivo.
Introduction
The COVID-19 pandemic has resulted in millions of deaths from 2020 to 2021 due to the high mortality of SARS-CoV-2 [1]. Although vaccines for SARS-CoV-2 are now available, new variants of SARS-CoV-2 are continuously emerging, and effective medicines for COVID-19 are under development [2]. Any medicine that can reduce the coronavirus in vivo will undoubtedly be helpful in improving the current COVID-19 conditions. Green tea has been consumed for thousands of years, and many beneficial effects of green tea have been reported [3,4]. Recent studies have demonstrated that green tea and green tea polyphenols inhibit coronavirus proteins and coronavirus replication in vitro [5,6]. Many viruses, including coronaviruses, encode polyproteins, and viral or cellular proteases cleave polyproteins into functional individual proteins [7]. Therefore, virus-encoded proteases are regarded as major targets of antiviral medicines [8]. Coronavirus encodes two viral proteases, papain-like protease and chymotrypsin-like protease (3CL protease), which have more cleavage sites in coronavirus proteins [9]. Therefore, 3CL protease is the primary target of coronavirus drugs, and several reports have demonstrated that green tea polyphenols, including EGCG, inhibit coronavirus 3CL-protease [10][11][12][13]. Coronaviruses use the spike protein for entry into host cells, and the coronavirus spike protein-receptor interaction is known to be the target of green tea polyphenol [14]. Recent reports suggest that EGCG prevents the interaction between coronavirus spike protein and cellular receptors and inhibits the entry of coronavirus into host cells [15]. In addition, EGCG has been reported to inhibit NSP15 endoribonuclease activity in vitro [16]. These results collectively indicate that green tea polyphenols can inhibit coronavirus proteins.
Because green tea is a very popular beverage, there have been attempts to evaluate the correlation between average green tea consumption and COVID-19 morbidity/mortality, and countries with higher rates of green tea consumption showed less morbidity/mortality than those with lower green tea consumption [17]. Recently, small clinical studies have been performed to study the efficacy of green tea consumption in COVID-19 treatment, and the results are promising [18].
Coronaviruses are classified into alpha, beta, gamma, and delta coronaviruses, and only alpha and beta coronaviruses are reported to infect humans [19]. Seven coronavirus strains are known to infect humans: human coronavirus 229E (HCoV-229), HCoV-HKU1, HCoV-NL63, HCoV-OC43, SARS-CoV, MERS-CoV, and SARS-CoV-2 [20]. As SARS-CoV-2 and HCoV-OC43 belong to the beta coronavirus family, we used the HCoV-OC43 virus as a surrogate in this report. We demonstrated that green tea polyphenol extract and EGCG treatment can reduce coronavirus in mice. To the best of our knowledge, this is the first study to investigate the effect of green tea on coronavirus replication in a mouse model.
Mouse Experiment
Male C57BL/6 mice (3-week-old) were obtained from DBL (Seoul, Korea) and housed with wood chip bedding, clean-air rooms with a 12-h light-dark cycle, and a relative humidity of 50%. Mice were infected with 10 µL of HCoV-OC43 virus (10 7 PFU/mL) through intranasal injection [22]. After infection, 30 mg/kg body weight GTP60 (Polyphenon 60) or 10 mg/kg body weight EGCG were administered daily for 2 weeks via regular drinking bottles to avoid stress exposure resulting from repeated injections [21,23,24]. Water bottles were replaced daily. After administration, mice were sacrificed by carbon dioxide (CO 2 ) euthanasia, and samples were collected.
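The dose delivered through a drinking bottle follows from the target dose, the body weight and the daily water intake; the short sketch below (in Python) illustrates this back-calculation, with the body weight and water-intake figures taken as assumed, illustrative values rather than measured ones.

# Back-calculation of the drinking-water concentration needed for a target oral
# dose.  Body weight and daily water intake are assumed, illustrative values.
def water_concentration(dose_mg_per_kg, body_weight_g, water_intake_ml_per_day):
    """Required concentration (mg/mL) in the drinking bottle."""
    daily_dose_mg = dose_mg_per_kg * body_weight_g / 1000.0
    return daily_dose_mg / water_intake_ml_per_day

bw_g = 15.0        # assumed body weight of a young C57BL/6 mouse (g)
intake_ml = 4.0    # assumed daily water intake (mL/day)

print("EGCG  (10 mg/kg):", water_concentration(10, bw_g, intake_ml), "mg/mL")   # ~0.04 mg/mL
print("GTP60 (30 mg/kg):", water_concentration(30, bw_g, intake_ml), "mg/mL")   # ~0.11 mg/mL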
Quantitative RT-PCR and Western Blots
Quantitative RT-PCR was used to measure the level of coronavirus RNA in cells and media, as described previously [6]. Briefly, cells and media were harvested and RNA was extracted using Trizol (Thermo Fisher Scientific) in accordance with the manufacturer's instructions and subjected to RT-PCR using the StepOnePlus Real-Time PCR System (Thermo Fisher Scientific). The HCoV-OC43 N gene was amplified using the forward primer 5′-AGG ATG CCA COCA AAC CTC AG-3′ and reverse primer 5′-TGG GGA ACT GTG GGT CAC TA-3′. Western blotting was used to measure the level of coronavirus protein in the cells, as described previously [6]. Briefly, coronavirus-infected HCT8 cells were harvested and resuspended in cell lysis buffer (150 mM NaCl, 50 mM HEPES (pH 7.5), and 1% NP40) containing a protease inhibitor cocktail (Roche, Basel, Switzerland). Equal amounts of proteins were subjected to Western blotting with anti-HCoV OC43 antibody (Sigma-Aldrich). Images were acquired using the ImageQuant LAS 4000 system (GE Healthcare, Waukesha, WI, USA).
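Relative levels of viral RNA from such runs can be expressed, for example, with a 2^-ΔΔCt calculation against a reference gene; the sketch below illustrates only that arithmetic, and the reference gene and Ct values in it are hypothetical (the normalization actually used is not detailed here).

# Illustration of 2^(-ddCt) relative quantification of viral RNA.  The
# reference gene and all Ct values are hypothetical placeholders.
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref              # normalize sample to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize control the same way
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: treated sample vs. untreated infected control (values are hypothetical)
print(fold_change(ct_target=24.0, ct_ref=18.0,
                  ct_target_ctrl=21.0, ct_ref_ctrl=18.0))   # 0.125, i.e. ~8-fold less viral RNA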
Scanning Electron Microscopy
For SEM imaging, HCT8 cells were grown on sterilized 9 mm cover slips and infected with HCoV-OC43 for 3 days. Cells were fixed with 2.5% glutaraldehyde for 1 h and immediately dehydrated in an ethanol series (20%, 40%, 60%, 80%, 90%, and 100% Et-OH). After dehydration, the cells were air-dried using a vacuum desiccator. Cells were coated with platinum, and images were captured using a Carl Zeiss SEM SUPRA 40 microscope (Carl Zeiss, Oberkochen, Germany).
Statistical Analysis
The results of Western blotting and quantitative RT-PCR were evaluated by a 2-tailed Student's t-test using Excel software, Excel 2016 (Microsoft, Redmond, WA, USA). Statistical significance was set at p < 0.05.
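The same comparison can be reproduced outside Excel; the snippet below runs a two-tailed Student's t-test (equal variances, α = 0.05) in Python on made-up placeholder values.

# Two-tailed Student's t-test as described above (alpha = 0.05).
# The two groups of values are made-up placeholders, not study data.
from scipy import stats

untreated = [1.00, 0.92, 1.10, 1.05, 0.98]
treated = [0.55, 0.61, 0.47, 0.66, 0.58]

t_stat, p_value = stats.ttest_ind(untreated, treated)   # equal-variance, two-sided by default
print(f"t = {t_stat:.2f}, p = {p_value:.4g}, significant: {p_value < 0.05}")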
Coronavirus Infection Results in Cell Surface Alteration
To examine the effect of coronavirus infection on the cell surface, we infected RD and HCT8 cells with the HCoV-OC43 strain. Three days after infection, we used a scanning electron microscope (SEM) to analyze the infected cells. The mock-infected control cells showed a smooth surface, whereas coronavirus infection resulted in coronavirus virion particles on the cell surface (Figure 1A-F). We also analyzed the coronavirus virion particles in the media and found that the size of the coronavirus particles was approximately 100 nm (Figure 1I). Furthermore, we found that coronavirus infection of HCT8 cells produced extended surface projections (Figure 1G,H). These results indicate that coronavirus infection results in virion particle production on the cell surface, as well as cell surface projections.
EGCG and Green Tea Polyphenols Inhibit Coronavirus Replication
To examine the effect of green tea polyphenols on coronavirus replication in mice, we produced the coronavirus in RD and HCT8 cells. As HCoV-OC43 virus produced in HCT8 cells results in efficient infection in mice, we performed the experiments with the coronavirus produced in HCT8 cells. We examined the inhibitory effect of EGCG on coronavirus infection in HCT8 cells and found that EGCG treatment effectively decreased virus production and surface projections in HCT8 cells (Figure 2A). In addition, we analyzed HCoV-OC43 protein expression in HCT8 cells and found that EGCG decreased OC43 protein expression in a dose-dependent manner (Figure 2B). OC43 protein levels were significantly decreased after treatment with 5 µg/mL EGCG (Figure 2C).
Next, we examined the effects of green tea polyphenols on coronavirus replication. We used green tea polyphenols containing more than 60% catechin (GTP60), and SEM analysis demonstrated that GTP60 treatment decreased coronavirus-induced virus particle production and surface projections in HCT8 cells (Figure 3A) as well as coronavirus replication (Figure 3B). We found that treatment with more than 15 µg/mL GTP60 efficiently decreased HCoV-OC43 replication (Figure 3C).
We also confirmed the reduction in coronavirus replication by examining coronavirus infectivity upon EGCG or GTP60 treatment. RD cells were infected with HCoV-OC43 and treated with EGCG or GTP60. After the media change, the infected cells were incubated for 72 h, and the conditioned media was used to infect uninfected cells (Figure 4A). Coronavirus infectivity was visualized by cytotoxicity, and the conditioned media from EGCG- or GTP60-treated cells showed a reduced level of cytotoxicity (Figure 4B). These results indicate that EGCG or GTP60 treatment decreased coronavirus replication and infectivity.
EGCG and Green Tea Polyphenols Reduce Coronavirus Replication in Mouse
After we showed the inhibitory effect of EGCG and green tea polyphenols on coronavirus replication in vitro, we performed a mouse experiment to examine their inhibitory effect in vivo. Mice were infected intranasally with HCoV-OC43 virus produced in HCT8 cells, and coronavirus RNA was evaluated by quantitative RT-PCR. We found that coronavirus RNA was readily detected in the mouse lung after two weeks (Figure 5A,B). However, we could not find any significant weight difference between the uninfected and infected groups (data not shown). To evaluate the effect of EGCG and GTP60, ten mice in each group were left untreated or fed EGCG (10 mg/kg) or GTP60 (30 mg/kg) daily for 2 weeks (Figure 5A). After sacrificing the mice, we examined the level of coronavirus in the lungs by quantitative RT-PCR. Compared with untreated mice, EGCG- or GTP60-fed mice showed reduced levels of coronavirus RNA in the lungs (Figure 5C). In addition, we measured the mouse weight in each group, and EGCG or GTP60 treatment did not result in a significant change in mouse weight, suggesting that EGCG or GTP60 treatment did not induce toxicity in mice (Figure 5D). These results collectively indicate that EGCG and GTP60 treatments are effective in inhibiting coronavirus replication in vivo.
Figure 5(C,D) caption: Ten mice in each group were used for HCoV-OC43 infection and treated with the indicated EGCG or GTP60 daily. After 2 weeks, the expression of virus RNA was evaluated through quantitative RT-PCR. The graph shows mean and standard error. Untreated vs. treatment, *: p < 0.05, **: p < 0.01 (n = 10). (D) After mouse sacrifice, the individual mouse weight was measured, and the data was depicted as a graph. Untreated vs. treatment, NS: not significant.
Discussion
Mounting evidence indicates that green tea polyphenols such as EGCG inhibit coronavirus replication in vitro; however, experiments using mouse models have not yet been performed [5]. To the best of our knowledge, this report is the first to show that green tea polyphenols are effective in inhibiting coronavirus replication in vivo.
In this experiment, we used two types of compounds as a source of green tea polyphenols. One was EGCG, a major green tea polyphenol, and the other was a green tea extract from lab reagent suppliers (Sigma-Aldrich). Because the composition of green tea extract can be diverse based on the extraction methods, we used a standard green tea extract. Therefore, a similar experiment can be repeated by other researchers. In addition, we used the HCoV-OC43 virus for mouse experiments as a surrogate for SARS-CoV-2. Due to strict regulations, it is difficult to perform experiments with SARS-CoV-2. However, both SARS-CoV-2 and HCoV-OC43 are beta coronaviruses, and HCoV-OC43 is a good alternative for SARS-CoV-2 [20]. Further animal experiments will be required to examine the effect of green tea on SARS-CoV-2 replication.
Initially, we attempted to infect mice with HCoV-OC43 virus, which was produced in human RD cells; however, successful infection was not obtained with repeated trials.
For this reason, we tested several other cell lines and found that the human HCT8 cell line was suitable for the production of HCoV-OC43 virus for infecting mice. Although we detected the replication of HCoV-OC43 in mouse lungs, we did not observe any mouse death due to coronavirus infection, and we did not observe weight loss due to infection (data not shown). The effect of HCoV-OC43 infection on mice was mild, and we did not observe any other external differences between the uninfected and infected groups. This is probably due to the pathological differences between humans and mice, and it is a limitation of animal experiments. To validate the effect of EGCG or green tea on coronavirus, further human clinical studies should be conducted.
In this experiment, we estimated the mouse intake of EGCG and GTP60 from daily water consumption and reagent concentration [21]. As the intake of EGCG or GTP60 was reasonable, a person weighing 60 kg could obtain an equivalent dose (10 mg/kg) by consuming 600 mg of EGCG. Because green tea has been consumed for thousands of years, its safety profile is well established. Therefore, green tea could be employed to reduce coronavirus in infected patients if its efficacy against coronavirus is thoroughly proven.
There are many reports supporting the efficacy of green tea against coronavirus. Green tea polyphenols, including EGCG, have been reported to inhibit several coronavirus proteins [5]. Moreover, green tea polyphenols have been shown to inhibit coronavirus replication, including SARS-CoV-2 [6,16]. In addition, a preliminary epidemiological study and small-scale clinical study suggest that the consumption of green tea can be beneficial for patients with COVID-19 [17,18]. Further preclinical and clinical trials should be conducted to clarify the efficacy of green tea against coronavirus disease. | 2021-12-21T21:03:32.140Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "4d0e598654766ff22b770dabd5bba62800b744d4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "4d0e598654766ff22b770dabd5bba62800b744d4",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
230623237 | pes2o/s2orc | v3-fos-license | MR CHOLANGIO-PANCREATICOGRAPHY AS A TOOL FOR EVALUATION OF PATIENTS WITH PANCREATICOBILIARY DISEASES
Ashraf S. Nadaf, Nanjaraj C.P., Rajendra Kumar N.L., Shashi Kumar M.R. and Pradeep Kumar C.N.
Manuscript History: Received: 20 September 2020; Final Accepted: 24 October 2020; Published: November 2020
Patients with biliary obstruction present with abnormal liver function & symptoms such as jaundice, pale-colored stools, dark urine, itching, right upper quadrant pain, fever, nausea & vomiting. Initial imaging of patients with suspected acute biliary disease (including gall bladder disease) should be performed primarily with US, which has a sensitivity for pathologic processes of 83% [1].
US is the first line imaging investigation in patients with jaundice or right upper quadrant pain. Although US is noninvasive, quick & inexpensive, it is very operator & patient dependent. It has limitations especially in the evaluation of distal common bile duct where bowel gas, debris, fluid in the duodenum & obesity can degrade the image quality [2] . Although CT is not the best technique for imaging choledocholithiasis, it is frequently performed for the assessment of jaundice. Widely variable sensitivities have been reported, ranging from 20 to 78%. CT also has its fair share of limitations, especially in demonstrating important pathology, biliary stones. CT has a sensitivity of only 90% for detecting biliary stones [3,4] . Stones having high cholesterol content may be missed as their attenuation resembles fluid; as a result, they are difficult to separate from bile. Mixed stones also may be difficult to detect on CT as they present as soft tissue density; this soft tissue density may merge with the pancreatic parenchyma thereby decreasing the sensitivity of CT.
Endoscopic retrograde cholangiopancreatography (ERCP) is currently the "gold standard" for the diagnosis of biliary obstruction. It is one of the several invasive direct cholangiography techniques. However, it is an imperfect diagnostic tool & other procedures may be more appropriate gold standards for diagnosis in the future [5] . ERCP is a very operator dependent & invasive procedure & it is associated with 1 -7% related morbidity & 0.2 -1% mortality [6] .
Neoplasms of the bile & pancreatic ducts present major challenge both for diagnosis & treatment. These tumors may arise primarily from the ducts or may involve the pancreatico-biliary tree secondarily by extension from metastatic tumors of the liver, gall bladder, pancreas or adjacent lymph nodes. Before definite therapy, knowledge of the level of obstruction & its cause is essential [7] . Magnetic Resonance Cholangiopancreatography(MRCP) is a noninvasive diagnostic technique that was developed for the visualization of the biliary & pancreatic ducts. Its use was first reported in 1991, & since then the method has evolved along with the advances in MRI hardware & imaging sequences [8] . MRCP is an alternative to diagnostic ERCP for imaging the biliary tree & investigating biliary obstruction. MRCP does not expose the patient to the risks associated with ERCP or PTC. These can occur in up to 5% of ERCP procedures [9] . In addition, there is no use of ionizing radiation or iodinated contrast agents. It has, therefore, become the investigation of choice for many conditions when evaluating pancreatico-biliary ductal disease.
Invasive cholangiography remains the investigation of choice when intervention is required. MRCP is particularly useful in patients with complete biliary obstruction after biliary-enteric anastomosis, where ERCP is frequently not feasible, or in patients for whom ERCP or PTC has failed or unsuitable [10] .
MRI plays a vital role in diagnosing may conditions of the pancreatico-biliary tract. On MRI, Primary Sclerosing Cholangitis (PSC) shows several characteristic features including bile duct abnormalities & increased enhancement of liver parenchyma. Wall thickening & enhancement of extrahepatic bile duct are also common MRI findings in patients with PSC [11] . Acute pancreatitis can be distinguished from chronic pancreatitis from that due to pancreatic carcinoma [12] . MRI can depict the extent of gall bladder carcinomas & can contribute to the staging of this disease [13] . It is a non-invasive, non-ionizing imaging modality & is unaffected by bowel gas shadow as in US.
With the development of higher magnetic field strength & newer pulse sequences, MRCP with its inherent high contrast resolution, rapidity, multiplanar capability & virtually artifact free display of anatomy & pathology in this region is proving to be examination of choice in patients with pancreatico-biliary diseases [14] .
Numerous studies have compared the multislice HASTE sequences with 2D/3D FSE and Gradient Echo (GRE) SSFP sequences [24][25][26][27]. All the studies concluded that multislice HASTE sequences were significantly superior to other sequences, especially in terms of Signal to Noise (S/N) ratio and Contrast to Noise (C/N) ratio. Many recent studies have also compared RARE and HASTE sequences to identify the optimal MRCP sequence.
As mentioned earlier RARE sequences suffer from the drawback of direct projectional images and provide no source images for post processing. The S/N and C/N ratio of RARE are significantly lower than HASTE multi slice sequences. The image quality of HASTE multi slice images appears superior. However, in the visualization of the ampulla, periampullary region and anomalies of pancreatico-biliary tree, the RARE sequences are superior to HASTE. Half Fourier RARE MRCP is a reliable imaging technique for the evaluation of anatomy and complications associated with a surgically altered pancreatico-biliary duct system [28] .
A study conducted by Morimoto et al shows that single shot RARE provides superior image quality, duct conspicuity with the added advantage of less image artifact and short acquisition time. However, volume averaging can cause bile duct stones to be missed. Therefore, multislice HASTE sequences should still be acquired if choledocholithiasis is suspected. Larger studies are required to assess the diagnostic efficiency of single shot RARE sequences in pancreatic duct and intra hepatic duct disease [29] .
Due to the high signal intensity of bile on HASTE multislice Maximum Intensity Projection (MIP) images, small CBD stones can be missed. The RARE image as well as the source images of the HASTE multislice sequence demonstrate these stones. These studies have revealed no significant difference in image quality between slice thicknesses of 2 mm and 7 mm in the HASTE multislice technique.
In a study done by Soto et al non-breath hold 3D-FSE, breath hold single section half Fourier RARE and breath hold multislice Half Fourier RARE were compared. The 3D MRCP sequences had a similarly high sensitivity and specificity for the detection of choledocholithiasis [30 &31] .
For the evaluation of pancreatic parenchyma Gadolinium enhanced images are acquired. The normal peak enhancement occurs at 30 to 45 secs [32 & 33] . The evaluation of viability of the pancreatic parenchyma succeeds best on immediate post contrast image obtained with novel fast GRE T1W breath hold sequence such as FLASH (Fast Low Angle Shot), Turbo FLASH, Fast Field Echo (FFE) or Fast Multiplanar Spoiled Gradient Recalled imaging (FMPSPGR) [34] .
Recently phased array multicoil systems for volume imaging have been developed [35 & 36] . The use of body phased array coil improves S/N and C/N ratio as compared to imaging done only with body coil [37 & 38] . Combination of phased array coil, torso phased array coil & rapid sequences enables detection of 1 mm ducts [39] . With phased array coil, abdominal wall motion & the respiratory artifacts are reduced with the wrapped array coil. Disadvantages are the incomplete coverage of the abdomen, the inhomogeneous signal intensity & the expenses of the additional system.
The limitation of MRCP is that ascites or fluid collection may obscure ductal anatomy, which however may be partly overcome by use of multi-oblique method [40] .
MRI is increasingly being used to evaluate pancreatic & biliary ductal systems & even the bowel. The potential of MR imaging to provide functional & anatomic information is intriguing, & new techniques, including diffusion & perfusion weighted imaging, are being evaluated [41].
Anatomy Of Biliary System
In the 4th week of human gestation, a hepatic diverticulum(figure 1) develops from the ventral foregut that eventually will become the bilobed liver and gall bladder, and a solid stalk connects the developing liver to the descending duodenum. By 3 months of gestation, the entire biliary system and gall bladder have canalized to form a continuous lumen. From hepatocytes, biliary canaliculi form biliary ductules, which in turn unite to form segmental bile ducts.
The typical biliary anatomy (figure-2a) consists of anterior and posterior segmental right hepatic ducts that fuse to form the main right hepatic duct. A variable number of segmental left hepatic ducts likewise join to form the main left hepatic duct.
The main right and left hepatic ducts typically converge approximately 1 cm from the liver margin to form the common hepatic duct. The transition from hepatic duct to common bile duct (CBD) occurs at the site of cystic duct insertion, which is typically midway between the convergence of the right and left hepatic ducts and the retroduodenal common bile duct. The common bile duct consists of supra-duodenal, pancreatic and intra-duodenal segments [42][43][44]. The terminal portion of the CBD is joined by the pancreatic duct. The sphincter of Oddi is a muscle that typically encircles the terminal portion of the biliary and pancreatic ducts and their common channel (figure 2b) [45][46][47]. There remains some debate as to whether the sphincter of Oddi is a single continuous structure or consists of two or three separate structures.
Figure 2(a) caption: Anterior (RAD) and posterior (RPD) segmental right hepatic ducts join to form the main right hepatic duct, which may vary in length. The right (R) and left (L) main hepatic ducts become extrahepatic proximal to their confluence in the common hepatic duct (CHD), which joins with the cystic duct to form the common bile duct (CBD). Biliary and pancreatic duct (PD) flow is regulated by the sphincter of Oddi.
Figure 2(b) caption: Diagram of the Sphincter of Oddi.
There is great variability in the anatomy of the hepatic biliary system (Figure 4), gall bladder, and pancreatic ducts. In approximately 30% of the general population, two segmental ducts drain the right hepatic lobe and separately join with the left hepatic duct, cystic duct, or common bile duct [43]. Rarely, the cystic duct is absent or duplicated. The length and course of the cystic duct are frequently anomalous, with the clinically most important anomaly involving the opening of the cystic duct into the right hepatic duct. There is less variability in the common bile duct, except in size [42]. In 5 to 15% of the population, the common bile duct and main pancreatic duct enter the duodenum separately [47,48].
Figure 4 caption (partial): ... common hepatic duct (CHD) and cystic duct respectively (C, D, E). Type 4 - Right hepatic duct (RHD) drains into cystic duct (F). Types 5 (A, B) - Right accessory duct drains into CHD or RHD (G, H). Type 6 - Segments II and III drain individually into RHD or CHD (I).
The enzymes from the pancreas drain into the small intestine (duodenum) through the ampulla of Vater. Intrahepatic biliary radicals join to form right and left hepatic ducts which later on form common hepatic duct. Cystic duct joins with common hepatic duct to form common bile duct (CBD). CBD in combination with main pancreatic duct opens at ampulla of Vater which is situated on medial side of 2 nd part of duodenum. The ampulla of Vater drains the liquids made by the liver called bile, which is initially stored in the gallbladder and then secreted via the common bile duct through the ampulla into the duodenum 9 .
Pancreatico-Biliary Diseases
Evaluation of suspected biliary tract disease is a common radiology problem. Advances in CT, US and MRI over the past decade have greatly improved our ability to evaluate the biliary tract. If obstruction is determined to be present, it is necessary for imaging to define the level of obstruction and if possible the cause of obstruction [48] .
Because of the strong correlation between the presence of dilated biliary duct system and presence of obstruction, CT, MR and US imaging can accurately predict presence or absence of biliary obstruction. When the intrahepatic duct size at CT or US exceeds approximately 2 mm in diameter & duct visualization becomes confluent rather than scattered, an abnormal biliary tree is present & one should consider the presence of biliary obstruction. Dilated intrahepatic ducts should be diagnosed when the intrahepatic bile ducts exceed 40% of the adjacent intrahepatic portal vein [48] .
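These two rules of thumb can be stated compactly; the sketch below (in Python) simply encodes the thresholds quoted above and is an illustration of the stated criteria, not a validated diagnostic tool.

# Illustrative check of the intrahepatic duct-dilatation thresholds quoted above.
def intrahepatic_ducts_dilated(duct_mm, adjacent_portal_vein_mm, confluent_visualization):
    exceeds_absolute = duct_mm > 2.0 and confluent_visualization   # >~2 mm with confluent ducts
    exceeds_relative = duct_mm > 0.40 * adjacent_portal_vein_mm    # >40% of adjacent portal vein
    return exceeds_absolute or exceeds_relative

# Example: a 3 mm duct next to a 6 mm portal vein branch, seen as confluent tubes
print(intrahepatic_ducts_dilated(3.0, 6.0, True))   # True -> consider biliary obstruction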
The CHD is nearly always visualized coursing through the porta hepatis at US or CT. The upper limit of its diameter has been a source of controversy for decades owing to the variation among individuals. Generally, at US, a 6-7 mm diameter of the CHD or CBD has come to be the most commonly used upper limit, whereas in CT it is more common to accept 8 to 10 mm for the CHD or CBD [48]. The diameter of the common hepatic duct is often larger in its mid and distal portions, where it is more easily identified and measured at CT than proximally, whereas US most often visualizes the extra-hepatic ducts optimally. In addition, US typically measures the internal luminal diameter, whereas CT more readily identifies the fat around the duct & its measurements include the ductal wall too.
Reasons for a dilated extrahepatic ductal system without obstruction have been a source of interest to imagers for decades. When biliary obstruction has been long standing, the elastic fibres in the wall of the duct may be permanently stretched & they will not return to its normal diameter despite relief from the obstruction [49] .
There is also some controversy over whether the EHBD increases its diameter in response to cholecystectomy and with aging. A study done by Bacher et al has revealed an age-dependent change in the diameter of the EHBD. The study suggests that the upper limit of the duct in elderly persons be set at 8.5 mm. The study states the mean diameter for people < 50 years was 3.128 mm +/- 0.862 mm & for patients > 50 years was 4.19 mm +/- 1.15 mm. They found that the duct dilated by approximately 0.04 mm per year on US, consistent with the roughly 1 mm difference in mean diameter between the two age groups [50].
When obstruction is present the biliary tree will dilate beyond the normal range, but it is important to be aware that there is a time lag from the onset of the acute obstruction & the dilatation. Animal studies have shown that the extrahepatic duct system dilates before the intrahepatic ducts typically requiring 2-3 days of obstruction to dilate [50] .
Intrahepatic ductal system requires obstruction of 1 week duration for dilatation to occur. Thus, in early obstruction, the lack of biliary dilatation at imaging does not preclude the presence of biliary obstruction.
Determination of the level of obstruction can be an important indicator for next step in therapeutic or diagnostic intervention with proximal obstructions being better approached by a percutaneous approach and more distal processes evaluated or treated by an endoscopic approach. In addition, the level of obstruction of biliary tract is key factor in developing differential diagnosis.
Key to achieving the proper diagnosis with the bile duct is evaluating the zone of transition from dilated to non-dilated or non-visualized duct. Special attention should be paid to this zone of transition regardless of the imaging modality being used. With a borderline or equivocally increased extrahepatic duct diameter, re-evaluation of duct size following a fatty meal can help to differentiate obstructed from non-obstructed ducts [49]. Causes of biliary obstruction listed here include duodenal carcinoma (Ca) and cholangiocarcinoma, among others. Abdominal US is the initial examination of choice in imaging of the bile ducts, particularly in patients with jaundice (Yi Tang et al [51]).
US plays and indispensable role in evaluation and follow-up of infants and children with jaundice and in differentiation of obstructive and non-obstructive jaundice without being dependent on ionizing radiation. US provides important information about liver size and texture, the size and clarity of bile ducts [52] .
In patients in whom visualization of the most distal portion of the CBD is difficult on conventional US, Tissue Harmonic Imaging (THI) shows a larger length of the duct with relative ease. The improvement in contrast and the reduction of side lobe artifacts with THI enhance visualization of the biliary ducts [52].
Painless jaundice is a hallmark of malignant biliary obstruction. However, there are no absolute distinguishing signs between benign and malignant obstruction. About 80 to 90% of patients with carcinoma of the head of the pancreas present with jaundice. Patients with ampullary carcinoma and cholangiocarcinoma almost always present with jaundice. Jaundice is a late symptom of carcinoma of the gallbladder [53].
Both USG and CT are accurate at detecting biliary dilatation and determining the presence of extra hepatic biliary dilatation &obstruction, but they are limited in their ability to define the cause or the exact level of obstruction. US is highly operator dependent and distal biliary lesions are often obscured by overlying bowel gas. Use of MRCP and addition of conventional T1W pre-and post-contrast MR imaging when malignant obstruction is suspected, helps in better evaluation of possible masses, lymph node enlargement or hepatic masses if required. In addition, MR angiography can be performed at the same setting to define vascular anatomy & assess for vascular invasion. The combination of MRI, MRCP & MRA provides the option of complete diagnostic evaluation of pancreatico-biliary neoplasms without the need for invasive or multiple imaging procedures [54] .
Kinematic MRCP can be used to define the necessity of biliary intervention in patients with biliary dilatation [40] . Segmental ducts are difficult to visualize with MRCP because of their small caliber & limited spatial resolution & S/N ratio achievable with standard MR pulse sequence. Visualization of the normal (non-distensible) biliary system is necessary for the evaluation of donor candidates for liver related transplantation because of prevalence of variant biliary anatomy. MRCP is often used in pre-operative evaluation of the patients. Intravenous morphine administered prior to MRCP can improve quality by causing Sphincter of Oddi to contract, which increases pressure in & distension of the biliary & pancreatic ducts [55] .
Kim et al showed that addition of non-enhanced T1 & heavily T2 weighted sequences increases the diagnostic accuracy in differentiation of benign from malignant causes of biliary dilatation. Further addition of gadolinium enhanced T1W dynamic images did not significantly improve the diagnostic accuracy for differentiating causes of biliary dilatation but increased the level of confidence in 17% to 24% of the cases as compared with that for the combination of MRCP & T1 & T2W images especially in cases of biliary dilatation due to pancreatic carcinoma [56] .
Soto et al showed that in patients with biliary obstruction caused by malignant lesions. MRCP demonstrates the site of the obstruction & the severity of bile duct dilatation. Additional cross-sectional MRI images obtained with conventional sequences are necessary to determine the organ of tumor origin & to define the margins of the malignant lesion. Patients with biliary enteric anastomosis also benefit from undergoing MRCP as the primary diagnostic modality, because ERCP may be technically difficult to perform due to the altered anatomy & long afferent loops produced by Billroth II procedures. In some of these patients, the information provided by MRCP is sufficient to help plan therapeutic intervention [57] .
Cholelithiasis
Obesity, increasing age, hyper-alimentation, rapid weight reduction, ileal disease or resection and ethnicity are risk factors for developing gallstones [42]. 70-80% of gallstones in western countries are cholesterol stones and the remaining 20-30% are pigment stones, which occur most frequently in patients with chronic hemolytic disorders. 80% of patients with gallstones are asymptomatic and 20% have biliary colic. 1 to 2% of patients with asymptomatic gallstones develop biliary symptoms. A 1 to 2% per year risk of developing acute cholecystitis and other complications is seen in these patients [42].
US has a sensitivity of 90% for detecting gallstones; however, size and number cannot be accurately determined sonographically. Diagnosis of cholelithiasis is most confidently made when a 5-mm echogenic focus meets all 3 major criteria. Stones <2 to 3 mm in size are difficult to visualize on USG. Small stones are usually multiple, which assists their detection.
The characteristic findings of gallstones at US are a highly reflective echo from the anterior surface of the gallstone, mobility of the gallstone on repositioning the patient (typically in a decubitus position), and marked posterior acoustic shadowing. When the gallbladder is filled with stones, the resultant appearance is termed the wall-echo-shadow (WES) sign. The WES sign must be differentiated from a partially collapsed duodenal bulb, porcelain gallbladder, emphysematous or xanthogranulomatous cholecystitis or calcified hepatic artery aneurysm.
On MRI, gallstones produce little or no signal because of the restricted motion of water & cholesterol molecules in the crystalline lattice of the stone. Gallstones & CBD stones are best seen on T2W images that produce bright bile. MRI is superior to CT in detecting small stones because of inherent high contrast between low signal intensity stones & high signal intensity bile [42] .
Most stones produce no signal on MR images & appear as signal void area i.e., hypointense areas in a bright gallbladder on HASTE, RARE & FISP sequences.
Recent study by Tsai et al based on the differential signal intensity of gall stones states that 3D fast spoiled gradient echo T1W imaging was able to diagnose the composition of gall stones. Adding 3D fast spoiled gradient echo imaging to the single shot fast spin echo T2W sequence can further improve the detection rate of gall stones and gall stones with differential compositions. Because the 3D fast spoiled gradient-echo images were acquired with fat saturation and the in-phase fast spoiled gradient-echo images were not, the higher intensity of gallstone on 3D fast spoiled gradient-echo images may be caused by fat saturation itself, which increased the apparent brightness of water-bearing stones [58] .
In addition to recognizing a gallstone as a filling defect on T2W single-shot fast spin-echo imaging, MR imaging can also help to distinguish between different types of gallstones, such as cholesterol stones and pigment stones. Cholesterol stones appear hypointense on T1W images, while pigment stones usually have increased signal intensity on T1W images. The 3D fast spoiled gradient-echo T1W sequence is as good as T2W single-shot fast spin-echo imaging in the diagnosis of gallstones & can even be better when applied to bile duct stones [58].
Choledocholithiasis
Passage of gallstones into the CBD occurs in 10 to 15% of patients with cholelithiasis. The majority of bile duct stones are cholesterol or mixed stones formed in the gallbladder. Primary calculi arising de novo in the ducts are pigment stones, developing in patients with: (1) chronic hemolytic disease; (2) hepato-biliary parasitism; (3) congenital anomalies of the bile ducts; or (4) dilated, sclerosed or strictured ducts [59,60]. CBD stones may lead to acute biliary obstruction, cholangitis and acute pancreatitis [49]. Currently available modalities for diagnosis are US, CT, MRCP, EUS and ERCP [50]. Choledocholithiasis is one of the most common biliary tract diseases, occurring in 8-20% of patients undergoing cholecystectomy and in 2-4% of patients after cholecystectomy.
Bile duct stones can be discovered incidentally during the evaluation of gallbladder stones, with an estimated prevalence of 5% to 12%. Common clinical symptoms and signs include pain, fever, and jaundice. Biliary pain confined to the epigastrium or right upper quadrant of the abdomen is the most common presentation.
Although advanced technologies have become more widely available, a clinically oriented approach remains paramount. Atypical as well as typical clinical symptoms should be recognized.
Newer techniques of biliary imaging have simplified the diagnosis of bile duct stones. Noninvasive methods have the lowest risk, whereas invasive techniques have the greater accuracy.
US: Stones classically appear as echogenic foci within the fluid-filled duct lumen. A stone may appear as an echogenic curved line, depicting only the anterior curved stone margin, with markedly diminished echogenicity distally. When stones are small or not within the focal zone of the transducer, they may not exhibit distal acoustic shadowing [49]. Caution must be exercised to avoid a mistaken diagnosis of a duct stone, which can arise from a variety of causes. Intraluminal masses such as blood clot or papillary tumors can simulate an echogenic mass, but without distal shadowing. Adjacent calcified lymph nodes can also resemble calculi. Duodenal & colonic gas can make it difficult to visualize portions of the distal duct [49].
Choledocholithiasis is suggested by the presence of a dilated CBD on US or by elevated liver function tests, specifically an elevated total bilirubin or alkaline phosphatase level [49] .
US alone lacks the high sensitivity for directly visualizing the stones within the CBD, but the combination of a CBD larger than 10 mm on US & hyperbilirubinemia has a positive predictive value > 90% for choledocholithiasis [49] .
Although US & CT are often used in the initial evaluation of patients with suspected choledocholithiasis, neither has a high sensitivity for detection of CBD stones. The sensitivity of US ranges from 18-70% for CBD stones. This variability in sensitivity is in part due to the operator-dependent nature of US & obscuration of the bile duct by bowel gas. The sensitivity of CT for detection ranges from 76 to 87%.
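For reference in interpreting the sensitivity, specificity, positive predictive value (PPV) and accuracy figures quoted throughout this review, these follow the standard definitions from a 2x2 contingency table of true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN); the worked numbers below are illustrative only and are not taken from any of the cited studies:

Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP); PPV = TP / (TP + FP); NPV = TN / (TN + FN); Accuracy = (TP + TN) / (TP + TN + FP + FN).

For example, if 30 patients with surgically confirmed CBD stones and 70 without were examined, and a test correctly identified 27 of the 30 stones while falsely calling 3 of the 70 stone-free patients positive, its sensitivity would be 27/30 = 90%, specificity 67/70 = 96%, PPV 27/30 = 90%, and accuracy (27 + 67)/100 = 94%.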
Magnetic resonance cholangiopancreatography (MRCP) and endoscopic ultrasound (EUS) are less invasive than endoscopic retrograde cholangiography (ERC) but can detect bile duct stones with comparable accuracy. At MRCP, CBD stones are seen as low signal intensity filling defects within the high signal intensity bile. On heavily T2W fast spin-echo images bile has relatively high signal intensity; thus, common bile duct stones as small as 2-3 mm can be detected with MRCP. MRCP is accurate not only in the detection of common bile duct stones but also in determining their size, number and exact location. MRCP is also able to detect stones in dilated ducts & in nondilated ducts. Axial images are generally more useful in the diagnosis of choledocholithiasis because they are degraded by motion artifacts to a lesser extent. It is crucial that all source images or reformatted images be reviewed for choledocholithiasis, since 3D images reconstructed with MIP (maximum intensity projection) may obscure common bile duct stones. Pneumobilia, an adjacent vessel compressing the duct & en face visualization of the cystic duct are great mimickers of stones [49].
MRCP assists in diagnosis of complex manifestations with bile duct calculi such as Mirizzi syndrome. Mirizzi syndrome represents compression of common bile duct by a calculus impacted in the cystic duct. Multi planar capability of MR cholangiography allows identification of both the obstructing calculus and the long cystic duct that parallels the bile duct and predisposes the patient to Mirizzi syndrome.
Kondo et al found that observer performance with volume rendered MRCP was better than that with MIP & thick section MRCP for the diagnosis of choledocholithiasis. Volume rendering may be an efficient technique for the reconstruction of MRCP [31] .
Early studies focusing on the role of MRCP in the detection of CBD stones yielded sensitivities ranging from 81 to 92% & specificities ranging from 91 to 100% [50]. Recent technical advances have resulted in improvements in S/N ratio & spatial resolution, which have further enhanced the MRCP diagnosis of choledocholithiasis. Recent studies note sensitivities of up to 100% & specificities of 92 to 100%, matching & in most cases exceeding those of ERCP. The positive predictive value ranges from 96 to 100% [49].
Miller et al report that US is highly sensitive (99%) & accurate (93%) for demonstration of ductal dilatation but is slightly less reliable with regard to the location (60 to 92%) & cause (39 to 71%) of biliary obstruction [62].
Comparative Studies
Park et al found that MRCP is better at detecting stones than US (the reference standard was surgical findings). A RARE sequence was performed with a 1.5 T MRI scanner and body phased-array coil in 35 patients. MRCP had a sensitivity and specificity of 100%, while US had a sensitivity of 80%, specificity of 100% & accuracy of 89% [64].
Sugiyama et al used both US & HASTE MRI with a 1.5 T scanner & body phased-array coil in 97 patients (the reference standard was ERCP with surgery & follow-up) [65].
Varghese, Liddell et al reported that MRCP was better than US in the detection of stones (the reference standard was ERCP, PTC or IOC). They reported a sensitivity of 31% and specificity of 100% for US, while those of MRCP were 91% and 98%, respectively. They used FSE MRI at 1.5 T in 191 patients [67].
Choledochal Cyst
This is an uncommon cause of obstructive jaundice. A choledochal cyst is a cystic dilatation of the extrahepatic bile ducts, with or without dilatation of the intrahepatic ducts, and is a congenital anomaly of the biliary tree. It is 3 to 4 times more common in females, and two thirds of patients become symptomatic before the age of 10 years. The classic clinical triad of pain, jaundice and a palpable right upper quadrant lump is seen in 30-60% of patients presenting in the first decade of life and in approximately 20% of those diagnosed in adulthood. This condition is thought to be related to an abnormal insertion of the CBD into the pancreatic duct, which causes reflux of pancreatic enzymes into the CBD [67].
Classification of Alonso-Lej, modified by Todani et al (figure 5) [68]
Type I choledochal cysts account for 80-90% of bile duct cysts. They are further subdivided into A, B and C subtypes.
Caroli's Disease
Caroli"s disease is the eponymous designation for congenital non-obstructive dilatation of the large intra hepatic bile ducts [69] . This rare and incompletely delineated entity was first described by Caroli et al. in 1958 [70] . Caroli"s disease may be multifocal or may be localized to lobe or segment of liver. Most of cases are associated with congenital hepatic fibrosis and medullary sponge kidneys may occur occasionally [71] . Caroli"s disease results from a bile duct malformation, which involves neonatal occlusion of the hepatic artery, leading to bile duct ischemia, cystic dilatation, and abnormal growth rate of the developing biliary epithelium and its supporting connective tissue [72] . Incomplete resorption of circular plates leads to the formation of multiple primitive bile ducts surrounding portal vein radicles [69] .
The role of imaging in the evaluation of a choledochal cyst is to delineate the anatomy of the cyst, determine the relationship of the cyst to the rest of the intra- and extrahepatic biliary tree, and evaluate associated complications and biliary tree abnormalities. Sonography (US) is useful in assessing the full extent of biliary duct dilatation and in identifying the communication between the cyst and the biliary tree [73]. It is capable of demonstrating the entire intra- as well as extrahepatic biliary tree [74]. It can also demonstrate the presence of calculi, stricture or tumor if present [73]. Surgeons need an exact anatomic map of the pancreatico-biliary ductal union because it is essential that the choledochal cyst be completely resected without pancreatic ductal injury [175].
Choledochal cysts are frequently detected at sonography as an anechoic or hypoechoic cystic lesion in the region of the porta hepatis with communication to the biliary tree. US is the recommended initial imaging study in the newborn infant with persistent jaundice, in whom differential diagnoses such as biliary atresia must be considered. However, sonography cannot reveal an anomalous pancreatico-biliary ductal union, which is generally believed to be the cause of choledochal cyst. In terms of pre-operative evaluation of an anomalous pancreatico-biliary ductal union, ERCP is regarded as the most definitive and reliable diagnostic method. However, ERCP is contraindicated in patients with acute pancreatitis and cholangitis and requires the administration of general anesthesia in children. It should be remembered that choledochal cysts and biliary atresia might occur together in neonates. Nuclear hepatobiliary scans confirm excretion of radiotracer into the choledochal cyst but yield limited anatomic delineation [49].
The most common complication associated with a choledochal cyst is stones in the gallbladder, within the cyst, in the dilated intrahepatic biliary tree, or in the pancreatic duct [179]. The second most common complication is a malignant tumor; common bile duct carcinoma and gallbladder carcinoma are the major malignancies [179,75-77]. The risk of developing cancer seems to be related to bile stasis and contact between the epithelium and bile. A choledochal cyst may be confused with several other cystic lesions, including hepatic cyst, enteric duplication cyst, pancreatic pseudocyst, hepatic artery aneurysm and spontaneous perforation of the common bile duct. These entities can all be differentiated with a careful scanning approach and the use of duplex and color Doppler imaging. An enteric duplication cyst most often has the characteristic "intestinal signature", the "muscular rim" sign, which consists of a brightly echoic inner rim (mucosa) and a hypoechoic outer rim (muscular layer). A hepatic artery aneurysm may be differentiated with Doppler US [176].
On CT, a choledochal cyst appears as a right upper quadrant, fluid-filled structure in contiguity with the extrahepatic bile duct. Coronal imaging is extremely useful in demonstrating the communication between the cyst and the biliary tree; this is achievable on CT using multiplanar reformations with or without cholangiographic contrast agents [73]. CT is considered to be more accurate in demonstrating the intrahepatic biliary tree and the status of the distal part of the common bile duct, which may be obscured by bowel gas on sonograms [78]. However, when a cyst is round and markedly dilated with no evidence of intrahepatic ductal dilatation, exact diagnosis is difficult with both CT and US; in such cases the biliary origin often cannot be determined [74].
MR Cholangiographic technique allows direct imaging of the cyst in multiple planes [79,80] . Coronal imaging reveals a dilated tubular structure that follows the expected course of the CBD and demonstrates the relationship of the cyst with the rest of the biliary tree. The presence of wall thickening, mural nodularity and wall enhancement in a choledochal cyst raises the possibility of tumor.
The diagnosis of choledochal cyst can be confirmed by ERCP. This method can demonstrate the presence of an anomalous pancreatico-biliary duct junction (APBDJ) and clearly outlines the anatomy of the biliary system before surgery [81]. However, ERCP is an invasive procedure. MRCP is a non-invasive alternative to ERCP for evaluating choledochal cysts [82]. Once a choledochal cyst is detected at sonography, MR cholangiography should be performed prior to surgery [73].
Kim et al concluded that MR cholangiography is equivalent or superior to conventional cholangiography in the evaluation of choledochal cysts. The authors compared MR cholangiography with conventional cholangiography in 13 patients with choledochal cysts [82].
Lam et al investigated the use of CT cholangiography versus MR cholangiography in the diagnosis of choledochal cysts in 14 children and had good results with both techniques [79].
Irie et al concluded in a study that MRCP is an important noninvasive diagnostic study for choledochal cysts but that it should not replace ERCP, especially in children. The authors used MRCP in the diagnosis of choledochal cysts in 16 patients [83]. They found that MRCP defined the proximal bile duct better than ERCP but that defects in the distal common bile duct were missed with MRCP in 2 pediatric patients. The anomalous pancreatico-biliary junction (APBJ) was delineated in all 6 adult patients but was missed in 6 of 10 pediatric patients. Nevertheless, MRCP with a non-breath-hold technique has been described as an accurate, non-invasive method of evaluating the anomalous pancreatico-biliary duct in children with choledochal cysts [177].
CBD Strictures
These are a common cause of biliary obstruction and can be benign or malignant. Biliary stricture can be seen with a wide range of non-neoplastic causes. In western countries, iatrogenic stricture is the most common benign biliary stricture and accounts for up to 80% of all benign strictures [84,85]. Cholecystectomy and orthotopic liver transplantation (OLT) are the most common iatrogenic causes of benign biliary stricture. A spectrum of diseases such as chronic pancreatitis, autoimmune cholangitis associated with autoimmune pancreatitis, PSC, recurrent pyogenic cholangitis, HIV cholangiopathy, chemotherapy-induced sclerosing cholangitis, and Mirizzi syndrome can also result in biliary stricture.
Bismuth et al [86] proposed a classification for biliary stricture based on its location (figure 6): Type I strictures are located more than 2 cm distal to the confluence of the left and right hepatic ducts, whereas Type II strictures are seen within 2 cm of the hepatic confluence. Type III strictures affect the confluence, which remains patent. Type IV strictures involve the confluence and interrupt it. Type V strictures involve the hepatic duct in association with a stricture of an aberrant right intrahepatic branch.
This classification helps the surgeon to choose the most appropriate surgical approach because it defines the level in which healthy biliary mucosa is available for repair and anastomosis. Clinically, benign biliary stricture can present with a wide array of manifestations, ranging from being completely asymptomatic to showing overt clinical and laboratory evidence of biliary obstruction.
US is the initial imaging modality of choice for the detection of biliary dilatation. US is highly sensitive for the detection of biliary obstruction and the level of obstruction; however, the accuracy of US for detecting the underlying cause varies widely (30-70%) [87,88]. Again, US is highly operator dependent.
Multidetector CT helps in the detection of biliary dilatation, the underlying cause of biliary obstruction, and complications such as cholangitis and cholangitic abscess. In addition, multiphase contrast-enhanced CT may help in differentiating benign biliary strictures from their malignant counterparts. A malignant stricture is characterized by arterial and venous hyperenhancement, a wall thickness of greater than 1.5 mm, a longer stricture length, and a greater extent of proximal dilatation compared with its benign counterpart [89]. In addition, the presence of lymphadenopathy and of metastases also helps in differentiating malignant from benign biliary strictures.
ERCP has been the gold standard investigation for the evaluation of biliary obstruction. The major advantage of ERCP involves obtaining a tissue diagnosis to differentiate benign from malignant causes.
Unlike ERCP, MRCP offers the advantage of noninvasive imaging without the risk of any procedure-related complications, allows evaluation of the biliary system beyond a tight stricture, and allows assessment of the hepatic parenchyma and other intraabdominal viscera. Additional advantages of MRCP include evaluation of biliary enteric anastomosis and evaluation of biliary system during the immediate postoperative period [90] .
At MRCP or ERCP, typical malignant common bile duct (CBD) strictures manifest as irregular, asymmetric strictures with a shouldered margin, whereas benign strictures tend to have smooth and symmetric borders with tapered margins [91]. Abrupt cut-off of the distal CBD, in contrast to smooth tapering, has traditionally been considered a sign of malignancy. However, some studies have shown that this finding is not reliable for distinguishing between benign and malignant strictures [92]. A review of the features of benign biliary strictures [90] concluded that a wide gamut of conditions can cause benign biliary stricture, some of which can cause significant diagnostic dilemmas. Some of these entities exhibit a specific pattern of biliary involvement and thus have specific imaging manifestations. Intra- and extrahepatic biliary strictures with "beading" and peripheral pruning favor primary sclerosing cholangitis (PSC). IgG4 sclerosing cholangitis predominantly affects elderly men with elevated serum IgG4 levels and presents as hilar or distal CBD strictures. Recurrent pyogenic cholangitis is a disease of Southeast Asia and of immigrants from Southeast Asia; it mainly affects the left lateral and right posterior intrahepatic ducts, resulting in bile lakes and intraductal calculi. Papillary stenosis is unique to AIDS cholangiopathy, the incidence of which decreased drastically after the introduction of highly active antiretroviral therapy (HAART). A study of 50 patients on differentiating malignant from benign common bile duct strictures with multiphasic helical CT [89] concluded that hyperenhancement of the involved CBD during the portal venous phase is the main factor distinguishing malignant from benign CBD strictures.
Carcinoma Gall Bladder
Gallbladder carcinoma is the fifth most common gastrointestinal malignancy and the most common biliary tract malignancy worldwide [127] . Predisposing risk factors include cholelithiasis, chronic biliary infections (Opisthorchis viverrini, Salmonella typhi), primary sclerosing cholangitis, and porcelain gallbladder [127] . The clinical presentation of gallbladder carcinoma is nonspecific and may include abdominal pain, weight loss, fever, and jaundice, any of which can be seen in cholecystitis and other benign gallbladder conditions as well as in other abdominal malignancies.
Although sonography has a relatively high sensitivity for the detection of tumor at advanced stages, it is limited in the diagnosis of early lesions and is unreliable for staging. Therefore, CT and, increasingly, MRI are more widely used for further characterization of potentially malignant gallbladder lesions and for metastatic survey. On CT or MRI, the presence of a large gallbladder mass that nearly fills or replaces the lumen, often directly invading the surrounding liver parenchyma, is highly suggestive of gallbladder carcinoma.
Onoyama et al [128] reported a correct preoperative diagnosis of Ca GB in only 34% of cases, with an incorrect diagnosis being especially common in patients with associated cholelithiasis and those without any advanced changes.
CT has been widely used in the diagnosis of Ca GB for the appearance of the primary tumor (mass replacing the gallbladder, wall thickening, intraluminal polyp), for the extension study, and for staging the tumor [127]. The tumor is usually heterogeneous, containing areas of unequal uptake due to necrosis; enhancement is preferentially peripheral, with necrotic (low-uptake) areas. Dual-phase spiral CT studies can even show early uptake in the arterial phase, either peripheral or heterogeneous, in the latter case simulating a hepatocellular carcinoma. Biliary invasion can occur by direct spread of the lesion along the hepatoduodenal ligament or by compression from infiltrated adenopathies.
Kumaran et al [129] studied 15 patients with double-phase helical CT with arterial and portal venous phases (3-mm collimation slices and reconstruction every 2 mm). Their overall assessment was that helical CT is very useful for determining resectability or non-resectability, with a global accuracy of 0.93.
Ca GB on MR appears as a hypo- or isointense mass or wall thickening relative to the liver on T1 and is usually hyperintense and poorly defined on T2 sequences [127]. In the early phase, the uptake of contrast is heterogeneous and preferentially peripheral and tends to progress slowly in a centripetal manner on dynamic studies, which is characteristic of adenocarcinomas. Assessment of the invasion of neighboring organs and of adenopathic infiltration is facilitated by the combination of T2 sequences with fat suppression and dynamic post-gadolinium T1-weighted images in the arterial and venous phases. Two recent studies evaluated the usefulness of MR and MRCP in the pre-surgical diagnosis of Ca GB [130,131].
Schwartz et al retrospectively studied MR findings in 34 patients with a known diagnosis of Ca GB and compared them with intra-operative observations in 19 of these cases and with the histopathologic diagnosis in 15. MR was able to demonstrate 17 out of 19 cases of hepatic invasion of >2 cm. Schwartz identified four out of the six cases with involvement of the omentum [130]. Similar results were obtained by Tseng et al [131] in 18 patients with Ca GB; MR correctly detected 11 of 12 patients with hepatic invasion, 13 patients with node involvement, 15 of 16 with bile duct involvement and none of the patients with peritoneal involvement. Kim et al [132] added MRA to MR in T1- and T2-weighted sequences and MRCP, which facilitated the diagnosis of vascular infiltration, crucial before attempting curative resection.

Periampullary Carcinoma
Periampullary carcinomas arise within 2 cm of the major papilla in the duodenum and include four different types of malignancies, namely those originating from: (a) the ampulla of Vater itself; (b) the intrapancreatic distal bile duct; (c) the head and uncinate process of the pancreas; and (d) the duodenum.
Their origins are difficult and often impossible to discern based on clinical settings and results of preoperative imaging, as well as on surgical specimens [133] .
Overall survival is highest for patients with ampullary and duodenal cancers, intermediate for patients with bile duct cancers, and lowest for those with pancreatic cancers [133] .
The ampulla of Vater comprises the junction of the biliary and pancreatic ducts and is surrounded by the sphincteric system of Oddi. In 75% of cases, the major duodenal papilla is in the descending duodenum; in these cases, the terminal pancreatic duct is inferior and anterior to the CBD. In 25% of cases, the major duodenal papilla is in the horizontal duodenum; in these cases, the pancreatic duct is positioned vertically and parallel to the left border of the CBD [134]. One study [133] reviewed magnetic resonance (MR) images of pathologically proved periampullary carcinomas (29 ampullary carcinomas, 27 distal common bile duct carcinomas, 21 pancreatic carcinomas, six duodenal carcinomas, and six unclassified carcinomas) in 89 patients. The authors concluded that ampullary carcinoma manifests as a small mass, periductal thickening, or bulging of the duodenal papilla. Pancreatic carcinoma is characterized by a discrete parenchymal mass, which enhances poorly on dynamic gadolinium-enhanced images. Dilatation of side branches of the pancreatic duct is frequently seen in pancreatic carcinoma but not in other periampullary carcinomas. Distal bile duct carcinoma manifests as luminal obliteration and wall thickening or as an intraductal polypoid mass. A dilated proximal bile duct, a non-dilated distal bile duct, and a dilated or non-dilated pancreatic duct may form the three-segment sign. MR cholangiopancreatography and sectional MR imaging are useful in determining the origins of periampullary carcinomas. Pham et al [135], in their study of periampullary carcinoma, concluded that volumetric oblique coronal reformations are a useful non-invasive method to provide diagnostic information about periampullary abnormalities as well as to show secondary features important for local staging and management.
Sugita et al, in their study of periampullary tumors, concluded that MR imaging correctly depicted the location, extension, and origin of the tumors. High-spatial-resolution MR imaging has potential for pre-surgical staging of tumors in this region [136].
MRCP has evolved as an accurate diagnostic modality for the evaluation of pancreatico-biliary diseases; however, there is still a limitation in the evaluation of periampullary disease. This is because of the small but relatively complex anatomy of this region and because the tapered area of the distal biliary and pancreatic ducts contains little or no fluid. Physiologic contraction of the sphincter of Oddi also makes it difficult to evaluate the periampullary area. The combination of MRCP with conventional T1- and T2-weighted MR imaging, including gadolinium-enhanced dynamic MR imaging, is important for the evaluation of periampullary disease in terms of both detection and evaluation of the extent of a periampullary mass. Marked and abrupt dilatation of the distal bile duct or the pancreatic duct in the absence of stone disease or pancreatitis is suggestive of ampullary carcinoma. Pancreatic masses are usually more clearly delineated on gadolinium-enhanced spoiled GRE images than on unenhanced T1- or T2-weighted images [133]. In patients with periampullary carcinomas of bile duct origin, the distal segment of the bile duct below the obstruction is also frequently seen on MRCP images; hence, three segments (the proximal and distal segments of the bile duct, and the main pancreatic duct) are depicted in the periampullary area (the three-segment sign) [133]. Duodenal carcinomas may manifest as polypoid or fungating, ulcerative, or annular constrictive or infiltrative masses and are associated with lymphatic metastases in 22%-71% of cases. Therefore, the ability of MR imaging to depict the mass depends on the size of the tumor and the degree of narrowing of the duodenal lumen [133].
Cholangiocarcinoma
Cholangiocarcinoma is a primary tumor arising from the bile duct epithelium and is the second most common primary hepato-biliary cancer after hepatocellular carcinoma. At histopathological analysis, cholangiocarcinomas are predominantly adenocarcinomas (95% of cases), although other histologic types have also been described [117] . Cholangiocarcinoma is mainly a tumor of the elderly, with peak prevalence during the 7th decade of life and a slight male predilection [117] .
Cholangiocarcinomas can be classified anatomically as intrahepatic (peripheral), perihilar, or extrahepatic. Perihilar cholangiocarcinoma arises at the bifurcation of the hepatic ducts, whereas intrahepatic (peripheral) cholangiocarcinoma arises beyond the second-order bile ducts (Fig. 7) [118]. Intrahepatic cholangiocarcinoma can be classified into three types on the basis of gross morphologic features: mass-forming (the most common), periductal infiltrating, and intraductal growth [126]. Primary sclerosing cholangitis (PSC), choledochal cyst, familial polyposis, hepatolithiasis, congenital hepatic fibrosis, clonorchiasis, and a history of exposure to Thorotrast are common risk factors for cholangiocarcinoma [119]. A higher prevalence of anti-hepatitis C virus antibody positivity has also been reported in association with cholangiocarcinoma [120].
US is the initial screening imaging modality for evaluating biliary dilatation in patients with jaundice because it is inexpensive and widely available. With the use of modern high-resolution equipment, the sensitivity of US in detecting Klatskin tumor has risen dramatically in recent years, from a reported low of 33% in 1983 [121] to a reported high of 96% in 1996 [122] . Biliary dilatation is the most common indirect sign of a cholangiocarcinoma, with the abrupt change in ductal diameter indicating the site of the tumor. Klatskin tumors manifest with segmental dilatation and disruption of the confluence of the RHD and LHD at the porta hepatis. Often there is hepatic lobar atrophy, biliary dilatation, and crowding of bile ducts. A definitive mass is rarely seen at US. US may be helpful in establishing the level of obstruction, but an intraductal or infiltrating lesion causing the obstruction may be difficult to visualize.
With use of US, Neumaier et al [123] were able to establish the level of intrahepatic biliary obstruction in 100% of patients with ductal ectasia but demonstrated a tumor in only 37.1% of cases.
With the emergence of multidetector scanners, CT has become the noninvasive diagnostic test of choice for detailed evaluation and staging of cholangiocarcinomas. Multidetector CT is versatile and widely available. It depicts the level and cause of biliary obstruction and helps to survey the entire abdomen for disease staging. Cholangiocarcinomas are usually hypo-to iso-attenuating relative to the normal hepatic parenchyma at unenhanced CT. After the intravenous administration of contrast material, most cholangiocarcinomas remain hypoattenuating during the arterial and portal venous phases and show enhancement during the delayed phase, findings that reflect their hypovascular desmoplastic composition [124,125] . Volumetric multidetector CT with advanced post-processing allows comprehensive evaluation of cholangiocarcinomas in a single examination.
MR imaging with MR cholangiography and dynamic contrast-enhanced MR angiography is yet another multifaceted modality for the comprehensive evaluation of cholangiocarcinoma. Relative to the normal liver parenchyma, mass-forming cholangiocarcinomas are typically hypo- to isointense on T1-weighted MR images and variably hyperintense on T2-weighted MR images, depending on the amount of mucinous material, fibrous tissue, hemorrhage, and necrosis within the tumor. On fat-saturated T1-weighted images obtained following intravenous contrast material administration, minimal or incomplete enhancement is seen at the periphery on early images, whereas delayed progressive enhancement is seen on late-phase images; these findings represent neoplastic cells at the periphery and desmoplastic response at the center of the lesion. However, smaller lesions with less fibrosis may show intense homogeneous enhancement during the arterial phase, with prolonged enhancement during the delayed phase. Satellite nodules are seen in about 10%-20% of cases of cholangiocarcinoma and should be looked for on the dynamic data set, since they indicate a poor prognosis. Concurrently performed high-quality T2-weighted MR cholangiography can further complement contrast-enhanced MR imaging in depicting the site of ductal obstruction and associated upstream biliary dilatation [126].
The reported accuracy of MR cholangiography in localizing the site and determining the cause of biliary obstruction is 100% and 95%, respectively [126]. 3D MR cholangiography, which consists of inherently continuous data, allows reformatted images to be acquired in various anatomic planes and then rotated, thereby improving the visibility of the biliary tree and of cholangiocarcinomas [126].
Morphologic Classification:
Cholangiocarcinoma is classified into mass-forming, periductal infiltrating, and intraductal growth types:
Mass-forming Type
Mass-forming cholangiocarcinoma is characterized morphologically by a homogeneous mass with an irregular but well-defined margin and is frequently associated with dilatation of the biliary tree in the tumor periphery. The mass shows an irregular margin with high signal intensity at T2-weighted imaging and low signal intensity at T1-weighted imaging. Both the peripheral and the centripetal enhancement may be more prominent at MR imaging than at CT [126].
Periductal Infiltrating Type
Periductal infiltrating cholangiocarcinoma is characterized by growth along a dilated or narrowed bile duct without mass formation and manifests as an elongated, spiculated, or branchlike abnormality. At CT and MR imaging, diffuse periductal thickening and increased enhancement due to tumor infiltration can be seen, with an abnormally dilated or irregularly narrowed duct and peripheral ductal dilatation (Fig 7). This type of tumor is rare in intrahepatic cholangiocarcinoma, but most hilar cholangiocarcinomas are of this type [126] .
Intraductal Type
The intraductal growth type may manifest as diffuse and marked ductal dilatation with an intraductal mass that enhances at contrast-enhanced MR imaging, marked intrahepatic duct dilatation with no visible mass or stricture, an intraductal polypoid mass within localized ductal dilatation, an intraductal cast-like lesion within a mildly dilated duct, or a focal stricture-like lesion with mild proximal ductal dilatation [126].
Pancreatitis
Pancreatitis is the most common pancreatic disease in children and adults and one of the most common causes of morbidity and mortality worldwide [93] .
The diagnosis of acute pancreatitis is usually based on clinical and laboratory findings, with clinical severity best determined by Ranson's criteria or the Acute Physiology and Chronic Health Evaluation (APACHE) II criteria [102].
Causes of Acute Pancreatitis:
(1) Biliary tract disease; (2) alcohol abuse; (3) peptic ulcer; (4) trauma, surgery (CABG), hypotension, and shock. Over one-half of cases of acute pancreatitis in adults are related to cholelithiasis or alcohol consumption, whereas trauma, viral infections, and systemic disease account for the majority of cases in children. Alcohol consumption accounts for the majority (80%) of cases of chronic pancreatitis in adults in developed countries, whereas malnutrition is the most common cause worldwide. Idiopathic pancreatitis is considered to be the most common cause of chronic pancreatitis in children (up to 30% of cases). In truth, however, hereditary and tropical pancreatitis are responsible for the majority of cases of chronic childhood pancreatitis [93].
US findings in acute pancreatitis can be classified by distribution (focal or diffuse) and by severity (mild, moderate, or severe). US findings may be negative in the milder forms of acute pancreatitis. Focal pancreatitis, presenting as focal isoechoic or hypoechoic enlargement of the pancreas without extrapancreatic manifestations, generally occurs in the pancreatic head; these patients are usually alcohol abusers. Differentiation from neoplasm may be difficult because both conditions create a focal hypoechoic mass on sonograms. If the serum amylase level is normal and the patient is asymptomatic, the mass is likely to represent a neoplasm. If the signs and symptoms are severe and associated with calcification, the focal hypoechogenicity is more likely to be caused by an inflammatory mass.
Complications of Pancreatitis:
(1) Pancreatic pseudocyst; (2) obstruction of the stomach, small bowel, colon or bile duct; (3) pseudocysts dissecting into adjacent organs; (4) gastrointestinal hemorrhage; (5) chronic pancreatitis.
In diffuse pancreatitis, the pancreas becomes increasingly hypoechogenic relative to the normal liver and increases in size. Assessment of relative pancreatic echogenicity may be difficult because of the alcohol-induced fatty liver present in a large number of these patients. The pancreas may appear inhomogeneous. The pancreatic duct may be compressed or dilated. US is an insensitive test for the detection of pancreatic necrosis and other complications and, therefore, should not be used to assess the severity of pancreatitis. However, US may be helpful in the diagnosis of gallstones and in the follow-up of fluid collections and pseudocysts in selected cases. US may also be used to guide interventional procedures, such as catheter drainage, but in general CT is preferred.
Traditionally, CT has been used to help confirm the diagnosis, assess disease severity, detect complications, and provide a "road map" for interventional procedures. CT also plays a pivotal role in evaluating the impact of various medical and surgical treatments [93] .
CT has four major indications in patients with suspected or known acute pancreatitis: (a) to establish the diagnosis and exclude other serious intra-abdominal conditions; (b) to assess the severity of the pancreatitis; (c) to detect pancreatic and extra-pancreatic complications, such as pancreatic necrosis, abscess formation, and involvement of surrounding solid organs, vascular structures or the gastrointestinal tract; and (d) to guide percutaneous interventions, such as aspiration and drainage of fluid collections [94]. Balthazar et al [95] constructed a CT severity index (CTSI) for acute pancreatitis that combines the grade of pancreatitis with the extent of pancreatic necrosis.
CT severity index (characteristics and points):
Pancreatic inflammation: normal pancreas, 0 points; focal or diffuse enlargement of the pancreas, 1 point; intrinsic pancreatic abnormalities with inflammatory changes in the peripancreatic fat, 2 points; single, ill-defined fluid collection or phlegmon, 3 points; two or more poorly defined collections or presence of gas in or adjacent to the pancreas, 4 points.
Pancreatic necrosis: no necrosis, 0 points; 30% or less, 2 points; 30%-50%, 4 points; greater than 50%, 6 points.
The maximum CT severity index score is 10. A score of 7-10 denotes severe pancreatitis, 4-6 moderate pancreatitis, and 0-3 mild pancreatitis.
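As a purely hypothetical illustration of how the index is applied (the numbers are not drawn from any study data), a patient with a single, ill-defined peripancreatic fluid collection (3 points for the inflammation grade) and 30%-50% pancreatic necrosis (4 points for necrosis) would have a CTSI of 3 + 4 = 7, placing the episode in the severe category (7-10), whereas the same inflammation grade without necrosis would score 3 + 0 = 3, i.e., mild pancreatitis.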
In the assessment of acute pancreatitis, MRI can depict the presence and extent of necrosis and peri-pancreatic fluid collections. Recently, Amano et al [96] demonstrated the superiority of unenhanced MRI over CT in the detection of mild acute pancreatitis; the rationale for using MRI instead of CT in these cases is that mild pancreatitis cannot be well visualized by CT. However, several authors recommend intravenous gadolinium administration when imaging severe acute pancreatitis, particularly for the assessment of pancreatic parenchymal perfusion and the presence of necrosis [97]. Moreover, gadolinium has a good renal tolerance and is better tolerated than the iodinated contrast agents used in CT. The authors of [98] proposed a pancreas protocol including T2-weighted fast SE, fat-suppressed T1-weighted fast SE, and a series of T1-weighted gradient-echo sequences prior to and immediately following gadolinium administration. With this protocol, the authors reported that MRI is a reliable method for staging acute pancreatitis and is at least as helpful as CT in reaching a prognosis. The enlargement of the gland is well demonstrated on any sequence. Parenchymal edema and areas of hemorrhage are better shown on unenhanced T1-weighted images. T2-weighted sequences are most sensitive in demonstrating fluid collections and are especially helpful in determining the amount and extent of debris in presumed fluid collections.
Chronic Pancreatitis
Chronic pancreatitis is an inflammatory disease characterized by progressive and irreversible structural damage to the pancreas resulting in permanent impairment of both exocrine and endocrine functions.
US findings consist of changes in the size and echotexture of the pancreas, focal mass lesions, calcification, pancreatic duct dilatation and pseudocyst formation. Bile duct dilatation and portal vein thrombosis are other associated findings. A focal mass or enlargement is found in 40% of patients. Irregular dilatation of the pancreatic duct occurs in chronic pancreatitis; in advanced cases, the duct becomes tortuous. The differential diagnosis between chronic pancreatitis and pancreatic carcinoma in a patient with duct dilatation can be difficult. Pseudocyst formation is reported in 25-40% of patients with chronic pancreatitis. Dilatation of the CBD is present in 5-10% of patients with chronic pancreatitis and characteristically causes smooth gradual tapering, although abrupt tapering is rarely seen. Portosplenic vein thrombosis has also been reported to occur in 5.1% of patients, and cavernous transformation may be present.
A Revised Cambridge classification of chronic pancreatitis has been proposed and preliminary studies indicate good correlations based on findings of ERCP and US [99] .
Revised Cambridge Classification of Chronic Pancreatitis (class* and ultrasound features):
Normal: visualization of the entire gland with demonstration and measurement of the main pancreatic duct (MPD).
Equivocal: fewer than two abnormal signs, i.e., main duct enlarged (less than 4 mm), gland enlargement (up to twice normal), cavities (less than 10 mm), or irregular ducts.
Mild: focal reduction in parenchymal echogenicity.
Moderate: two or more abnormal signs, i.e., echogenic foci in the parenchyma, increased or irregular echogenicity of the wall of the main duct, or irregular contour of the gland, particularly focal enlargement.
Marked: large cavities (greater than 10 mm), calculi, duct obstruction (MPD > 4 mm), major duct irregularity, gross enlargement of the MPD (> 4 mm), or contiguous organ involvement.
(*If pathological changes are limited to one third of the gland or less, they are classified as focal.)
The diagnosis of chronic pancreatitis on MRI is based on signal intensity and enhancement changes as well as morphologic abnormalities in the pancreatic parenchyma, pancreatic duct and biliary tract. The imaging findings can be divided into early and late findings.
Early findings include low signal intensity pancreas on T1W fat suppressed images, decreased and delayed enhancement after I.V contrast administration and dilated side branches. Late findings include parenchymal atrophy or enlargement, pseudocysts and dilatation and beading of pancreatic duct often with intraductal calcification.
These changes are best visualized on unenhanced and gadolinium-enhanced T1W fat-suppressed images. The normal pancreas enhances uniformly and intensely on early arterial phase contrast-enhanced T1W images and exhibits rapid wash-out of gadolinium on subsequent images. The normal pancreas has high signal intensity on unenhanced T1W fat-suppressed images. In contrast, a pancreas with chronic fibrosis and glandular atrophy exhibits decreased and heterogeneous enhancement on early arterial phase images and increased relative enhancement on delayed images. Administration of secretin during MRCP may help detect subtle side branch abnormalities and allows non-invasive assessment of the exocrine pancreas. In addition, MRCP is highly accurate for identifying pancreas divisum; however, its association with chronic pancreatitis remains controversial. Duct abnormalities such as dilatation, irregularity and stones, and complications of chronic pancreatitis such as pseudocysts, are best depicted by thin-section T2W HASTE or SSFSE and thick-slab T2W half-Fourier RARE MRCP images [100].
MRCP is sensitive for depicting strictures of the pancreatic and biliary tract. CT is more sensitive than MRI for the detection of calcifications associated with chronic pancreatitis. However, MRI best depicts intraductal stones and duct obstruction. The typical appearance of benign strictures on MRCP is gradual tapering with a funnel-like narrowed segment [100].
Pseudocyst
Pseudocysts are encapsulated collections of pancreatic secretions that occur in or around the pancreas. Although most resolve spontaneously, complications such as infection, hemorrhage, and gastric or biliary obstruction may occur (Fig. 11). Pseudocysts can be communicating with the main pancreatic duct (Fig. 12) or non-communicating (Fig. 13). MRI can depict pseudocysts and can be used to characterize their content and thus to guide drainage. Uncomplicated pseudocysts are typically unilocular and encapsulated fluid collections that exhibit high signal intensity on T2WI and low signal on T1WI. Complicated pseudocysts and other pancreatic collections may contain solid debris, which is depicted best by MRI [101] .
Pancreatic Necrosis
Severe acute pancreatitis occurs in approximately 20-30% of cases and is usually associated with pancreatic necrosis and increased complications and mortality. Determining the extent of necrosis is important because it has a significant correlation with patient prognosis. On T2W images, necrosis can be of low signal intensity or, when liquefied, hyperintense. At times, necrosis may be better identified on MRI than on CT [101].
Vascular: Arterial pseudoaneurysms, hemorrhage into pseudocysts, arterial bleeding, and splenic or portal vein thrombosis are vascular complications of chronic pancreatitis that may be seen on MRI. In patients with chronic splenic vein thrombosis, the vein may not be visualized. Short gastric and gastroepiploic collaterals constitute useful complementary findings [100] .
Pancreatic Abscess
Abscesses usually occur up to 4 weeks after the onset of acute pancreatitis and can appear similar to pseudocysts. They are suggested when gas is present in a pancreatic or peripancreatic collection. MRI can reveal air fluid levels or large pockets of gas but CT is more sensitive for small collections of gas [101] .
Hemorrhage Or Pseudoaneurysm
They can occur in patients with severe necrotizing pancreatitis or as a result of rupture of a pseudoaneurysm, in which case they constitute a life-threatening emergency. Hemorrhagic fluid collections are more evident on MRI than on CT because of the following: (1) the high signal intensity of methemoglobin on T1W images; (2) the low signal intensity hemosiderin rim on T2W images; and (3) signal abnormalities due to hemorrhage remaining visible longer on MRI than on CT.
Contrast enhanced sequences confirm the diagnosis by showing enhancement of the pseudo-aneurysm as comparable to arteries and its connection to the vessels. Although CT is currently the primary technique used to evaluate patients for acute pancreatitis, recent advances allow MRI to be used for the diagnosis and detection of complications. MRI has the potential advantage because of its lack of ionizing radiation and lack of nephrotoxicity from iodinated contrast [101] .
Groove Pancreatitis
Groove pancreatitis is a type of focal chronic pancreatitis affecting the groove between the head of the pancreas, the duodenum and the common bile duct. The predominant MRI finding of groove pancreatitis is a sheet-like fibrotic mass between the pancreatic head and a thickened duodenal wall, associated with duodenal stenosis and cystic changes in the duodenal wall. Recognition of groove pancreatitis is important for differentiation from pancreatic and duodenal carcinomas [100]. One study of chronic pancreatitis [102] concluded that use of ERCP tends to result in overestimation of the caliber of the MPD, and that MRCP can enable accurate evaluation of the condition of the pancreatic duct and its changes in patients with chronic pancreatitis.
Congenital Anomalies Of Pancreas And Pancreatic Duct
Congenital anomalies and normal variants of the pancreatic duct and the pancreas may not be detected until adulthood and are then often detected as incidental findings in asymptomatic patients [103][104][105][106][107]. Because an increasing number of patients undergo MRI, MR cholangiopancreatography (MRCP), and CT examinations, these anomalies are recognized more frequently. At the same time, the rapid advances in and emergence of surgical and endoscopic procedures, such as insertion of stents in the minor papilla for pancreas divisum [104], make recognition of these variants, particularly those of clinical significance, very important. Congenital anomalies and normal variants of the pancreas and the pancreatic duct include pancreas divisum, annular pancreas, ectopic pancreatic tissue, variations of pancreatic contour, fatty replacement and fat sparing of the pancreas, pancreatic cysts, and variations of the pancreatic ducts.
Pancreas Divisum
Pancreas divisum is the most common congenital pancreatic ductal anatomic variant, occurring in approximately 4-14% of the population in autopsy series, 3-8% at ERCP, and 9% at MRCP [103][104][105][106][107]. The abnormality results from failure of the dorsal and ventral pancreatic anlage to fuse during the sixth to eighth weeks of gestation. In most cases of pancreas divisum, no communication exists between the dorsal and ventral pancreatic ducts (fig. 8c). In some patients, the ventral pancreatic duct may be absent. In all cases, most pancreatic secretions drain through the minor ampulla. The clinical relevance of pancreas divisum remains controversial. Most patients with pancreas divisum are asymptomatic [104][105][106][107]. However, in some patients, this anomaly is associated with recurrent episodes of pancreatitis. Of those with idiopathic recurrent pancreatitis, 12-26% of patients have pancreas divisum, as opposed to 3-9% of the general population [107]. It is postulated that in pancreas divisum, the duct of Santorini and the minor ampulla are too small to adequately drain the secretions produced by the pancreatic body and tail [103][104][105][106][107]. For many years, ERCP has been the primary means of diagnosing pancreas divisum. MRCP provides a noninvasive means of diagnosing pancreas divisum without the use of contrast material and avoids the risk of ERCP-induced pancreatitis. The main features of pancreas divisum at MRCP include a dorsal pancreatic duct in direct continuity with the duct of Santorini, which drains into the minor ampulla, and a ventral duct, which does not communicate with the dorsal duct but joins the distal bile duct to enter the major ampulla [105]. With the advent of MDCT scanners, pancreas divisum may be seen on CT as well [106]. Recent research shows that the administration of secretin improves the sensitivity of MRCP in diagnosing pancreas divisum [108]. One study [108] evaluated the usefulness of MRCP before and after secretin administration in diagnosing santorinicele and concluded that secretin-enhanced MRCP (S-MRCP) helps in identifying pancreas divisum and santorinicele, which may be the cause of impeded pancreatic outflow in patients with pancreas divisum. Cystic dilatation of the distal dorsal duct, just proximal to the minor papilla, is termed a "santorinicele".
Annular Pancreas
Annular pancreas is a rare anomaly (1/20,000 people) in which a band of pancreatic tissue surrounds the descending duodenum, either completely or incompletely, and is in continuity with the head of the pancreas [103,109]. The most widely accepted theory of etiopathogenesis is that the ventral pancreatic anlage is responsible for the anomaly by dividing early into two segments [109]. The ventral pancreatic bud consists of two components that normally fuse and rotate around the duodenum so that they come to lie posteriorly and inferiorly to the dorsal pancreatic bud. Occasionally, however, the right portion of the ventral bud migrates along its normal route but the left migrates in the opposite direction; by these means the duodenum becomes surrounded by pancreatic tissue. Since it forms a ring-like structure around the duodenum, it is known as an annular pancreas (fig. 9). The anomaly may be discovered incidentally in asymptomatic patients [109]. In others, annular pancreas is associated with duodenal stenosis, postbulbar ulcerations, pancreatitis, or biliary obstruction. Before the advent of CT, MRI, and MRCP, the diagnosis of annular pancreas was usually established by ERCP, as an aberrant pancreatic duct communicating with the main pancreatic duct and encircling the duodenum. CT or MR images may show normal pancreatic tissue, with or without a small pancreatic duct, encircling the duodenum [109]. The findings at upper gastrointestinal examinations are often characteristic, showing narrowing of the second portion of the duodenum. Surgical resection is recommended for symptomatic cases.
Carcinoma Pancreas
Pancreatic ductal adenocarcinoma is the fifth leading cause of cancer death in the Western hemisphere, with a peak incidence in patients between 60 and 80 years old. Factors associated with an increased risk of pancreatic cancer include smoking, chronic pancreatitis, diabetes, prior gastric surgery, and exposure to radiation or chemicals such as chlorinated hydrocarbon solvents [110,111]. A number of syndromes are associated with an increased incidence of pancreatic cancer, including familial atypical multiple-mole melanoma syndrome, hereditary nonpolyposis colorectal cancer, hereditary pancreatitis, Peutz-Jeghers syndrome, and hereditary breast-ovarian cancer syndrome [112].
TNM staging of pancreatic carcinoma:
T1: Tumor is ≤ 2 cm in maximum diameter and confined to the pancreas.
T2: Tumor is > 2 cm and confined to the pancreas.
T3: Tumor extends beyond the pancreas but does not involve the celiac axis or superior mesenteric artery.
T4: Primary tumor involves either the celiac axis or the superior mesenteric artery.
NX: Regional lymph nodes not assessed.
N0: No involvement of regional lymph nodes.
N1: Involvement of regional lymph nodes.
MX: Distant metastasis cannot be assessed.
M0: No distant metastases.
M1: Distant metastasis present.
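As a hypothetical illustration of how these categories combine (not a case from the present series), a 3-cm adenocarcinoma confined to the pancreas, with involvement of regional lymph nodes but no demonstrable distant metastases, would be designated T2 N1 M0, whereas the same primary with encasement of the superior mesenteric artery would become T4 regardless of nodal status.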
Imaging often begins with transabdominal sonography (TAS) to identify a cause of abdominal pain or jaundice. Sonography can screen for gallstones, signs of cholecystitis, and for the presence and level (intrahepatic, suprapancreatic, or intrapancreatic) of common bile duct obstruction. However, the presence of obscuring overlying bowel gas and the variable skill of the operator limit the sensitivity of this technique for identification and staging of pancreatic tumors.
Sensitivity of MR imaging and CT in the detection of malignancy was 100% and 92% (95% CI, 0.90-0.94), respectively. The positive predictive value was 90% for MR imaging and 80% for CT, and the negative predictive value was 100% for MR imaging and 67% for CT [142] .
After sonography, CT is most often used as the primary modality for diagnosis and staging. The relatively hypovascular tumor is best detected during the pancreatic parenchymal phase of enhancement, approximately 35-50 seconds after the beginning of contrast medium injection [113,114]. On the other hand, liver metastases are best imaged during the portal venous phase of liver enhancement, approximately 60-70 seconds after the beginning of contrast medium injection. A "dual-phase" technique is therefore often used to obtain information regarding staging and metastases. Thin-section imaging is vital for optimizing lesion detection, as it diminishes the impact of volume averaging, which can obscure small lesions.
MR imaging offers several benefits for imaging of the pancreas. It inherently offers better soft-tissue contrast than CT before the administration of an IV contrast agent, and images can be obtained in multiple planes. MR imaging can be performed in patients with a history of allergy to iodinated contrast agents and in those with renal insufficiency. However, CT offers higher spatial resolution. MR imaging protocols typically include T1-weighted spin-echo or fast spoiled gradient-echo breath-hold sequences with or without fat suppression, T2-weighted fast spin-echo sequences with fat suppression, and dynamically enhanced T1-weighted spoiled gradient-echo breath-hold sequences with or without fat suppression. MRCP images obtained with long echo times have been used to create cholangiographic images. MRCP images can be acquired in any plane to provide additional information on the level of obstruction of the biliary or pancreatic ductal systems, with a sensitivity and specificity that rival those of endoscopic retrograde cholangiopancreatography [115].
In the setting of pancreatic carcinoma, MRCP readily depicts the ducts obstructed by the pancreatic mass & localizes the obstruction to the pancreas. MRCP identifies not only the dilated ducts located proximal to the obstruction, but also the ducts that are narrowed & encased by the tumor. When the mass is located in the pancreatic head, the "double duct" sign is often observed. Although this sign raises the possibility of pancreatic carcinoma, it is a non-specific sign that may also occur in association with chronic pancreatitis. When MRI & MRA are performed in the same examination setting as MRCP, an assessment for resectability can be made. In those patients with unresectable disease, MRCP is useful in planning palliative endoscopic & percutaneous procedures [49] .
Liver metastases from pancreatic cancer typically appear as low signal intensity masses on the pre-contrast fat-suppressed T1W SGE images & exhibit irregular rim enhancement on immediate post-gadolinium fat-suppressed T1W SGE images. These metastases appear minimally hyperintense on T2W HASTE or SSFSE images. MRI is more effective than CT in differentiating metastases from other hepatic masses including hemangiomas or cysts. Peritoneal metastases are better depicted on MRI than on CT. Lymphadenopathy is well shown as high signal intensity foci in a background of low signal intensity fat on the early interstitial phase (45 secs) gadolinium-enhanced fat-suppressed T1W SGE & fat-suppressed T2W images [49].
Kamisawa et al found that diffusion-weighted MRI (DWI) can be used to differentiate autoimmune pancreatitis (AIP) from pancreatic cancer. In a study of 13 patients with AIP and 40 patients with pancreatic cancer, high-signal-intensity areas were diffuse or solitary in patients with AIP, but solitary in patients with pancreatic cancer. Pancreatic cancer more often had a nodular shape, while AIP more often had a longitudinal shape. Apparent diffusion coefficient (ADC) values were significantly lower in AIP than in pancreatic cancer, and an optimal ADC cutoff value of 1.075 × 10⁻³ mm²/s could be used to distinguish AIP from pancreatic cancer [116].
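A minimal sketch of how the reported cutoff could be applied to a measured ADC value is shown below; the threshold is taken from the cited study, while the function name and the wording of the returned labels are illustrative only, and the rule is of course not a validated diagnostic tool.

```python
# Sketch: applying the ADC cutoff reported by Kamisawa et al. to a measured value.
# The cutoff is from the text; everything else (names, labels) is illustrative.

ADC_CUTOFF_MM2_PER_S = 1.075e-3  # optimal cutoff reported for AIP vs pancreatic cancer

def classify_by_adc(adc_mm2_per_s):
    """Lower ADC values favoured AIP in the cited study."""
    if adc_mm2_per_s < ADC_CUTOFF_MM2_PER_S:
        return "ADC below cutoff: favours autoimmune pancreatitis (AIP)"
    return "ADC at or above cutoff: favours pancreatic cancer"

print(classify_by_adc(0.95e-3))   # example value below the cutoff
print(classify_by_adc(1.30e-3))   # example value above the cutoff
```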
Study Place
The present study was conducted in the Department of Radiodiagnosis, Mysore Medical College & Research Institute, Mysore.
Study Duration
The present study was conducted over a period of one year and nine months, from January 2019 to September 2020.
Study Population & Sample Size
The study comprised a total of 30 patients referred to the Radiology department with suspected pancreatico-biliary disease who met the following criteria.
Inclusion Criteria
All patients who were detected to have any of the following pancreatico-biliary diseases on MRCP:
Observations & Results:-
A total of thirty patients who were clinically diagnosed as having pancreatico-biliary diseases were sent for MRCP & were included in the present study. Out of the total 30 patients included in the study, the maximum, 16 (53.3%), were in the age group of > 40 years, followed by the 19-40 years age group, which included 12 (40%). The least number of patients, 2 (6.7%), were in the age group of 0-18 years. The mean age of the study population was 37.5 years (range 5-70 yrs). Out of the total 30 patients included in the study, the most common clinical presentation was pain in abdomen, seen in 16 (53.3%) patients, followed by weight loss, seen in 7 (23.3%) patients, while the least common presentation was steatorrhea, seen in 3 (10%) patients. Most patients presented with a combination of symptoms.
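The percentages quoted above follow directly from the patient counts; a minimal arithmetic check is sketched below, with the counts taken from the text and the labels purely illustrative.

```python
# Quick arithmetic check of the proportions reported above (n = 30).
counts = {
    "age > 40 years": 16, "age 19-40 years": 12, "age 0-18 years": 2,
    "pain in abdomen": 16, "weight loss": 7, "steatorrhea": 3,
}
n = 30
for label, k in counts.items():
    print(f"{label}: {k}/{n} = {100 * k / n:.1f}%")
```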
Out of the total 30 cases included in the study, the most common disorder observed was pancreatitis, seen in 10 (33.3%) patients. The second most common disorder was cholangiocarcinoma, seen in 6 (20%) patients.
Out of the total 30 cases included in the study, the most common disorder observed was pancreatitis, seen in 10 (33.3%) patients with equal male & female preponderance. The second most common disorder was cholangiocarcinoma, seen in 6 (20%) patients, again with equal male & female preponderance. The present study revealed that benign & malignant strictures are equally common in occurrence. The present study also revealed that proximal CBD benign strictures are more common than distal CBD benign strictures. Regarding clinical symptoms, the most common clinical presentation in our study was pain in abdomen, seen in 16 (53.3%) patients, followed by weight loss, seen in 7 (23.3%) patients, while the least common presentation was steatorrhea, seen in 3 (10%) patients. Almost all patients presented with a combination of symptoms. Schwartz et al [143] in their study reported that the most common presentation was jaundice, seen in 68% of patients, followed by pain in abdomen, seen in 25% of patients. This may be because theirs was a dedicated study of malignancy. In our study the percentage of pain in abdomen (53.3%) was higher, which may be because of the inclusion of almost all benign and malignant pathologies, including pancreatitis.
Six cases of cholangiocarcinoma were evaluated. In one case of cholangiocarcinoma diagnosed by MRI there was infiltration into the gallbladder & minimal local spread; the per-operative findings were those of carcinoma of the gallbladder. This is a known limiting factor on imaging when both the gall bladder & the bile duct are involved. MRI helped in defining the level, extent & staging of the disease in the pre-surgical evaluation. Guibaud et al [137], Barish MA & Soto [26] & Pavone et al [138] concluded their studies with sensitivities ranging from 80 to 86%, specificities of 96 to 98% & diagnostic accuracies of 91 to 100% for the level of obstruction.
In 3 cases of periampullary carcinoma, MRI was able to delineate the extent, level & local infiltration & helped in staging of the lesion. The assessment of the periampullary lesions was difficult on US in obese patients, & bowel gas shadows were also a limiting factor. Sugita et al, in their study of 25 cases of periampullary tumors, reached a sensitivity of 88%, specificity of 100% & diagnostic accuracy of 96% [136]. The morphology of the gland could be seen, but the caliber of the main pancreatic duct was difficult to visualize [66]. Sugiyama et al reported a sensitivity of 91%, specificity of 100% & diagnostic accuracy of 97% on MRCP [65]. Caroline Reinhold et al showed a sensitivity of 90%, specificity of 100% & accuracy of 97% on MRCP [139]. The ability to detect bile duct stones at CT depends on a number of factors related to the stone (size, shape, position, density), the bile duct (dilated vs non-dilated), the technology used (conventional vs helical CT) and the technique used (slice thickness, reconstruction interval, pitch, kVp, administration of contrast material). Pure cholesterol stones are iso- or slightly hypoattenuating relative to bile, making them difficult, if not impossible, to detect. This imposes a theoretic upper limit for the CT detectability of choledocholithiasis of approximately 80%. Heavily calcified stones are relatively easily identified, whereas soft-tissue density stones can be isoattenuating to surrounding tissue, making them difficult to identify. The attenuation of biliary stones varies with their composition. On MRCP, CBD stones are seen as hypointense filling defects within the lumen of the CBD on T2W SE images. An advantage of MRCP is that stones as small as 3 mm can be visualized [114].
In 2 cases of choledochal cysts, MRCP yielded diagnostic information by providing an exact anatomic map in the presurgical evaluation. Kim et al, in their study of 20 patients, concluded the same [140].
In 1 case of biliary atresia, MRCP detected the condition with an accuracy of 100%. Seok Joo Han & Myung-Jun Kim, in a study of 47 patients, showed that MRCP had a sensitivity, specificity & diagnostic accuracy of 100%, 96% & 98% respectively [141]. US serves as a good initial modality for evaluation in neonates presenting with cholestatic jaundice.
In our study pancreatitis was seen in 10 (33.3%) patients. Out of the 10 cases, 6 (20%) were male, suggesting a male predilection. This may be because of alcoholism, which is one of the causative factors for pancreatitis.
Ultrasound will not show much change in cases of acute pancreatitis. Pseudocyst and necrotic changes were detected rarely in acute pancreatitis. The exact extent was not appreciated due to bowel gas and probe tenderness. Shadan et al [144] reported chronic pancreatitis in 10% of cases. Tamura et al [102] reported overall sensitivity and specificity values of MRCP for delineating pathologic pancreatic changes of 88% and 98% respectively.
Out of the 6 (20%) patients with cholangiocarcinoma evaluated, 3 (10%) were male while 3 (10%) were female, suggesting equal preponderance. Shadan et al [144] reported cholangiocarcinoma in 4% of cases, Bhatt et al [145] reported Klatskin tumor in 12% of cases, and Bloom et al [117] reported cholangiocarcinoma in 2.3% of cases. Cholangiocarcinoma is primarily a tumor of the elderly, with a peak prevalence in the 7th decade and a slight male predilection. In our study the majority of cases of cholangiocarcinoma were seen in the 6th to 7th decade.
MRI helps in defining the level of obstruction, the extent of the tumor and the staging for pre-surgical evaluation. In some cases there is involvement of the GB fossa as well as the hilar region by the mass lesion; in such cases it becomes difficult to define whether a carcinoma of the gallbladder is extending to the hilar region or it is a primary hilar cholangiocarcinoma, and this becomes a limiting factor for MRCP.
Out of the total 3 (10%) cases of periampullary carcinoma diagnosed on MRCP, all were males, suggesting a male predilection. In our study the majority of cases were in the age group of > 40 years. Shadan et al reported periampullary carcinoma in 2% of cases, and Bhatt et al reported it in 4% of cases. This difference may be attributable to the difference in sample size (50 patients in both of the above-mentioned studies, compared with 30 in ours).
Periampullary carcinomas arise within 2 cm of the major papilla in the duodenum and include four different types of malignancies, namely, those originating from (a) the ampulla of Vater itself, (b) the intrapancreatic distal bile duct, (c) the head and uncinate process of the pancreas, and (d) the duodenum. Carcinoma of the head of the pancreas is associated with dilatation of both the CBD and the pancreatic duct, known as the "double duct" sign. Overall survival is highest for patients with ampullary and duodenal cancers, intermediate for patients with bile duct cancers, and lowest for those with pancreatic cancers [146][147][148].
Ca GB was seen in 1 (3.3%) case in our study. Shadan et al [144] reported Ca GB in 4% of cases, which is closely consistent with our findings, while Bhatt et al reported it in 2% of cases. MRI helps in defining the extent and local spread for pre-surgical evaluation.
Conclusion:-
The introduction of MRCP now readily permits the study of the anatomy & pathology of the biliary tree, including the pancreatic duct. Based on the results of our study, the following conclusions can be made: 1. MRI serves as an accurate, non-invasive & non-ionizing imaging method for evaluation of pancreatico-biliary anatomy & pathology. There is now enough evidence to suggest that the efficacy of MRI & MRCP is at par with that of ERCP & can be considered as the gold standard for evaluation of the pancreatico-biliary system. | 2020-12-17T09:11:42.199Z | 2020-11-30T00:00:00.000 | {
"year": 2020,
"sha1": "7738f9998203bca04758a8bb066c6836d26f77bc",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21474/ijar01/12088",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "576a8cf5b8d49ca4bef601eea7cca668fa2260c2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2358393 | pes2o/s2orc | v3-fos-license | Functional redundancy of division specific penicillin‐binding proteins in Bacillus subtilis
Summary Bacterial cell division involves the dynamic assembly of a diverse set of proteins that coordinate the invagination of the cell membrane and synthesis of cell wall material to create the new cell poles of the separated daughter cells. Penicillin‐binding protein PBP 2B is a key cell division protein in Bacillus subtilis proposed to have a specific catalytic role in septal wall synthesis. Unexpectedly, we find that a catalytically inactive mutant of PBP 2B supports cell division, but in this background the normally dispensable PBP 3 becomes essential. Phenotypic analysis of pbpC mutants (encoding PBP 3) shows that PBP 2B has a crucial structural role in assembly of the division complex, independent of catalysis, and that its biochemical activity in septum formation can be provided by PBP 3. Bioinformatic analysis revealed a close sequence relationship between PBP 3 and Staphylococcus aureus PBP 2A, which is responsible for methicillin resistance. These findings suggest that mechanisms for rescuing cell division when the biochemical activity of PBP 2B is perturbed evolved prior to the clinical use of β‐lactams.
Introduction
Bacillus subtilis exhibits moderate resistance to a variety of antimicrobial compounds, particularly those that target cell wall biosynthesis (e.g., β-lactams, nisin and cationic antimicrobial peptides) (Helmann, 2006). The integrity of the cell wall is crucial for the viability of bacteria because it protects the cell from mechanical damage derived either from environmental factors or the osmotic pressure of the cytoplasm, which would otherwise burst the cell membrane and cause cell lysis. The major structural component of most bacterial cell walls is a net-like matrix of long glycan strands cross-linked by peptide bridges (peptidoglycan; PG) (Sobhanifar et al., 2013). During cell growth, and also during cell division, new cell wall material is synthesised to allow cell expansion and to make the dividing wall (septum). The final steps of PG synthesis are presumed to be carried out predominantly by bifunctional (class A) penicillin-binding proteins (PBPs) that possess both glycosyltransferase activity, used to extend the glycan chains, and transpeptidase (TPase) activity, which generates the peptide cross-links. Additional monofunctional (class B) PBPs that have only TPase activity are also present and have essential roles in PG synthesis, although their precise biochemical role is unclear. β-lactam antibiotics inhibit the TPase activity of PBPs with varying degrees of specificity (Spratt, 1975). Previous analyses have indicated that resistance/tolerance to β-lactam antibiotics is mediated by transcriptional regulation through extracytoplasmic function (ECF) sigma factors and the messenger molecule c-di-AMP (Luo and Helmann, 2012; Commichau et al., 2015). Full details of these resistance mechanisms remain to be characterised.
Bacterial genomes often encode 10 or more PBPs, although many of them are non-essential, suggesting functional redundancy. However, there is usually at least one essential PBP, and several laboratories have shown, in various organisms, that a PBP specialised for wall synthesis in the division septum is essential (Yanouri et al., 1993; Kato et al., 1988; Massidda et al., 1998; Daniel et al., 2000; Datta et al., 2006; Sauvage et al., 2014). In B. subtilis, the essential division-specific enzyme, PBP 2B, is targeted to division sites by interaction with one or more components of the division machinery, orchestrated by the FtsZ protein (Daniel et al., 2000; 2006). The functions of the other B. subtilis PBPs in cell growth are less well understood, although PBP 2A seems to have a major role in elongation of the cylindrical part of the wall, albeit a role that is partially redundant with that of PBP H (Wei et al., 2003).
All PBPs carry a well-characterised catalytic triad with a conserved active site serine. This serine makes a covalent adduct with β-lactam antibiotics, which renders the enzyme inactive. Substitutions of this serine completely inactivate the TPase activity of the enzyme (Goffin and Ghuysen, 2002). Remarkably, we have found that elimination of catalytic activity by substitution of the active site serine of PBP 2B (PBP 2B (S309A)) has almost no effect on cell division in B. subtilis, even though depletion of the entire protein is lethal (Daniel et al., 1996). The structural role of PBP 2B and its equivalents has been quite well documented in B. subtilis (Daniel et al., 2006) and in other bacteria (Goehring and Beckwith, 2005; Karimova et al., 2005; Wissel et al., 2005; Datta et al., 2006; Valbuena et al., 2006), but the notion that catalytic activity was not essential was quite unexpected. Further analysis of the PBP 2B (S309A) mutant revealed that PBP 3, previously shown to be dispensable (Murray et al., 1996), takes on an essential role in this background. We show that the essential function of PBP 3, in the absence of biochemically active PBP 2B, lies in its TPase activity.
By characterising the sensitivity of B. subtilis strains lacking individual PBPs, we have found that the loss of PBP 3 or PBP 2A makes B. subtilis significantly more sensitive to β-lactams. The increased sensitivity of the PBP 2A null mutant is potentially explained by the fact that the mutant does exhibit a mild growth defect (Murray et al., 1998), but the increased sensitivity was unexpected for the strain lacking PBP 3. Overall, our results suggest that the division machinery is assembled in such a way that, if the TPase activity of the architectural PBP 2B enzyme is inactivated (e.g., by covalent antibiotic binding), PBP 3 can provide the necessary activity to allow the division septum to be synthesised. The results also indicate that PBP 3 is intrinsically resistant to the binding of several β-lactams, suggesting that it may provide a resistance mechanism comparable to that recently acquired by Staphylococcus aureus MRSA, a notion further supported by sequence analysis. This may explain how the acquisition of a heterologous resistant PBP can provide antibiotic resistance without the immediate need for extensive protein-protein interactions with the resident synthetic machinery.
A mutant with biochemically inactive PBP 2B is viable
During our work to characterise the essential cell division gene pbpB, coding for PBP 2B, we tested whether the transpeptidase activity of the protein was important for its function in synthesising the division septum. We aligned PBP 2B with PBP 2X of S. pneumoniae, and other PBPs that have defined active sites, to identify the probable active site residue required for transpeptidation (Pares et al., 1996; summarised in Supporting Information Fig. S1, panel A). From this analysis, the serine at position 309 was clearly located in the consensus sequence for the active site of these PBPs. The amino acid numbering for PBP 2B used here is based on the translational start site being located at the fourth amino acid of the coding sequence as it is currently annotated in Swiss-Prot entry PBPB_BACSU Q07868 (Xu, 2008). We then made a serine to alanine (S309A) substitution by site-directed mutagenesis to generate a mutant pbpB gene, denoted pbpB*. To confirm that the S309A mutation had inactivated the TPase activity of the protein, both the wild type and the mutant forms of PBP 2B were overexpressed in E. coli. Both proteins were found to be predominantly present in the membrane fraction of E. coli, but on exposure of the proteins to a fluorescent derivative of a β-lactam antibiotic, bocillin-FL, the mutant protein did not detectably bind bocillin, whereas the wild type protein was heavily labelled (Fig. 1A lane 1; compared to lane 2), confirming that the S309 residue is required for penicillin binding.
The same pbpB* mutation was then introduced into B. subtilis at the ectopic amyE locus under the control of a xylose-inducible promoter (P xyl ). The coding sequence of the green fluorescent protein (gfp) gene was also fused to the N-terminal coding end of the pbpB* gene so that the localisation of the mutant protein could be studied. Then, a P spac (IPTG-dependent) promoter was inserted in front of the wild-type copy of pbpB, to allow repression of the native copy of the gene. Unexpectedly, in the absence of IPTG but in the presence of xylose, thus expressing only the mutant protein, the strain (4004) was found to grow as well as that expressing the wild-type protein (Fig. 1B).
Fluorescence microscopy of the cells showed that the mutant protein was targeted to division sites, similar to the wild type protein, and the cells were morphologically normal even when only the mutant copy of pbpB was expressed (Fig. 1F, panel X). However, when xylose and IPTG were both withheld, repressing both copies of pbpB, growth of the culture was severely impaired (Fig. 1B). Microscopic examination of these cultures showed that the cells grew as long filaments which eventually lysed (Fig. 1E), consistent with the previous result that PBP 2B is essential for cell division. These results suggested that PBP 2B (S309A) was still functional for cell division, although it was possible that the P spac promoter was not sufficiently repressed and provided sufficient wild-type PBP 2B for division to occur. Western blotting using polyclonal anti-PBP 2B antisera (Fig. 1C) indicated the presence of a very small amount of wild-type PBP 2B in total protein samples of strain 4004 grown in the absence of IPTG (Fig. 1C lane 'X'). However, a similar amount of PBP 2B was also detectable when strain 4004 was grown in the absence of both IPTG and xylose (Fig. 1C lane '-'), although under these conditions division was not well supported (as determined by microscopy; Fig. 1E).
Fig. 1. A. Penicillin binding activity of wild type and mutant forms of PBP 2B. PBP 2B and PBP 2B (S309A) were overproduced in E. coli and the membrane fraction was purified, labelled with bocillin-FL and separated by SDS-PAGE. The left panel is an image of the protein gel following Coomassie staining, showing that similar amounts of total protein and overproduced PBP 2B protein were present on the gel, and the right panel shows an image of the same gel when scanned for fluorescence. The gel was loaded with membrane preparations from E. coli overexpressing WT pbpB (lane 1); pbpB* (2); control (vector only) (3). B. Growth curve of strain 4004 in various inducer conditions. Strain 4004, which contains the pbpB* allele controlled by a xylose-induced promoter (P xyl) and a WT pbpB controlled by an IPTG-induced promoter (P spac), was grown in PAB with IPTG then transferred to fresh PAB media with various combinations of supplements (XI, media supplemented with xylose and IPTG; X, xylose alone; I, IPTG alone; -, no addition) and the growth against time was measured by optical density. C. Inducer dependence of PBP 2B and GFP-PBP 2B. Western blot of total protein samples taken at the start of the experiment (t0) and 1 h after removing or adding inducers (arrow in panel B) and probed with polyclonal antisera specific for PBP 2B.
To eliminate the possibility that leaky transcription from the P spac promoter was providing sufficient wild-type PBP 2B to allow cell division/growth, and to confirm that PBP 2B (S309A) could support cell division, we directly replaced the wild-type pbpB allele with the mutant allele to generate a strain that was isogenic with the wild type except for the presence of the pbpB* mutation. The growth rate of this mutant strain (4001) was again similar to that of the wild-type strain 168, while microscopic analysis showed that the average cell length of the mutant cells (2.4 ± 0.4 µm) was slightly (about 25%) greater than that of the wild type (1.9 ± 0.3 µm) (Table 1). To confirm that the mutant protein had lost its TPase activity and was unable to bind bocillin when expressed in B. subtilis, live cell bocillin-FL labelling was used for both the wild-type strain 168 and the mutant strain 4001 (Fig. 1D). As PBP 2B co-migrated with PBP H in the gel, ΔpbpH and ΔpbpH pbpB* mutants (strains 4017 and 4024 respectively) were constructed and analysed by live cell bocillin-FL labelling to confirm the lack of labelling of the mutant PBP 2B. These results confirmed that while the PBP 2B protein is absolutely required for cell division, its TPase activity is not.
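To make the size comparison explicit, the short sketch below reproduces the ~25% figure from the mean lengths quoted above; the values are those given in the text and the variable names are illustrative only.

```python
# Sketch: percent difference in mean cell length between strain 4001 (pbpB*) and the
# wild type, using the means quoted in the text (values in micrometres).
wild_type_mean, wild_type_sd = 1.9, 0.3
pbpB_star_mean, pbpB_star_sd = 2.4, 0.4

increase = 100 * (pbpB_star_mean - wild_type_mean) / wild_type_mean
print(f"pbpB* cells are ~{increase:.0f}% longer than wild type "
      f"({pbpB_star_mean} ± {pbpB_star_sd} vs {wild_type_mean} ± {wild_type_sd} µm)")
```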
Biochemically inactive PBP 2B mutant requires the function of PBP 3
The unexpected result that a mutant carrying a biochemically inactive form of PBP 2B was viable suggested that either cross-linking of glycan chains is not required at the division site or that this activity can be provided by another enzyme. To test the latter idea, mutations in each of the known vegetatively expressed PBP genes (pbpA, pbpC, pbpD, pbpF, pbpH and ponA) were introduced into both the pbpB* mutant and a wild-type strain, with the expectation that knocking out a gene that could substitute for the cell division TPase activity of PBP 2B would be lethal in the pbpB* mutant background. As expected, in the pbpB+ background introduction of any of the null mutations gave colonies at the high frequency expected for single uncomplicated transformation events (Rivolta and Pagni, 1999). Similar transformation results were obtained for the pbpB* mutant with all of the null mutations except pbpC. For pbpC, the transformation efficiency was much lower (less than 10% of that seen for the wild-type strain) and two colony types were obtained; the majority were small and could not in general be subcultured, whereas a minority were normal in size. Microscopic examination of the small colonies revealed that their cells were filamentous, whereas those of the large colonies were wild-type in appearance. Sequencing of the pbpB locus from several of the large and a few of the small colonies that grew up showed that those that grew had lost the pbpB* mutation. These were most likely generated by a double transformation event in which they acquired the unselected copy of the wild-type allele of pbpB together with the pbpC null mutation. These results suggested that PBP 3 is essential in the absence of the TPase activity of PBP 2B.
Table 1. Cell dimensions (mean ± SD, µm)a
2.4 ± 0.7, 2.5 ± 0.6, 0.7 ± 0.1, 0.7 ± 0.1
KS53 (P spac pbpC*): 1.8 ± 0.4, 1.8 ± 0.4, 0.8 ± 0.1, 0.7 ± 0.08
KS54 (pbpB* P spac pbpC*): 2.4 ± 0.3, 2.4 ± 0.4, 0.8 ± 0.09, 0.8 ± 0.07
KS52 (P spac pbpC ΔpbpC(cat)): 1.8 ± 0.3, 1.8 ± 0.3, 0.8 ± 0.07, 0.8 ± 0.07
a. Dimensions were determined from images of exponentially growing cultures in which the cells were stained with membrane dye (FM 5-95). Greater than 100 cell measurements were taken for each sample; the values shown represent the mean cell size and the standard deviation (SD) for that sample. ND indicates where cell length was not determined.
To test whether the TPase activity of PBP 3 was required for complementation of PBP 2B (S309A), and to eliminate the possibility that the pbpC null mutation had unexpected polar effects on neighbouring gene expression, we constructed a plasmid carrying a mutant pbpC* allele (PBP 3 (S410A)). This mutation was expected to eliminate its TPase activity as it removed the serine residue that was predicted to be located in the active site of the PBP (Supporting Information Fig. S1A). This plasmid (pSG5666) was then integrated into the chromosome at the pbpC gene locus. In a wild-type recipient,
sequence analysis of 20 independent clones revealed that about 75% of the clones had picked up the mutant allele in the functional copy of pbpC, a frequency close to expectation based on a single crossover recombination event. However, none of the pbpB* recipients (0/12 checked by sequencing) acquired the pbpC* mutation. Thus, the pbpC* mutation probably renders PBP 3 unable to complement the function of PBP 2B (S309A). To confirm this, a strain (4009) was constructed with both the pbpC* and pbpB* mutant alleles and a second wild-type copy of gfp-pbpB under the control of the P xyl promoter. In the presence of xylose, to allow expression of the catalytically active version of pbpB, strain 4009 was indistinguishable from the wild-type, but in the absence of xylose the cells became filamentous and could not be cultured. These results indicate that the TPase activity of either PBP 2B or PBP 3 is essential for cell division, and this activity is either unique to these PBPs or is related to how these PBPs interact with other division proteins.
Mid cell localisation of PBP 3
As described earlier, the strain with an inactive PBP 2B (4001), although growing at a comparable rate to the wild-type strain, exhibited elongated cells during exponential growth (25% longer; Table 1). Thus, although PBP 3 activity can support cell division, it is apparently not as efficient as PBP 2B. This suggested that PBP 3 may not be recruited to the division site efficiently and so is not able to provide the required TPase activity to permit the normal progression of cell division in the pbpB* mutant. To test this, we initially used the P xyl inducible GFP-PBP 3 described by Scheffers et al. (2004). This was found to be at least partly functional, as it supported cell division in the absence of active PBP 2B (strain 4005). However, Western blot analysis using a polyclonal anti-PBP 3 antibody showed that the GFP fusion was either unstable, or that the expressed protein was processed, such that detectable levels of both PBP 3 and GFP-PBP 3 were present in the culture, even when chloramphenicol selection was maintained during growth (Supporting Information Fig. S2A). Consequently, we used immunofluorescence to resolve the localisation question. Initial inspection of images of cells suggested that PBP 3 may exhibit a bias toward localising to the division site and the poles of the cell in the strain lacking an active PBP 2B compared to the wild type, but it was difficult to visually quantify any significant differences between strains where PBP 2B was functional or inactive (Supporting Information Fig. S2B and C). To obtain a qualitative understanding of the sub-cellular distribution of PBP 3, heat maps representing the fluorescent signal obtained by IFM along the long axis of individual cells were generated for both wild type (168) and pbpB* (4001) strains (Fig. 2A and B). PBP 3 tended to localise mainly at mid cell, except in short cells in which a more distributed or polar localization was evident. The pattern was similar in wt and pbpB* cells, except that the mutant cells were in general longer and almost all elongated cells had PBP 3 enriched at the mid cell position. The PBP 2B distribution was similar in both wild-type and mutant cells, with a distinct mid cell localization except in a roughly similar proportion of the short cells. For comparison, the same analysis was done using antisera specific for PBP 2B (Fig. 2C and D), where clear accumulation of the protein occurs at the mid cell position even in relatively short cells, suggesting earlier localisation. The image analysis also showed that the cells of strains lacking active PBP 2B were clearly longer than those of the wild-type, suggesting, perhaps, an insufficiency of PBP 3. However, overexpression of pbpC from the strong hyper-spank promoter (Vavrová et al., 2010) did not change cell length or morphology in a pbpB* background (Supporting Information Fig. S3 and Table 1).
PBP 3 localisation at division sites depends on FtsZ and PBP 2B
Assembly of the divisome is regulated by the polymerization of FtsZ, a tubulin-like protein, into a ring at midcell (Bi and Lutkenhaus, 1991). Depletion of FtsZ resulted in a shift in the localization of PBP 2B from midcell to the lateral wall and an arrest in cell division (Scheffers et al., 2004). The depletion of PBP 2B also results in a cell division block, which suggests that PBP 2B might have a role in the co-assembly of other cell division proteins (Daniel et al., 2006). To investigate whether PBP 3 is part of the multiprotein complex involved in PG synthesis during cell division, we examined the localization of PBP 3 in PBP 2B- or FtsZ-depleted cells using IFM. Immediately after the washing step to begin depletion of PBP 2B, PBP 3 showed the expected predominant localization at midcell and the cell poles, with only occasional localization along the cell periphery (Fig. 3A). One hour after the removal of inducers, the cells were filamentous, as expected following the depletion of FtsZ or PBP 2B. In both cases, PBP 3 localized in a dispersed peripheral pattern with no sign of localization between nucleoids, where Z ring proteins would be expected to assemble (Fig. 3, panels B and C respectively). In the FtsZ depletion experiment, we also stained for PBP 2B and its localization was also dispersed, consistent with the expectation that its divisome localization depends on FtsZ. These results are consistent with PBP 3 being mainly associated with the cell division machinery and dependent on the presence of FtsZ.
PBP 3 shows significant similarity to PBP 2A of S. aureus
PBP 3 has the sequence motifs characteristic of a class B PBP (Murray et al., 1996). Sequence comparisons revealed that PBP 3 is strongly conserved in the Bacilli, with many strains encoding a protein with substantial similarity (> 48% identity). Interestingly, PBP 3 exhibits a similar degree of relatedness (41% identity for the entire gene) to the S. aureus PBP 2A (SaPBP 2A), which is encoded by mecA (Lim and Strynadka, 2002), including the presence of the domain MecA-N (pfam: 05223), which is infrequently present in PBPs (Supporting Information Fig. S4). SaPBP 2A is an accessory PBP that endows resistance to β-lactam antibiotics, being largely responsible for the MRSA phenotype (Hartman and Tomasz, 1984). The suggestion that PBP 3 might have a related function to both PBP 2B and SaPBP 2A prompted us to conduct a series of experiments to determine the role of PBP 3 in resistance to β-lactam antibiotics.
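As a rough illustration of the kind of comparison behind the identity figures quoted above, the sketch below computes percent identity from a pre-aligned pair of sequences; the toy sequences are placeholders, not the real PBP 3/SaPBP 2A alignment, which would normally come from a dedicated alignment tool.

```python
# Sketch: percent identity over an existing pairwise alignment (gaps written as '-').
# The two aligned sequences below are toy placeholders, not PBP 3 or SaPBP 2A.

def percent_identity(aligned_a, aligned_b):
    """Identical residues / aligned columns (columns that are gaps in both are skipped)."""
    assert len(aligned_a) == len(aligned_b), "aligned sequences must be the same length"
    columns = [(a, b) for a, b in zip(aligned_a, aligned_b) if not (a == "-" and b == "-")]
    identical = sum(1 for a, b in columns if a == b and a != "-")
    return 100 * identical / len(columns)

seq_a = "MSKLV-TPGRA"   # toy aligned sequence 1
seq_b = "MSQLVATPGKA"   # toy aligned sequence 2
print(f"{percent_identity(seq_a, seq_b):.1f}% identity")
```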
Disruption of pbpC increases sensitivity to specific β-lactams
Antibiotic sensitivity tests showed that the loss of PBP 3 made cells more sensitive to certain β-lactams compared to the wt strain (Fig. 4). The strain carrying the ΔpbpC mutation (4015) only grew on plates containing oxacillin and cephalexin when inoculated at significantly higher densities (1000-fold) than the wild type or the other pbp null mutants tested (ponA, strain 4014; pbpD, 4013; pbpE, 4011 and pbpF, 4012), with the exception of a strain lacking PBP 2A. The latter strain was more sensitive to oxacillin and cephalexin, which could be related to the mild growth defect that this null mutation causes (Murray et al., 1998). However, the reasons behind the increased sensitivity of the pbpC mutant were less clear. Microtitre-based MIC tests (Table 2) also demonstrated a clear increase in sensitivity to both oxacillin and cephalexin for a strain deleted for pbpC, whereas there was no significant change in the MIC for penicillin G.
Fig. 2. Heat map representation of the distribution of PBP 3 and PBP 2B on the long axis of the cell in a population of cells. Immunofluorescence images were analysed using a line scan function. The output of this was then used to generate heat maps where each horizontal line represents an individual cell (sorted according to cell length) and the colour indicates the level of fluorescence detected (ranging from dark blue to red and yellow in terms of strength). To provide a second landmark for the interpretation of the images, the subcellular distribution of the chromosomal DNA was determined exploiting DAPI labelling and quantitation of the fluorescence signal in the same way. Panels A and B show the PBP 3 cellular distribution in the wild-type strain and 4001 (pbpB*) respectively, whereas C and D indicate the PBP 2B distribution. For each immunofluorescent analysis, the corresponding DNA distribution is shown. For 168, the data represent the analysis of > 140 cells, whereas for 4001 the panel is the summary of > 400 cells.
Fig. 4. An exponentially growing culture of each bacterial strain was diluted to a specific optical density and used as the starting point for a fourfold dilution series. Samples (10 µl) of the dilutions were spotted onto nutrient agar plates containing the antibiotics cephalexin (A; 0.08 µg ml⁻¹), oxacillin (B; 0.04 µg ml⁻¹), penicillin G (C; 0.005 µg ml⁻¹) and no antibiotic (D). Plates were then incubated at 37°C for 18 h prior to being photographed.
The crystal structure of SaPBP 2A with several β-lactams suggested that the reason behind the resistance of SaPBP 2A to certain β-lactams is the structure of the TPase active site of the protein (Otero et al., 2013). This observation suggests that SaPBP 2A provides the TPase activity required to crosslink the peptides in PG when other SaPBPs are blocked by β-lactams. To test if PBP 3 in B. subtilis has a comparable resistance mechanism to β-lactams, several mutants, including mutants with inactive PBP 2B (S309A) or with PBP 3 deleted, were tested against oxacillin. As shown in Fig. 4, cells lacking PBP 3 showed increased sensitivity to oxacillin compared to wild-type cells. To further investigate the resistance mechanism of PBP 3 to oxacillin, either a wild type copy of the pbpC gene or a mutant copy of pbpC (pbpC*), analogous to the pbpB* (S309A) mutation, was introduced into the amyE locus under the control of the P hyper-spank promoter. In the absence of IPTG, all mutants lacking PBP 3 showed increased sensitivity to oxacillin compared to wild type (Supporting Information Fig. S3, panel A). The addition of IPTG, which allowed the expression of one of the pbpC alleles, pbpC or pbpC*, showed that the ectopic expression of PBP 3, but not PBP 3 (S410A), decreased the sensitivity of cells to oxacillin, restoring it to almost wild type levels. These results confirm that PBP 3 is the 'resistance allele' and that the active site serine is required for β-lactam resistance. Previous analyses have indicated that both oxacillin and cephalexin may have specificity towards PBPs involved in cell division in B. subtilis, which
suggests that PBP 3 could potentially be acting redundantly with PBP 2B, which is more sensitive to certain β-lactams (Stokes et al., 2005). It was also found that ectopic expression of PBP 3* as a competitor for the natively expressed PBP 3 had no significant effect on cell viability, cell morphology or β-lactam resistance (Supporting Information Fig. S3, panel A).
Previous transcriptional analysis by Nicolas et al. had indicated that pbpC was constitutively expressed and not subject to upregulation by stress. However, it has been shown that antibiotic exposure leads to the induction of a diverse set of genes under the control of various ECF sigma factors, particularly SigM, and this increased expression permits the cell to grow in the presence of antibiotics. To determine if PBP 3 had a role in this 'resistance' mechanism, we analysed the sensitivity of a strain lacking both pbpC and sigM to oxacillin, penicillin G and moenomycin (as a non-β-lactam cell wall synthesis inhibitor) (Supporting Information Fig. S5), compared to the isogenic single null mutants and a strain lacking all 7 ECF sigma factors (BSU2007). Loss of PBP 3 had no effect on moenomycin resistance, whereas a strain lacking sigM was significantly more sensitive (as was the strain lacking multiple ECF genes (BSU2007); as seen by Luo and Helmann, 2012). Thus, PBP 3 probably has no role in the ECF-mediated resistance to cell wall inhibitors. However, when penicillin G and oxacillin were used, the effects of the sigM and pbpC mutations were additive, with pbpC apparently contributing more to the sensitivity (Supporting Information Fig. S5).
β-lactam binding specificity of PBPs
The increased sensitivity of the pbpC mutant was consistent with the notion that PBP 3 has a protective role against the action of β-lactam antibiotics. If so, this should be detectable at the biochemical level via differences in the specificity of oxacillin and cephalexin binding to the PBPs expressed in B. subtilis.
To test this, we developed an assay based on direct binding of bocillin-FL (Gutmann et al., 1981; Zhao et al., 1999) to live cells, bypassing the need to purify membranes and allowing rapid and reproducible processing of samples (Fig. 5). (Note that in these experiments PBP 2B overlaps with PBP H, rather than PBP 2A, for reasons that are not clear.) The bocillin-FL labelling profiles of wild type culture samples pre-treated with β-lactams revealed that one higher molecular weight PBP and PBP 5 (Atrih et al., 1999) (asterisks in Fig. 5) were relatively refractory to binding of any of the three compounds at the concentrations tested. The former protein was identified as PBP 3 by the absence of the corresponding fluorescent band in the PBP profile of a pbpC null mutant (Supporting Information Fig. S3, panel B, lane ΔpbpC compared to the WT). To exclude the possibility that the poor binding of oxacillin and cephalexin to PBP 3 was due to an inability of these compounds to access the active site in vivo, a range of other β-lactams were screened for their ability to bind PBP 3. From this analysis, it was clear that although oxacillin and cephalexin did not show strong affinity for PBP 3, cefoxitin (Supporting Information Fig. S5) did exhibit binding to PBP 3 at comparable concentrations to other β-lactams under the same conditions, showing that prior treatment with at least one β-lactam can prevent bocillin-FL binding to PBP 3. It was also found that a strain lacking active PBP 2B (pbpB*; 4001) exhibited increased sensitivity to cefoxitin (Table 2). It was also notable that the strains lacking functional PBP 2B were more filamentous when cultured in media containing sub-inhibitory concentrations of cefoxitin compared to the wild-type strain.
Discussion
The results presented here show that PBP 3 is important for β-lactam resistance in B. subtilis and that it has a crucial role in enabling cell division when the catalytic activity of PBP 2B is compromised. As such, PBP 3 may provide a fail-safe mechanism that can rescue cells from potentially catastrophic division failure. The inactivation of the PBP 2B homologue in S. pneumoniae, PBP 2X, was lethal (Peters et al., 2014), probably due to the absence of any other PBP that could functionally supply the transpeptidase activity necessary for cell division. Many relatives of B. subtilis have recognisable PBP 3 homologues, suggesting that this back-up to the division-associated TPase is a common function in this bacterial family. Interestingly, database searches excluding the Bacilli revealed PBP 3 to have similarity to SaPBP 2A of methicillin-resistant Staphylococcus aureus (MRSA). SaPBP 2A is responsible for β-lactam tolerance in MRSA and it works by providing a TPase with a low affinity for β-lactam antibiotics that is able to function in cell wall synthesis when the other PBPs are inhibited. As for PBP 3 in B. subtilis, SaPBP 2A endows S. aureus with resistance to a range of β-lactams, not least high level resistance to oxacillin, which is a distinguishing marker for MRSA. As PBP 3 homologues are present in a wide range of Bacillus spp., few if any of which are pathogenic, it is unlikely to have been acquired as a result of man's use of antibiotics. Our results therefore lend support to the idea that β-lactam antibiotics exerted selective evolutionary pressure long before they were exploited by man. In accordance with previous speculation (Kreiswirth et al., 1993), our observations on the role of PBP 3 in Bacillus suggest that a mechanism to protect the cell from an abortive attempt to divide may be either very ancient in origin or a result of convergent evolution, producing a similar solution to abortive division in the Entero/Staphylococci through the clinical use of β-lactams. In this respect, the data presented here have strong parallels with those seen for PBP 2A in S. aureus (Pinho and Errington, 2005). However, the constitutive expression of PBP 3 during vegetative growth, in addition to the active recruitment of PBP 3 to the assembling divisome, suggests that PBP 3 might have an active role in septal PG synthesis and does not only function as a backup under antibiotic stress (Murray et al., 1996; Nicolas et al., 2012). The results presented here provide a functional role for another PBP encoded in the genome of B. subtilis and expressed in vegetative growth. Using these data and previous analyses (Kawai et al., 2009, 2011), it is apparent that there is significant functional redundancy among these cell wall synthetic enzymes. This highlights the functional importance of correct wall synthesis for bacterial viability and the ability to adapt to rapidly changing growth conditions or exposure to inhibitory compounds. PBP 3 does not appear to be upregulated upon cell wall stress, despite the existence of a diverse set of transcriptional regulation systems designed to respond to cell wall perturbation [e.g., ECF sigma factors (Helmann, 2002)], but is constitutively expressed (see Supporting Information data). Specifically, the ECF sigma factor SigM is thought to play an important role in intrinsic resistance to antibiotics in B. subtilis (Luo and Helmann, 2012).
However, a strain with both sigM and pbpC mutations was more sensitive to β-lactams than the single mutants, suggesting an additive effect. These results support the premise that PBP 3 contributes to the intrinsic resistance of B. subtilis to β-lactams independently of SigM (Supporting Information Fig. S5). These results could be interpreted as indicating that the process of septum formation during cell division is prone to perturbations, resulting in the need for functionally redundant enzymes that provide a robust system to avoid abortive cell division.
In the light of these results, we appear to have identified a point at which septal PG synthesis can be arrested even though full assembly of the division complex seems to have occurred. This is consistent with the results obtained by Bisson-Filho et al. (2017) looking at the dynamics of the division complex in living cells. Both their results and ours suggest that the lack of the key biochemical activity provided by PBP 2B or PBP 3 can block the constriction of the division site. Consequently, we are now focused on determining the biochemical role of PBP 2B in the division process and how PBP 3 is able to provide this activity. This potentially explains the delayed division phenotype observed for strains lacking active PBP 2B and offers a novel route toward studying the dynamics of the division process by microscopy. Following submission of this work, similar results have been obtained by Angeles et al. (2017).
General methods
The strains, plasmids and oligonucleotides used in this study are listed in Table 3.
B. subtilis strains were transformed according to the method of Anagnostopoulos and Spizizen (1961) as modified by Jenkinson (1983). Simple genetic constructions where markers were moved from one background to another are described in Table 3, whereas more complex strain constructions are described below.
DNA manipulations and E. coli transformations were carried out using standard methods (Sambrook et al., 1989). Plasmid DNA was purified using DNA purification systems of Qiagen and Promega according to the manufacturer's instructions.
B. subtilis was cultured on nutrient agar (Oxoid) as a solid medium, and antibiotic medium 3 (Difco) for liquid cultures. Genetic constructs were selected for using kanamycin at 5 µg ml⁻¹, chloramphenicol at 5 µg ml⁻¹ and spectinomycin at 50 µg ml⁻¹. IPTG (0.5 mM) and/or xylose (0.5%) were added as necessary. E. coli strains were cultured on nutrient agar or in 2YT (Sambrook et al., 1989), supplemented with ampicillin (100 µg ml⁻¹) as required.
Protein samples from B. subtilis were prepared and Western blotting was done as described by Daniel et al. (2000). Strain constructions were confirmed by antibiotic resistance, dependence upon specific inducers (where appropriate) and by the use of PCR amplification across the regions of insertion, combined with DNA sequencing of the region if required.
Determination of sensitivity to β-lactam antibiotics
Serial dilutions of the test β-lactams were prepared in PAB, inoculated with between 100 and 5,000 cells in a final volume of 200 µl (confirmed post-analysis by conventional CFU determination on nutrient agar plates) and incubated at 37°C in a BMG microtitre plate reader with shaking. The optical density of the cultures was then measured after 8 h of incubation at 37°C. The lowest concentration of antibiotic preventing growth of the culture was defined as the MIC. The results of 8 independent assays are summarised in Table 2. However, as this represented a very small bacterial culture, it was impractical to use for analytical methods, so the concentration of antibiotic necessary to lyse a culture at an optical density (600 nm) of 0.2 was determined and denoted the Minimum Lytic Concentration (MLC). The MLC values of oxacillin, cephalexin and penicillin G for 168CA grown in PAB were determined to be 0.3 (± 0.1), 0.8 (± 0.15) and approximately 20 (± 4) µg ml⁻¹ respectively.
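A minimal sketch of the MIC read-out described above is given below: it simply returns the lowest tested concentration at which the endpoint optical density stayed at background. The concentrations, OD values and growth threshold are invented for illustration and are not data from this study.

```python
# Sketch: reading an MIC off endpoint OD600 values from a microtitre dilution series.
# Concentrations, OD values and the growth threshold are invented for illustration.

def mic(od_by_conc, growth_threshold=0.05):
    """Return the lowest antibiotic concentration at which the culture did not grow."""
    for conc in sorted(od_by_conc):
        if od_by_conc[conc] <= growth_threshold:
            return conc
    return None  # growth occurred at every concentration tested

# Example: two-fold dilution series (µg/ml) with endpoint OD600 readings.
readings = {0.025: 0.62, 0.05: 0.60, 0.1: 0.55, 0.2: 0.04, 0.4: 0.02, 0.8: 0.02}
print("MIC =", mic(readings), "µg/ml")
```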
To allow comparison of the sensitivity of strains, cultures of the test strains were incubated overnight at 30°C in PAB medium, then diluted 1:10 in the same medium and incubated for a further 1 h at 37°C. After this time, the cultures were diluted to OD600 = 1.0 and a series of fourfold dilutions was made using SMM. A 10 µl spot of each serial dilution was dropped onto nutrient agar (NA) plates containing cephalexin, oxacillin or penicillin G, as well as onto an NA plate with no antibiotic. The plates were then incubated at 37°C for 24 h before being photographed. The antibiotic concentrations required to detect differential strain sensitivity were determined empirically (using the MIC values as a guide). From this analysis, it was found that 0.08 µg ml⁻¹ cephalexin, 0.04 µg ml⁻¹ oxacillin and 0.005 µg ml⁻¹ penicillin G gave the best differentiation and reproducible results.
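The spotting scheme above amounts to simple serial-dilution arithmetic; a brief sketch is shown below, in which the assumed CFU density at OD600 = 1.0 is only a rough illustrative ballpark, not a value reported in this study.

```python
# Sketch: nominal cell numbers deposited per 10 µl spot of a fourfold dilution series,
# starting from a culture adjusted to OD600 = 1.0 (assumed ~1e8 CFU/ml, illustrative only).
cfu_per_ml = 1e8
spot_volume_ml = 0.010
for step in range(6):
    cells_per_spot = cfu_per_ml * spot_volume_ml / (4 ** step)
    print(f"dilution 4^-{step}: ~{cells_per_spot:,.0f} cells per 10 µl spot")
```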
Elimination of the transpeptidase activity of PBP 2B by the S309A mutation
To test the ability of PBP 2B and PBP 2B (S309A) to bind penicillin, the coding sequence of pbpB was amplified by PCR from genomic DNA of strain 168 (using oligonucleotide primers PBPB-F and R), digested with BamHI and KpnI and ligated with similarly digested pQE31. The transformation of this DNA into NM554 (pREP4) allowed the isolation of pSG5670 in which the wild-type pbpB gene was expressed from the IPTG-inducible promoter of pQE31. The S309A mutation was then introduced into pSG5670 by site directed mutagenesis (primers SDM B-F and R) to give pSG5671.
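As an illustration of the kind of codon change introduced by the site-directed mutagenesis step, the sketch below swaps a serine codon for an alanine codon at a chosen codon position in a coding sequence. The toy sequence, the codon position and the choice of GCT as the replacement codon are illustrative only; they are not the actual pbpB sequence or the SDM primers used in this work.

```python
# Sketch: replacing a serine codon with an alanine codon at a given codon position.
# The sequence, position and replacement codon below are illustrative only.

SERINE_CODONS = {"TCT", "TCC", "TCA", "TCG", "AGT", "AGC"}

def ser_to_ala(cds, codon_number, ala_codon="GCT"):
    """Return the coding sequence with the serine codon at codon_number (1-based) replaced."""
    start = (codon_number - 1) * 3
    old = cds[start:start + 3].upper()
    if old not in SERINE_CODONS:
        raise ValueError(f"codon {codon_number} is {old}, not a serine codon")
    return cds[:start] + ala_codon + cds[start + 3:]

toy_cds = "ATGGCT" + "TCT" + "GGTAAA"   # toy ORF with a serine codon at position 3
print(ser_to_ala(toy_cds, 3))            # -> ATGGCTGCTGGTAAA
```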
Cultures of NM554 (pREP4) with pSG5670, pSG5671 or NM554 alone (as a negative control) were grown in 2YT to an optical density (600 nm) of 0.5, at which point IPTG was added to the cultures and incubation continued for 1 h at 37°C. Samples (50 ml) of each culture were then harvested and the cell pellets were washed with PBS, then suspended in 15 ml PBS and held on ice. The cells in each sample were broken by three passages through a French press (2,000 lb in⁻²). The resulting cell lysates were centrifuged at 2,000 r.p.m. in a bench-top centrifuge for 20 min to remove unbroken cells and large pieces of debris. The supernatants were then centrifuged at 80,000 r.p.m. for 30 min at 4°C to pellet membrane vesicles. The resulting pellets were suspended in 400 µl PBS by sonication for 2 s. The resulting suspension was then divided into aliquots of 100 µl and stored at -20°C.
The affinity of PBP 2B and PBP 2B (S309A) for penicillin was determined by adding 2 µl (1 mg ml⁻¹) of bocillin-FL (Invitrogen) to 100 µl of the cell membrane samples, followed by incubation at room temperature for 15 min. 100 µl of 2× SDS sample buffer was then added to each sample and the proteins were denatured at 99°C for 2 min prior to being separated by SDS-PAGE. The proteins with covalently bound bocillin-FL were then identified by scanning the protein gel for fluorescent emission using a Fuji FLA3000 scanner (Zhao et al., 1999). The protein gel was stained with Coomassie Blue (R250; Sigma) to confirm that approximately equal amounts of E. coli total membrane proteins and PBP 2B and PBP 2B (S309A) had been loaded (Fig. 1). A fluorogram of the gel (Fig. 1) showed that only the wild type PBP 2B was fluorescent, with no detectable fluorescence in the equivalent position in the lane with the mutant protein, although Coomassie staining showed that similar amounts of protein had been loaded.
Construction of strains with conditional expression of modified pbpB genes
To allow repressible expression of pbpB and to introduce mutations into the genome of B. subtilis, plasmid pSG5601 was constructed by amplifying the 5′ portion of the pbpB gene, comprising the RBS and the coding sequence of the first 368 amino acids, using PCR primers pbpB-1 and pbpB-2, and cloning it into pSG441 such that the RBS of the pbpB gene was placed in front of the P spac promoter. Plasmid pSG5601 was then transformed into 168, selecting for kanamycin resistance in the presence of IPTG. A clone that was IPTG-dependent for growth was isolated and designated strain 3941.
To determine the functionality of PBP 2B (S309A), the wild-type pbpB was amplified by PCR with primers pbpB-3 and -4, and the resulting DNA fragment was inserted into plasmid pSG1729 to give plasmid pSG5663; this construct was designed to produce a GFP fusion to PBP 2B that was under the control of the P xyl promoter and could be integrated into the amyE locus of B. subtilis. Site-directed mutagenesis was then used to change the serine 309 codon of pbpB to encode alanine (using primers SDM B-F and -R), resulting in plasmid pSG5664. Plasmids pSG5663 and pSG5664 were transformed into strain 168 to give strains 4002 and 4003 respectively, giving strains with a second copy of either wild type or mutant PBP 2B (PBP 2B (S309A)) under the control of the P xyl promoter. To determine if the inducible PBP 2B (S309A) gene could complement depletion of the wild-type protein, the P spac promoter was inserted immediately upstream of pbpB by transforming strain 4003 with a DNA fragment amplified by PCR, spanning from yllA to the end of ftsL (primers yllA-F and ftsL-R), ligated to pSG5601 that had been digested with SphI and blunted with Klenow. Selection for kanamycin resistance in the presence of IPTG gave rise to several transformant colonies that were then screened for kanamycin and spectinomycin resistance as well as dependence on IPTG for cell growth. A single clone (strain 4004) was then taken and the location of the insertion of the P spac promoter, lacI and the kanamycin resistance cassette between the stop codon of ftsL and the RBS of pbpB was confirmed by PCR and sequencing.
As it was found that strain 4004 was viable when grown in the presence of xylose alone, indicating that PBP 2B (S309A) may function for cell division, site-directed mutagenesis (primers SDM B-F and -R) was then used to introduce the S309A mutation into pSG5601, creating plasmid pSG5662. Plasmid pSG5662 was then integrated into 168 and the location of the pbpB* (S309A) mutation was determined (either in the truncated copy of pbpB under the native promoter or in the full-length copy under the P spac promoter). A clone of 168 with pSG5662 integrated into the chromosome was then isolated, in which the S309A mutation was located in the functional copy of pbpB, as determined by PCR and sequencing, and denoted strain 4000. Strain 4000 was then grown in the absence of IPTG to promote the excision of the integrated plasmid. An isolate resulting from this technique, strain 4001, was found to have the pbpB* mutation without any of the plasmid sequences used to introduce the mutation.
To determine the phenotype of a strain with catalytically inactive PBP 2B and PBP 3, strain 4009 was constructed in 3 steps. Firstly, chromosomal DNA from strain 4002 (P xyl inducible gfp-pbpB) was transformed into strain 4001 (with pbpB*) to give strain 4006, containing both pbpB* at the chromosomal locus and a P xyl inducible wild-type copy of pbpB fused to GFP at amyE. Secondly, the C-terminal part of pbpC was amplified by PCR (primers pbpC-1 and -2) and inserted into plasmid pUK19 to give plasmid pSG5665. Site-directed mutagenesis of pSG5665 using oligonucleotides SDM C-F and -R was then used to change the codon for the catalytic serine at position 410 to encode alanine, resulting in plasmid pSG5666. Finally, plasmid pSG5666 was transformed into strain 4006, selecting for kanamycin resistance in the presence of xylose. The resulting transformants were then screened for xylose dependence, and PCR was used to confirm that the genotype of the strain was as expected, resulting in strain 4009.
Construction of conditional alleles of pbpC
To increase the cellular abundance of PBP 3, the pbpC gene, including the native ribosome binding site (PCR amplified using oligos pbpC-3 and -4), was cloned into pDR111 digested with SphI to give pKS4. This plasmid was then used as a template for site-directed mutagenesis to produce pKS5, in which the active site serine of PBP 3 was replaced by alanine (S410A) as described above. These plasmids were then used to generate strains KS50 and KS53 by transformation and screening for loss of amylase activity to confirm insertion of the IPTG-inducible copies of pbpC into the amyE locus. Strains KS51 and KS54 were then constructed by the transformation of strain 4001 with pKS4 and pKS5 respectively. Finally, strain KS52 was constructed by the introduction of the pbpC null mutation from strain 4015 into KS50.
Depletion of PBP 2B
Where lethal effects were predicted or identified, strains were constructed with a complementing copy of the pbpB gene under the control of an inducible promoter. To analyse the phenotype of such strains, the culture conditions were manipulated to remove one or both inducers as described by Daniel et al. (2000), and samples were taken every 30 min for the measurement of OD600, microscopy and Western blot analysis where required.
Cellular abundance of PBP 2B and GFP-PBP 2B (S309A)
To ensure that the normal growth of strain 4004 was not due to residual expression of the wild type gene, the total protein content of strain 4004 grown under the conditions described above was analysed by Western blotting using antiserum specific for PBP 2B (Fig. 1C). The results showed that a small amount of wild-type PBP 2B was present in the culture grown in the absence of IPTG after incubation for 1.0 h, but cell division was clearly perturbed in cultures lacking both xylose and IPTG by this time, and by 1.5 h cell lysis was evident.
Microscopy
For microscopy, cells from exponentially growing cultures were mounted on a thin film of 1% agarose in SMM medium (Anagnostopoulos and Spizizen, 1961), essentially as described previously (Glaser et al., 1997). To stain cell membranes, Nile red or FM5.94 was added to a sample of the culture to a final concentration of 2 mg ml−1 prior to mounting on an agarose slide. For the subcellular localisation by immunofluorescence of PBP 3 and PBP 2B, the strains were grown to mid exponential phase of growth (OD600 0.5). For the localisation of PBP 2B and PBP 3 in cells depleted of PBP 2B or FtsZ, strains 3941 and 1801 were used, respectively, following the depletion protocol described in (Daniel et al., 2000). Subsequently, samples were fixed by the addition of an equal volume of ice-cold fixation buffer (5% paraformaldehyde in PBS) and held on ice for at least 30 min. Cells were washed 3× in PBS and suspended in GTE buffer (50 mM glucose, 25 mM Tris/HCl pH 8 and 10 mM EDTA pH 8). The cell suspension was then spotted on a dry multiwell slide and allowed to stand for 5 min. The solution was aspirated off and the slide left to dry. Poly-lysine (0.01%) was spotted onto the cells and left for 2 min, aspirated off and allowed to air dry. Cell spots were then treated with lysozyme (10 mg ml−1 in GTE) for 2 min, washed with PBS and allowed to dry. Cells were re-hydrated with PBS for 2 min then blocked with PBS/2% BSA for 15 min. Primary antibody was added to the cells and incubated overnight at 4°C. The cell spots were then washed 10× with PBS before applying the secondary antibody (1/10,000 dilution in PBS/2% BSA) to the slide and incubating at room temperature in the dark for 1.5 h. Spots were then washed 10× with PBS, DAPI (0.2 mg ml−1) in antifade (ProLong Gold; Invitrogen) was used as a mountant and a coverslip applied.
Microscopic images were taken using a Nikon TiE microscope coupled to a Hamamatsu C9100 EMCCD or a Sony CoolSnap HQ2 camera operated by Metamorph 7 imaging software (Universal Imaging). All images were analysed with Metamorph 7 imaging software. Python 2.7 scripts were used to sort the fluorescence data and ImageJ software was used to create the heat maps (Fig. 2), whereas Adobe Photoshop version 7.0.1 was used to construct figures.
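The sorting and heat-map step described above is typically a matter of resampling per-cell fluorescence line scans onto a common axis, ordering the cells by length and stacking them into a matrix. The sketch below illustrates that idea only; it is not the authors' Python 2.7 script, and the file layout, normalisation and fake data are assumptions for the example.

```python
# Illustrative sketch: build a "demograph"-style heat map from per-cell
# fluorescence profiles. NOT the authors' script; data handling is assumed.
import numpy as np
import matplotlib.pyplot as plt

def build_heatmap(profiles, n_points=100):
    """profiles: list of 1-D arrays, one fluorescence line scan per cell."""
    ordered = sorted(profiles, key=len)            # sort cells by length
    rows = []
    for p in ordered:
        # Resample every cell onto a common relative coordinate (0..1).
        x_old = np.linspace(0.0, 1.0, num=len(p))
        x_new = np.linspace(0.0, 1.0, num=n_points)
        resampled = np.interp(x_new, x_old, p)
        # Normalise each cell so intensities are comparable between cells.
        rng = resampled.max() - resampled.min()
        rows.append((resampled - resampled.min()) / rng if rng > 0 else resampled * 0.0)
    return np.vstack(rows)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake data: 200 cells of varying length with a mid-cell fluorescence band.
    fake = []
    for _ in range(200):
        n = int(rng.integers(40, 120))
        x = np.linspace(-1, 1, n)
        fake.append(np.exp(-(x / 0.2) ** 2) + 0.1 * rng.standard_normal(n))
    heat = build_heatmap(fake)
    plt.imshow(heat, aspect="auto", cmap="viridis")
    plt.xlabel("relative position along cell")
    plt.ylabel("cells (sorted by length)")
    plt.show()
```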
PBP profile determination and β-lactam specificity
To provide sufficient protein for the analysis of the PBP profiles, cultures were grown to an OD600 of 0.2 in PAB (with 1 mM IPTG where required), and then 0.5 µg ml−1 bocillin-FL was added directly to the culture medium and incubated for 1 min at RT to allow binding of the penicillin. The cells were then harvested and broken by sonication for 15 s on ice. Total protein extracts were resolved by SDS-PAGE and the resulting gels scanned using a Typhoon scanner (GE Healthcare).
To determine the specificity of other β-lactams compared to bocillin-FL, the cultures were pre-treated with the β-lactam (oxacillin, cephalexin, ceftriaxone or penicillin G) for 2 min prior to the addition of bocillin-FL. The cells were then treated as described above to allow the PBP profile to be identified. | 2018-04-03T02:53:29.802Z | 2017-08-29T00:00:00.000 | {
"year": 2017,
"sha1": "664ae26aa665e542d8a71dd742c17f9cf2a1c866",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/mmi.13765",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "664ae26aa665e542d8a71dd742c17f9cf2a1c866",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
247467320 | pes2o/s2orc | v3-fos-license | A Sustainable Advance Payment Scheme for Deteriorating Items with Preservation Technology
: Profitably managing inventories is always a big challenge for retailers in the current context of transparent and competitive business. A general retailer always needs to handle both deteriorating and non-deteriorating products simultaneously to run a business. Deterioration of products sometimes impacts a retailer’s profits badly—a situation which can be alleviated by implementing proper preservation technology. In addition, to improve profits and minimize costs, a retailer always seeks some credit facilities (e.g., advance payment, trade credit facilities, etc.) from the supplier to continue the business smoothly with minimum investment. Advance payment is renowned for preventing the possibility of business orders being canceled and helping the retailer to minimize the risk of investing significant amounts at a single time. The foremost objective of this research is to analyze the facilities of advance payment and preservation technology investment and concurrent attempts to deal with shortages. This study shows that, given the presence of preservation technology, the result of case II is 68.06% higher than that of case I, whereas when preservation technology is absent, the result of case II is 71.93% higher than that of case I. The managerial insights of this analysis reveal that preservation technology attempts to prolong product life by preventing deterioration, which contributes to the retailer’s profitable business. On the other hand, in the case of an advance payment scheme, although the costs are relatively high, the study emphasizes the importance of the advance payment facility as it limits the risk of order cancellation and makes business more flexible for both supplier and retailer. The proposed model is solved by the classical optimization technique. Some theoretical derivations with numerical analysis support the model and provide some managerial insights for practitioners.
Introduction
Over the past century, determining the order quantity of products (or lot size) has been a prime concern of inventory researchers. In inventory research, the author of [1] first proposed a simple economic order quantity (EOQ) model that considers holding cost and ordering cost to determine an inventory's order quantity. Over the years, many inventory researchers have tried to modify the seminal work of Harris [1], but very few have successfully incorporated economic and non-economic attributes into the EOQ model of [1,2]. In practice, many factors (e.g., deterioration, demand, credit policy) affect inventory models in daily life, so maintaining a sustainable inventory has become a challenge today.
1. This study discusses how preservation technology can play an important role in preserving products.
2. This allows the retailer to enjoy an optimum advance payment scheme when he cannot invest huge amounts in the business and seeks offers from the suppliers.
3. This study frames some ordering and investment decisions that can allow the profitable preservation of products so that the retailer does not have to make continuous investments, supporting knowledge of how to manage a harmonic relation between the simultaneous investment in preservation and the offer enjoyed from the supplier.
This paper comprises 8 sections. In Section 2, a brief literature review has been presented. In Section 3, the problem description, notations, and assumptions are set out, and in Section 4 the mathematical formulation, together with some propositions (theoretical derivations), of the model is represented. Section 5 consists of numerical illustrations, and in Section 6 a sensitivity analysis is performed. Finally, Section 7 presents managerial insights, and conclusions and future prospects are described in Section 8.
Literature Review
This section contains a brief literature review on preservation technology (PT) and advance payment according to the joint pricing inventory model.
Traditional Inventory System
The authors of [14] pioneered the classical inventory model, the focus of which is constant demand. Several researchers have tried to extend the idea of [15] by incorporating numerous marketing parameters.
In [16], a model with a constant deterioration rate for the perishable items and compensation of the purchasing cost prior to receiving the products was established. Quite a few years later, Skouri and Papachristos anticipated a decaying model for constant deterioration, ordering decisions, capital constraints, and shortages [17]. The authors of [18] anticipated an economic order quantity model with consideration of linear type demand with credit policy and expiration dates for perishable product items. Price is always a vital factor to control the demand of the customers. Considering price-dependent demand, many researchers projected their models. In [19], a pricing model for partially backlogged shortages under two-levels of trade credit policy was considered, and Mashud [20] discussed an inventory model with consideration of numerous price-dependent demands under shortages. Most of these studies considered price-sensitive demand but none of them considered a combination of advertisement frequency and price of products simultaneously. The authors of [3] formulated a model which joined the effect of pricing strategies and advertisement policy for deteriorating items, with preservation technology used to curb deterioration. In [21], a model was designed on the basis of considerations of price and advertisement-dependent demand for non-instantaneous decaying products. The combination of advertisement, pricing, and preservation with advance payments is rare in the previous literature. It should also be noted that there are lots of other factors responsible for customer demand. Ignoring this gap for now, however, this study proposes to consider constant demand, with the main focus being on payment systems and preservation technologies.
Inventory Model with Preservation Technology (PT)
Deterioration of products means decay, evaporation, and loss of utility that results in the loss of qualities that were present in products' original conditions. It is also measured by injuries due to transportation, poor handling, etc., and applies to products that have lost their marginal value or have broken. A model was framed in [16] using an exponentially deteriorating inventory, and Mashud et al. [22] anticipated an economic order quantity model for numerous deteriorating items, while Mashud and Hasan [23] predicted the combined effects of advertisement and the price of products for a deteriorating model. To curb the deterioration, a number of energy and eco-friendly strategies have been discussed over the years. In this continuation, G. Li et al. [8] projected an inventory model using PT for non-instantaneous decaying items and projected two different models based on a non-instantaneous period, showing how preservation technology can help to optimize profit. However, the study also shows that investment in preservation has certain limits beyond which profits may decrease, while this proposed study considers preservation technology for deteriorating items. The main difference is that here we have used an advance payment scheme which was absent in [8]. After that, estimating the importance of PT on product degradation, the authors of [24] regarded two individual preservation rates and formulated an inventory model with selling price-dependent demand, whereas the authors of [7] formulated a carbon-emitting inventory model with consideration of PT for defective items. All the research on preservation technology and deteriorating items has mainly focused on optimal decisions regarding the efficient use of preservation technology but rarely considered any payment scheme, which has created a gap. It is often seen that some traders do business with a small amount of capital, so it becomes challenging for them to make large investments while purchasing products. An advance payment scheme offered by the supplier can help these traders to complete payment by paying a portion of the total purchase price in a few installments. The benefit of the supplier, in this case, is that there is no risk of cancellation of the order and at the same time they are able to build up the confidence of customers towards them, which plays a significant role in retaining customers. Considering this concept, we have tried to fill the research gap in our proposed study by projecting an advance payment scheme.
Inventory Model with Advance Payments and Shortages
Different payment systems have been used in inventory management over the years. Advance payment means that the retailer pays the supplier a certain portion of the purchase cost before receiving the products, which confirms the order and provides some relaxation in payments for the retailer. Considering advance payments, Teng et al. [25] considered an inventory model aimed at deteriorating items using expiration dates of products, while Taleizadeh et al. [11] advanced an inventory model with considerations of incremental discounts and shortages. The authors of [26] proposed a model with constant demand for decaying items under shortages and a partial advance payment and partial trade-credit policy, while Taleizadeh [27] considered a supply disruption scenario with an advance payment scheme and price-sensitive demand under a shortage. In [27], a lot sizing model with disruptions is illustrated to solve a real problem and some optimal decisions regarding inventory management were presented. In [28], a partial upstream and partial downstream advance payment scheme is presented for a partial back-ordering and a full back-ordering case for a single warehouse. A closed form solution is derived in [28] to show the advantages of advance payment. After that, the authors of [13] extended the single-warehouse consideration to a two-warehouse situation and provided a mathematical model with advance payments and a trade credit policy under shortages. However, in this paper, no advanced technology or strategy was used to curb deterioration. More related literature is detailed in Table 1.
Table 1. Related literature ('+' = present, '−' = absent):
[30]: − + − −
Taleizadeh [31]: + + − +
Khedlekar et al. [32]: − + + −
Shah and Vaghela [33]: − + − −
Tavakoli and Taleizadeh [34]: + + − +
Taleizadeh [27]: + + − +
Mishra et al. [2]: − + + −
Mashud et al. [35]: + + − −
Noori-daryan et al. [36]: − − − −
Soni and Suthar [37]: + + − −
R. Li et al. [38]: + + + +
Das et al. [39]: − + − −
This study: + + + +
Problem Description
The goal of all entrepreneurs is to maximize profits in today's competitive business world or to minimize the total cost of the chain. Retailing is a distribution system that structures a huge piece of the supply chain. Retailers purchasing products from suppliers and then selling them to customers is a natural process. Here is an explanation of how a retailer uses different customer management techniques and technologies to make a profit in business as well as retain the customer for a long time. After purchasing products from the supplier, the retailer stores them in his warehouse until they are sold. At this time, some products are perishable for a variety of reasons, so he uses preservation technologies for long-term preservation. The problem is to determine the costs for a whole cycle and the factors that influence whether costs go up or down.
Assumptions
The following assumptions are used in the development of the model:
• The demand for the product follows a constant pattern;
• Due to impatient customers, the demand during stockout is partially lost;
• The backlogged demand is satisfied with the arrival of the next lot;
• The products are deteriorating in nature;
• There is no replacement of deteriorated items;
• Preservation technology is applied to reduce the existing rate of deterioration. The reduced deterioration rate is a function of the preservation technology cost ξ, where x is the coefficient representing the efficiency of preservation technology and K is the highest reducible rate of deterioration (a small numerical sketch follows this list).
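The paper's displayed expression for the reduced deterioration rate is not reproduced in the extracted text above, so the sketch below uses one functional form that is common in the preservation-technology literature and is consistent with the roles of x and K described in the assumptions. The exact formula should be treated as an assumption, not the authors' equation.

```python
# Hedged sketch: a commonly used saturating form for the deterioration rate
# reduced by a preservation investment xi. The exact equation is assumed here.
import math

def reduced_deterioration_rate(theta, xi, x, K):
    """theta: original deterioration rate; xi: preservation cost per unit time;
    x: efficiency coefficient; K: highest reducible rate of deterioration."""
    m = K * (1.0 - math.exp(-x * xi))   # assumed reduction, bounded above by K
    return max(theta - m, 0.0)          # effective deterioration after preservation

if __name__ == "__main__":
    for xi in (0.0, 2.0, 6.79):          # 6.79 is close to the xi* reported in Example 3
        print(xi, round(reduced_deterioration_rate(theta=0.10, xi=xi, x=0.5, K=0.08), 4))
```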
Model Formulation
In this section, based on the advance payment strategy, there are two cases. Section 4.1 considers the case when advance payment is absent; Section 4.2 considers the case with advance payment.
Case I (Without Advance Payment)
In the beginning, the retailer orders the products from the supplier. The supplier then starts delivering the products during the lead time and the delivery of the total amount of ordered products is completed at t = 0. In this case, the supplier does not offer any advance payment opportunity to the retailer, so the retailer has to pay the entire purchase price at once after the full delivery of the ordered products. Initially, with full stock in hand, the retailer starts to sell products to the customers. Due to customer demand and deterioration, the inventory starts to decline over time and the stock runs out at t = t 1 (See Figure 1). As a result, some customers switch to other shops to purchase products, and the retailer sees a loss in demand. Here, the retailer focuses on preservation technology to preserve products for a long time with optimum investment to secure a good profit margin. The rate of change of inventory during the positive stock period [0, t 1 ] and shortage period [t 1 , T] is governed by the differential equations: With conditions: (1) and (2), and using the given conditions, we get: Using I 1 (0) = S Equation (3), the retailer's initial stock is obtained.
Using I 2 (T) = −R in Equation (4), the amount of shortage is obtained.
Thus, the total ordered amount per cycle is obtained (i.e., Q = S + R). Ordering cost: the retailer has to spend some money to process the order depending on the type of material, the quantity ordered, and the source of the supplier; let c_o be the ordering cost per cycle. Purchase cost: if the purchase price per unit of product is c_p, then the retailer's total purchase cost for Q units of product is c_p·Q. Holding cost: this is the cost of keeping the goods in the warehouse from the time they are received until all the products are sold; if the holding cost per unit per unit time is c_h, the total holding cost covers storing the products in the warehouse from time 0 to t_1. Shortage cost: when the demand for a product exceeds its supplied amount, a shortage occurs, which here starts at time t_1; if c_s is the shortage cost per unit, the total shortage cost accumulates over [t_1, T]. Lost sale cost: this refers to the cost associated with a situation in which the retailer loses sales opportunities because products are out of stock; c_l denotes the lost sale cost per unit. Preservation cost: this refers to the cost of investing in preservation technology to reduce product degradation; if ξ is the preservation technology cost per unit time, the total preservation cost accumulates at rate ξ over the cycle. Finally, the total cost is the summation of all of these costs, which gives the total cost per cycle. Proposition 1. The cost function TC(ξ, T) in Equation (15) is convex in T for any specific ξ > 0 and has a unique solution T*.
Proof. Differentiating the cost function TC(ξ, T) in Equation (15) with respect to T gives Equation (16). To evaluate the value of T, we set ∂TC/∂T = 0 and solve, with λ_2 = c_l·D·t_1. Ignoring the negative root, the required value of T is given in Equation (17). Differentiating Equation (16) with respect to T gives Equation (18); expanding Equation (18) in a Taylor series (similar to [36]) and then substituting T = T* gives Equation (19).
Lemma 1. The total cost function TC(ξ, T) in Equation (15) is strictly convex when:
Proof. Since all the parameters are assumed to be positive and the per unit purchase cost c_p is obviously greater than the per unit lost sale cost c_l, this condition holds. In that case, Equation (19) implies that ∂²TC/∂T² evaluated at T = T* is positive, and hence the sufficient condition for the convexity of TC(ξ, T) is satisfied.
Proposition 2. The cost function TC(ξ, T) in Equation (15) is convex in ξ for any specific T > 0 and has a unique solution ξ*.
Proof. Similar to the proof of Proposition 1.
Proposition 3. The cost function TC(ξ, T) in Equation (15) is convex in (ξ, T) and has a unique solution (ξ*, T*).
Proof. Let us define the cost function Equation (15) as follows: where According to Theorems 3.2.9 and 3.2.10 in [14], the fractional cost function in Equation (20) is strictly pseudo-convex if φ 1 (T) is non-negative, differentiable, and strictly convex, and φ 2 (T) is positive, differentiable, and concave. Now taking the first order derivative of φ 1 (T) with respect to T, we have: To find the value of T, place ∂TC ∂T = 0, which implies that: where ω = δ(c s t 1 +c l −c p )−c l c s δ . Now substituting the value of T in Equation (21), we get: which becomes the function of ξ. Let us take the first derivative of Equation (24) with respect to ξ, we get: Equate this to zero for finding the value of ξ = ξ * . Now differentiating Equation (25) with respect to ξ, we get: Then, after the submission of the value of ξ = ξ * in Equation (26), we simply write this as follows: where
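Since the closed-form expressions for TC(ξ, T) are not reproduced in the text above, a quick way to sanity-check Propositions 1-3 numerically is to evaluate the cost function on a grid and verify that second differences are positive. The sketch below treats the cost function as a user-supplied callable; the quadratic placeholder used in the demo is purely illustrative and is not the paper's Equation (15).

```python
# Numerical convexity check for a cost function TC(xi, T) supplied by the user.
# The placeholder `demo_tc` is NOT Equation (15); it only illustrates usage.
import numpy as np

def is_convex_in_T(tc, xi, T_grid):
    """Return True if the second finite difference of tc(xi, T) is positive
    everywhere on T_grid, i.e. the function looks convex in T for this xi."""
    vals = np.array([tc(xi, T) for T in T_grid])
    second_diff = vals[:-2] - 2.0 * vals[1:-1] + vals[2:]
    return bool(np.all(second_diff > 0.0))

def demo_tc(xi, T):
    # Illustrative convex placeholder only (fixed cost spread over T plus quadratics).
    return 50.0 / T + 20.0 * T + 3.0 * (xi - 5.0) ** 2 + 0.5 * xi * T

if __name__ == "__main__":
    T_grid = np.linspace(0.2, 3.0, 200)
    print(is_convex_in_T(demo_tc, xi=6.8, T_grid=T_grid))   # expect True
```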
Case II (With Advance Payment)
In this case, there is an option for the retailer to accept the offer of an advance payment scheme proposed by the supplier, so the retailer does not need to pay the entire purchase price at once during the full delivery of the ordered product. He pays β part of the total purchased price in N number of installments within the lead time L t , which is presented in Figure 2, and the remaining (1 − β) portion is to be paid at the time of receipt of the ordered products. The interest rate imposed on this pre-payment is τ. At t = 0, the retailer's warehouse becomes full of stock. After that, the inventory gradually starts to decline because of deterioration and customer demand and then finally becomes a vacuum at t = t 1 . Therefore, shortage of products is seen during [t 1 , T]. Capital cost: Then the total cost per cycle becomes as follows: Proposition 4. The cost function TC(ξ, T) in Equation (29) states the convexity in T for any specific ξ > 0 and entails a unique solution T * .
Proof. The proof of this proposition is similar to that of Proposition 1. Likewise, the cost function in Equation (29) is convex in ξ for any specific T > 0 and has a unique solution ξ*.
Proof. Similar to that of Proposition 3.
Numerical Illustrations
Some necessary data related to this model have been collected to validate the proposed model in real life. Profits are then numerically evaluated using those data which we have described in this section as examples. However, the total solution procedure is being visualized with the help of an algorithm.
Algorithm (For Case I)
Due to the high non-linearity of the cost function, a heuristic approach is presented in this section for Case I, in which one decision variable is optimized while the other is held fixed.
Step 1: Plug in all the associated values of the parameters.
Step 2: When the condition c_0 + c_s δD t_1^2 + (1 − δ)(c_p − c_l) D t_1 > 0 holds, there exists a T* for each cycle; if this condition is satisfied, proceed to Step 3; otherwise, proceed to Step 7.
Step 3:
Step 4: When T* satisfies the sufficient condition for the optimum indicated in Equation (19), then T = T* is the optimal outcome which minimizes Equation (15); if not, proceed to Step 7.
Step 6: The total cost is calculated from Equation (15), and T* is computed from Equation (17).
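A minimal sketch of the control flow of this heuristic is given below. Because Equations (15), (17) and (19) are not reproduced in the text, the functions total_cost_eq15, t_star_eq17 and second_derivative_eq19 are placeholders that the reader would replace with the paper's closed-form expressions; the Step 2 grouping follows the cleaned condition above and the parameter names follow the notation used earlier.

```python
# Skeleton of the Case I heuristic (Steps 2-6), assuming the closed forms of
# Equations (15), (17) and (19) are supplied by the caller as Python functions.
from dataclasses import dataclass

@dataclass
class Params:
    c0: float      # ordering cost per cycle
    cs: float      # shortage cost per unit
    cp: float      # purchase cost per unit
    cl: float      # lost sale cost per unit
    D: float       # constant demand rate
    t1: float      # time at which the stock runs out
    delta: float   # backlogging parameter
    xi: float      # preservation technology cost per unit time

def exists_T_star(p: Params) -> bool:
    # Step 2 existence condition (term grouping assumed, as noted in the text).
    return p.c0 + p.cs * p.delta * p.D * p.t1 ** 2 + (1 - p.delta) * (p.cp - p.cl) * p.D * p.t1 > 0

def solve_case1(p, total_cost_eq15, t_star_eq17, second_derivative_eq19):
    """Steps 2-6 of the Case I heuristic; Eq. (15), (17), (19) supplied by the caller."""
    if not exists_T_star(p):                       # Step 2
        return None                                # Step 7: stop, no interior optimum
    T_star = t_star_eq17(p)                        # assumed: T* computed from Eq. (17)
    if second_derivative_eq19(p, T_star) <= 0:     # Step 4: sufficient condition, Eq. (19)
        return None
    return T_star, total_cost_eq15(p, T_star)      # Step 6: report T* and the total cost

# Usage (hypothetical): solve_case1(Params(...), my_eq15, my_eq17, my_eq19)
```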
Case II (With Advance Payment)
Example 3. Consider the same input parameters as in Example 1 together with the following additional parameters for the model when an advance payment scheme is present: β = 0.4, L_t = 0.6, τ = 0.1, N = 4. Solving Equation (29) with the assistance of Lingo 17 software, the optimum values are: Q* = 72.081, T* = 0.802, ξ* = 6.793, and TC* = 1852.487. Figure 5 shows the convexity of the total cost function graphically. Figure 6 presents a comparison between the results of the model with advance payment and without advance payment. It can be seen that the costs incurred in each situation of case II are higher than in case I. This is because the retailer gets the benefit of paying in advance in a few installments, meaning that he does not have to pay the entire amount at once; in return, however, he has to pay a certain amount of interest, based on the number of installments, which adds to the total cost of the business. From Figure 6, we can see that the result of case II is 68.06% higher than case I when preservation technology is present. The situation with no investment in preservation technology displays a 71.93% increase in case II over case I. Overall, the highest cost is seen in both cases when preservation technology is not used, this being 2.83% higher in case II and 0.52% higher in case I. Figure 6. Comparison between the models with advance payment and without advance payment.
Sensitivity Analysis
The sensitivity analysis is performed in this section (Figures 7-13). The total costs, replenishment cycles, preservation technology costs, and lot sizes are derived when different related parameters vary from −30% to 30%. From Figure 7, the total cost, preservation technology cost and lot size are swelling steadily, owing to the augmentation of δ. On the other hand, the replenishment cycle is declining slowly, owing to the growth of δ. So, δ has both positive and negative impacts on the determined values. In reality, when a shortage is increased, the retailer does not need to hold the products. As a result, he can save some expenses which later decrease the cost to the retailer.
From Figure 8, we see that total cost, preservation technology cost, replenishment cycle and lot size are increasing gradually due to the rise of θ. As the deterioration of products always decreases the amount of stock which has some value, it increases total costs; it is also observed that when the rate of deterioration increases, the preservation technology cost correspondingly increases. Figure 9 shows that total cost, preservation technology cost, replenishment cycle, and lot size increase gradually due to the increment of t 1 . With the increase of initial time, the deterioration period increases, and consequently the cost of preservation technology rises. As the non-shortage time or initial time augmented the order quantity, this also amplified because the chance to obtain the products at the right time upsurged. Figure 10 shows that total cost, preservation technology cost, replenishment cycle, and lot size increase gradually due to the increment of c h . From Figure 10, it is clear that when per unit holding cost increases, the total cost for the retailer also increases. As holding cost increases, it also means that the retailer holds the products for more time than is usual and consequently increases the preservation technology cost. Figure 11 shows that total cost, preservation technology cost, replenishment cycle, and lot size increase gradually due to the increment of c p . From Figure 11, it can be noticed that any increase in purchase cost will increase the total cost, and as the purchase cost increases, the retailer sets the selling price high. As a result, the length of total cycle length also increases. However, the preservation technology investment also needs to be implemented for a longer time. Figure 12 shows that total cost and preservation technology cost are increasing gradually due to the increment of c s . However, the opposite can be noticed for the total order quantity. On the other hand, the replenishment cycle and lot size are shrinking due to the growth of c s . A significant impact is noticed for preservation technology investment, while a stable intensification is noticed for the replenishment cycle. Figure 13 shows that total cost and preservation technology cost are increasing gradually due to the increment of c l . Since the retailer is unable to satisfy some demand, as a result, the customers move to other sources, so some additional amounts are added to the total cost. On the other hand, replenishment cycle and lot size are decreasing due to the increment of c l .
Managerial Insights
Deterioration of products is always an essential issue in proper inventory management. An advance payment scheme and preservation technology provide some flexibility to the retailer in order to deal with customers to secure a good profit margin. This study provides some managerial insights for the practitioners as follows: (i) One can quickly know how much and for how long one will have to invest in preservation technology to reduce product deterioration. (ii) An advance payment system creates flexibility for the retailer to deal with customers efficiently, although the capital is slightly lower compared to the general case. Moreover, advance payment always requires the retailer to complete the purchase in time, as some parts of the purchase cost have already been deposited in the supplier's account. Thus, it sometimes helps to make a rigid decision in purchasing items from suppliers.
In addition, the simultaneous integration of preservation technology and an advance payment scheme will provide unique outputs in ordering decisions and logistics management.
Conclusions and Future Prospects
An inventory model for a retailer with constant demand under an advance payment policy has been proposed in this paper. To manage deterioration, a preservation technology has been successfully implemented which provides some managerial insights for the retailer. Preservation technology allows a lengthened product life and works successfully to curb deterioration. This model reveals some pricing strategies and gives a clear idea about how advertisement frequency can affect a retailer's profits. Under the intensification of retailer profit, advance payment has been successfully implemented, and showed a significant result that helps to reduce the default risk and cancellation of orders. Some significant results have been developed considering a simultaneous investment in preservation and an advance payment scheme with the effect of advertisement. Prior studies also provide some theoretical analysis to validate the model with a numerical sensitivity analysis of key parameters.
This model can be extended in numerous ways; for instance, one can develop this model by implementing a trade credit policy (single- [40] or two-level [3]). It will be an exciting extension if environmental [7] factors can be added to the proposed model. One might also include stochastic deterioration [41] and a multi-item deteriorating inventory in the model [42]. | 2022-03-16T15:31:11.599Z | 2022-03-11T00:00:00.000 | {
"year": 2022,
"sha1": "6f741c8be2163547019ee85ee86441bfbeb9e08e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9717/10/3/546/pdf?version=1647316000",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "80dbcd5925eae6187ac95db433a5178a7f92c033",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
229456603 | pes2o/s2orc | v3-fos-license | THE RELATIONSHIP BETWEEN OIL PRICES AND REAL ESTATE LOANS AND MORTGAGE LOANS IN AZERBAIJAN
Azerbaijan is a major exporter of natural resources (oil). Improving the welfare of the population is a priority, as the driving force of the modern economy, including future economic progress, is the human factor: human capital, its science, knowledge and ability to use technology. Thus, at the current stage of Azerbaijan's economic development, social welfare, including housing, is one of the most important indicators of the sustainability of dynamic socio-economic development in the country in the long run. For this reason, the study of directing part of the oil capital to mortgage loans and real estate is urgent. Taking into account the dependence of oil revenues on world oil prices, the article examines the relationship between world oil prices and mortgage and real estate loans in the Republic of Azerbaijan over the past 10 years (2010M01−2020M01). The ARDL model was used as the research model. In addition, stationarity tests of the variables (ADF, PP, KPSS) were performed and the Engle-Granger cointegration equation was evaluated using FMOLS, DOLS and CCR. The stability of the models was studied. EViews 9 econometric software was used for calculations and graphing. As a result of the analysis, it was determined that there is a certain positive correlation between world oil prices, mortgage loans and real estate loans. Our recommendation is to accelerate the transfer of part of oil revenues to mortgage loans and real estate to improve housing.
INTRODUCTION
Increasing housing opportunities for the Azerbaijani population will support economic development, further improve living standards, economic recovery and job creation, as well as further development of the mortgage and real estate markets. The formation and development of the mortgage market in modern times is the main direction of the social policy of each state. At the same time, the real estate market is one of the important indicators of the economy. In all countries, the construction sector is the most sensitive sector of the economy. For example, during the crisis, the negative situation first affects this area, and the real estate market begins to experience certain problems. However, the construction sector can show very good dynamics during the development period.
It is known that when oil prices fall, activity decreases in almost all sectors of the economy, and there is a serious stagnation in terms of supply and demand. This also applies to the real estate market (Hasanov et al., 2019). Thus, the construction boom in all oil-exporting countries usually occurs at a time when world market prices for "black gold" are high. When the price of oil falls, construction work weakens and supply in this area immediately decreases. It is a fact that real estate markets around the world become inactive as economic activity related to oil declines. This situation also applies to our country. It is known that the Azerbaijan Mortgage Fund (AMF) operating under the Central Bank of Azerbaijan (CBA) provides ordinary mortgage loans with a maximum amount of 50,000 manat for a period of 25 years at 8 percent per year, with an initial payment of 20 percent, and a social (preferential) mortgage of 50,000 manat for 30 years at 4 percent per year, with an initial payment of 15 percent. We are talking about a mortgage issued by the state. Concessional mortgage loans are financed from the state budget, while ordinary mortgages are repaid at the expense of the AMF, in other words, at the expense of funds raised through the issuance and placement of relevant bonds.
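As a quick worked illustration of the loan terms just described, the standard annuity formula gives the monthly installment for each product. This is only a back-of-the-envelope sketch; the reading of the rates and down payments follows the cleaned sentence above, and the AMF's actual amortization rules may differ.

```python
# Hedged illustration: monthly payment on an annuity mortgage.
# Terms follow the text above (8%/25y ordinary, 4%/30y social); the exact AMF
# repayment schedule is an assumption for this example.
def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12.0
    n = years * 12
    return principal * r / (1.0 - (1.0 + r) ** (-n))

if __name__ == "__main__":
    ordinary = monthly_payment(50_000 * (1 - 0.20), 0.08, 25)   # 20% down payment
    social = monthly_payment(50_000 * (1 - 0.15), 0.04, 30)     # 15% down payment
    print(f"ordinary mortgage: {ordinary:,.2f} manat/month")
    print(f"social mortgage:   {social:,.2f} manat/month")
```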
Mortgage lending in the country plays an important role in shaping prices in the housing market. In fact, the mortgage should serve the development of the construction sector and the state should control the real estate market through it. The construction sector in Azerbaijan is closely linked to the oil sector. Even the private construction sector worked at the expense of such budget money, and more funds diverted from investment projects were directed to construction. However, the peculiarity of the Azerbaijani economy is that it has no deep connection to the global financial and stock markets. In this sense, it is healthier and free from inflated price increases and "bubbles" .
Despite the threat of protectionism, the world is now returning to strict state control and economic planning. The Azerbaijani government, feeling that difficult times were about to begin, never let go of the steering wheel and tried to manually adjust macroeconomic balances from internal and external influences. In particular, whether at a time when export oil revenues are increasing, or when there are various restrictions on the distribution of budget funds within Azerbaijan, both between sectors of the economy and between regions. There is a certain correlation between rising oil revenues and sustainable economic growth (Muradov et al., 2019;Humbatova and Hajiyev, 2019). In general, in our opinion, although the current global financial and credit crisis has been analyzed in all cases, there is no denying that it will affect Azerbaijan. The most important proof of this is the report on the funds lost due to falling oil prices (Aliyev et al., 2019).
In modern times, the fall in oil prices due to the spread of the pandemic around the world and its impact on the economy has already begun to have its say in the market. In other words, there is a risk of repeating the scenarios that occurred during the economic crises of 2008 and 2014. The crisis of 2014 was marked by two sharp devaluations. However, the structure of the country's economy is different from 2008. At present, the role of the non-oil sector in the economy is greater, the volume of GDP, slightly different movements of oil prices and so on. Available (Humbatova et al., 2020). The special quarantine regime applied in the country has affected all areas, as well as the real estate market.
The decrease in oil prices in 2014-2015 had a negative impact on the macroeconomic performance of oil-exporting countries and their banking systems . Although the macroeconomic consequences of lower oil prices for oil-exporting countries have been well studied, the impact of oil prices on financial stability and the banking system has not received much attention (Jesus and Gabriel, 2006).
Since the 1970s, there have been several episodes of sharply rising oil prices: 1973/74 (the Arab oil embargo), 1979/80 (the Iranian revolution), 1990 (the occupation of Kuwait), after 1999, and from the middle of 2003 until the Global Financial Crisis of 2008/09. Marked declines in oil prices have also been observed: in the early and mid-1980s, in 1991, after the Asian financial crisis, and in late 2008 (Barsky and Kilian, 2002; 2004; Kilian, 2008).
Oil Prices and Key Macroeconomic Indicators
Oil is an important source of energy, important transport fuel and invaluable raw material in many industries. In addition, it has become the main object of international trade in the world (Bass, 2018). In general, there are three main reasons for changes in oil prices: oil demand, oil supply and speculation (Brevik and Kind, 2004).
Since the beginning of the twentieth century, the growth of demand for oil has been influenced by economic growth in the United States and the rapidly growing economies of Asian countries, especially China and India (Cleaver, 2007). Global shocks of aggregate demand in the global crude oil market have increased significantly in recent years Kilian, 2010).
OPEC and contracts (OPEC +, OPEC ++) try to control energy supply and prices, manipulate resources and production. The diversity of stakeholders, such as oil companies, speculators and refineries, brings additional dynamics to the market. World events such as wars, revolutions and embargoes often affect crude oil prices. Based on these observations, it can be concluded that the price of crude oil has changed widely and chaotically (Alvarez-Ramirez et al., 2002).
At the same time, oil prices depend not only on supply and demand, but also on speculation and hedging, which lead to irrational changes in oil prices (Krichene, 2006;Federico et al., 2001;Eckaus, 2008).
Monetary Policy and Oil Prices
Monetary policy shocks do not necessarily occur in isolation from other shocks, and in some cases they respond to oil price shocks (Bernanke et al., 1997;Islam and Chowdhury, 2004;Islam and Watanapalachaikul, 2005;Hamilton and Herrera, 2004;Ozturk et al., 2008;Burakov, 2017;Omojolaibi, 2013;Kormilitsina, 2011). A decrease in the money supply can lead to a decrease in energy prices (Hammoudeh et al., 2015;Jawadi et al., 2016;Askari and Krichene, 2010;Hamilton, 2009;Ratti and Vespignani, 2014;Taghizadeh and Yoshino, 2013a;Taghizadeh and Yoshino, 2013b). Changes in monetary policy regimes were a major factor in the rise in oil prices in the 1970s (Barsky and Kilian, 2002;Kilian and Hicks, 2009). Between 1960 and 1980, demand for oil was severely affected by monetary policy regimes (Taghizadeh and Yoshino, 2014).
Monetary Policy and Property (Housing) Market
Although housing is generally one of the largest assets in a family's balance sheet, it has received limited attention (Emmons and Ricketts, 2017). However, there is little fundamental research. For example, Mian and Sufi (2009) Arslan et al. (2015) showed the importance of monetary policy in financing housing construction and regulating housing prices. A number of regional, national, and international studies have examined the relationship between the dynamics of the housing sector and changes in various indicators of real economic activity (Ismail and Suhardjo, 2001;Leung, 2004;Tsatsaronis and Zhu, 2004;Ceron and Suarez, 2006;Dufrénot and Malik, 2012;Poghosyan, 2016;Hiebert and Rome, 2010;Gattini and Hiebert, 2010). Other studies have examined the relationship between the housing market and financial relations (Englund and Ioannides, 1997;Loungani, 2010, Igan et al., 2011Anundsen et al., 2016;Rajan, 2005). Thus, monetary policy affects the profitability of the housing market (Chang et al., 2011) and a temporary decrease in risk-free interest rates may have a moderate or strong impact on housing prices (Arslan 2014(Arslan , 2015. Sá and Wieladek (2015) also claim that lower interest rates and capital inflows are associated with higher housing prices. Thus, monetary policy measures can have a strong impact on housing prices. Thus, since the global financial crisis, the link between the housing market and macroeconomic variables has weakened, and the link between the housing sector and financial variables has strengthened (Leung and Ng, 2018).
A number of researchers, such as Lastrapes (2002), Aoki et al. (2002), and Elbourne (2008), have focused on assessing the impact of money shocks on the housing sector. In addition, the level of inflation to increase housing prices; cost and average rate of mortgage loan; The impact of labor force growth, investment, trends and the growth rate of oil prices were studied. The choice of these variables has been studied in a number of studies on the determinants of housing prices in developing and developed countries (Piazzesi and Schneider, 2009;Glindro et al., 2011;Adams and Fuss, 2010;Geraint and Hyclack, 1999;Islam and Watanapalachaikul, 2005).
Based on a discrete-time Weibull model, Agnello et al. (2018a, 2018b) showed that different phases of the housing market cycle are strongly dependent on real GDP growth. Kannan et al. (2012) examined the potential interactions between monetary policy and housing finance regulation; Agnello et al. (2020) and Carbó-Valverde and Rodriguez-Fernandez (2010) studied the housing and mortgage market; and Bernanke et al. (1997) proposed a method of effective management of imbalances that create financial stability risks. Yoshino and Taghizadeh-Hesary (2016) examined how monetary policy affected crude oil prices after the mortgage crisis. Chen et al. (2014) showed that inflation and interest rates are the most reliable determinants of housing prices. Balke et al. (2002) and Dodson and Sipe (2008) examined the impact of oil shocks on monetary policy and, consequently, on housing prices and incomes.
In addition, Krichene (2006) shows that the relationship between oil prices and interest rates has two sides to supply shocks: rising oil prices lead to higher interest rates, whereas lower oil prices lead to lower interest rates as demand increases.
Previous research has shown that falling housing prices and jumping oil prices generally go hand in hand with the likelihood of an economic downturn. Hamilton (2011) also argues that the link between housing price regulation and energy price volatility is strengthened during the Great Recession. Leamer (2007) argues that although the housing sector is a relatively small part of GDP, it plays an important role in recession. Gunarto et al. (2020) accept that oil prices and their uncertainty have a significant impact on overall economic activity. Jones (1999), Gentry (1994), Medlock and Soligo (2001), Liddle (2013) and Claudy and Michelsen (2016) argue that over time, oil prices and their uncertainty affect energy consumption and urbanization.
Researchers describe the impact of oil prices on housing prices as follows:
• Rising energy prices affect the income and expenditures of the population: they increase unemployment, reduce the purchasing power of oil-importing countries in favour of oil exporters and reduce incomes, which can have a detrimental effect on housing demand. Spencer et al. (2012) and Kaufmann et al. (2011) also found a correlation between the population's energy expenditure and the level of overdue mortgage debt;
• Rising energy prices can affect the production and operation of equipment, consumption of raw materials, construction costs, housing and communal services, and the number and price of houses (Quigley, 1984; Swan and Ugursal, 2009);
• Rising energy prices affect the overall inflation rate and may lead to tighter monetary policy, reduced liquidity and lower housing demand (Edelstein and Kilian, 2009);
• Rising energy prices increase the attractiveness of oil and energy companies, which can lead to the withdrawal of capital from the housing market (Caballero et al., 2008; El-Gamal and Jaffe, 2010; Basu and Gavin, 2010);
• Rising energy prices may affect the joint dynamics of housing prices with a significant increase in commodity prices (Batten et al., 2010; Belke et al., 2010; Frankel, 2014; Hammoudeh and Yuan, 2008; Ratti and Vespignani, 2014);
• Rising energy prices can lead to the devaluation of the national currency and increased foreign demand for local property (Chiquier and Lea, 2009).
Data Descriptions
Data on lending for real estate construction and on mortgages are obtained from the Central Bank of Azerbaijan. Brent oil prices are obtained from the U.S. Energy Information Administration database. The data used in the analysis are at monthly frequency, covering the period January 2010-January 2020. The series are summarized in Table 1 and shown in Figure 1; descriptive statistics are given in Table 2.
Methodology
The econometric tools are used to identify short-term and long-term dependencies in the assessments. Several estimation methods were applied to verify the reliability of the results: the autoregressive distributed lags bounds-testing approach (ARDLBT), the Engle-Granger cointegration test, fully modified ordinary least squares (FMOLS), dynamic ordinary least squares (DOLS) and canonical cointegrating regression (CCR).
Unit Root Test
Before evaluating regression equations, it is important to check the stationarity of the variables using unit root tests. This is because the stationarity properties of the time series matter when estimating the relationship between two or more variables using regression analysis. In most methods, the existence and evaluation of a long-run or cointegration relationship requires that the variables be non-stationary in levels and that their first-order differences be stationary, i.e., that the variables be I(1). Note that a time series variable is I(0) if it is stationary in levels. If a variable is not I(0), its first difference is calculated and checked for stationarity; if the first difference is stationary, the variable is I(1). I(0) and I(1) thus indicate the order of integration of the series and are determined by unit root tests. The article uses three different unit root tests to ensure the reliability of the stationarity results: Augmented Dickey-Fuller (ADF) (Dickey and Fuller, 1981), Phillips-Perron (PP) (Phillips and Perron, 1988) and Kwiatkowski-Phillips-Schmidt-Shin (KPSS) (Kwiatkowski et al., 1992).
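The paper runs these tests in EViews 9; for readers who prefer an open-source route, a rough equivalent with statsmodels is sketched below. The ADF and KPSS tests ship with statsmodels, while a Phillips-Perron implementation is available in the separate arch package (arch.unitroot.PhillipsPerron). The file and column names are placeholders, not the paper's data files.

```python
# Sketch: ADF and KPSS unit root tests with statsmodels; series names assumed.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, kpss

def unit_root_report(series: pd.Series, name: str) -> None:
    for spec, label in (("c", "intercept only"), ("ct", "intercept and trend")):
        adf_stat, adf_p, *_ = adfuller(series.dropna(), regression=spec, autolag="AIC")
        kpss_stat, kpss_p, *_ = kpss(series.dropna(), regression=spec, nlags="auto")
        print(f"{name} [{label}]  ADF={adf_stat:.3f} (p={adf_p:.3f})  "
              f"KPSS={kpss_stat:.3f} (p={kpss_p:.3f})")

if __name__ == "__main__":
    df = pd.read_csv("azerbaijan_monthly.csv", parse_dates=["date"], index_col="date")
    lcrem = np.log(df["loans_real_estate_mortgage"])   # Ln(LCREM), placeholder column
    lop = np.log(df["brent_oil_price"])                # Ln(OP), placeholder column
    for nm, s in (("LLCREM", lcrem), ("LOP", lop)):
        unit_root_report(s, nm)                 # levels
        unit_root_report(s.diff(), "D " + nm)   # first differences
```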
Autoregressive Distributed Lags Bounds Testing
ARDLBT is a cointegration method developed by Pesaran et al. (2001). This approach has many advantages over earlier alternative cointegration methods. First of all, when the sample is relatively small, this approach gives more reliable results and can easily be estimated using ordinary least squares (OLS). The ARDLBT approach does not suffer from the endogeneity problem, one of the main issues to be considered in econometric modelling, and both short-run and long-run coefficients can be estimated within one model. In the ARDLBT cointegration approach, the calculations can be performed regardless of whether the variables are all I(0), all I(1) or a mixture of the two. Estimation using ARDLBT is carried out in the following stages:
1. An unrestricted error correction model (ECM) is constructed for the two-variable case, where y is the dependent variable and x is the independent (explanatory) variable; β_0 denotes the intercept of the model and μ_t the white noise error, the θ_i represent the long-run coefficients and the coefficients on the differenced terms represent the short-run dynamics. Selecting the most appropriate lag length and meeting the required conditions of the ECM are issues to be considered when setting up the model. One of the most important conditions is the absence of autocorrelation (serial correlation) in the ECM that will be used at the next stage. The optimal lag length is then determined according to the Akaike or Schwarz criteria among the ECMs that do not have this problem.
2. After the ECM is set up, the ARDLBT approach checks whether there is a cointegrating relationship among the variables. To do this, a Wald test (or F-test) is applied to the long-run coefficients θ_i mentioned above, and the null hypothesis of no cointegration, H_0: θ_0 = θ_1 = ... = θ_i = 0, is tested against the alternative H_1: θ_0 ≠ 0, θ_1 ≠ 0, ..., θ_i ≠ 0. If a cointegrating relationship among the variables is found, its stability is checked. If the coefficient on y_{t−1} is statistically significant and negative, the cointegrating relationship is said to be stable. This means that deviations from the equilibrium (long-run relationship) that occur in the short run are temporary and are corrected over time towards the long-run relationship. Note that this coefficient is expected to lie between −1 and 0. If the cointegrating relationship between the variables is confirmed, the long-run coefficients can be estimated at the next stage. To do this, the long-run part of Equation (1) is set equal to zero (β_0 + θ_0·y_{t−1} + θ_1·x_{t−1} = 0), the equation is solved for y, and the long-run coefficients are calculated accordingly.
3. The long-run residual (ect_t) is calculated and included in the model in place of the part containing the long-run coefficients. If its coefficient δ lies between −1 and 0 and is statistically significant, this means that the cointegrating relationship is stable. As mentioned above, short-run deviations will be corrected towards the long run. If there is no serious computational error, the δ coefficient takes the same, or a very close, value to θ in Equation (1).
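Recent versions of statsmodels (0.13 and later) expose the ARDL/UECM machinery directly, so the bounds-testing procedure described above can be reproduced roughly as follows. Variable names, maximum lags and the bounds-test case are illustrative choices, not the paper's exact EViews specification.

```python
# Sketch of the ARDL bounds-testing workflow with statsmodels >= 0.13.
# Column names and maximum lags are placeholders, not the paper's settings.
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import ardl_select_order, UECM

df = pd.read_csv("azerbaijan_monthly.csv", parse_dates=["date"], index_col="date")
y = np.log(df["loans_real_estate_mortgage"])   # Ln(LCREM)
x = np.log(df[["brent_oil_price"]])            # Ln(OP)

# 1. Choose the ARDL lag orders by an information criterion (AIC here).
sel = ardl_select_order(y, maxlag=6, exog=x, maxorder=6, trend="c", ic="aic")
ardl_res = sel.model.fit()
print(sel.model.ardl_order)

# 2. Re-estimate as an unrestricted ECM and run the Pesaran et al. (2001) bounds test.
uecm_res = UECM.from_ardl(sel.model).fit()
bounds = uecm_res.bounds_test(case=3)          # unrestricted intercept, no trend
print(bounds)                                   # F-statistic vs. I(0)/I(1) bounds

# 3. Long-run coefficients and the error-correction (speed of adjustment) term
#    can be read from the UECM summary.
print(uecm_res.summary())
```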
Engle-Granger Cointegration Test
One of the methods used to check the cointegration relationship between variables is the Engle-Granger (EG) cointegration test (Engle and Granger, 1987). This test can be used to check for a long-run relationship. Through the EG cointegration test, it is also possible to determine the direction of the relationship between the variables and to investigate the short-run relationship. The EG cointegration test consists of the following steps: 1. The regression equation is estimated for variables that are not stationary in levels but are stationary after differencing to the same degree (usually I(1)). For the simplest case with two variables, a_0 and a_1 represent the regression coefficients to be estimated, y and x represent the dependent and free variables, respectively, ε_t the white noise error, and t time. 2. The stationarity of the white noise error is checked. If ε_t is stationary, there is a cointegration relationship between these variables. Based on this, the estimated Equation (4) is considered to be the long-run equation. 3. The ECM is estimated using the stationary (differenced) variables and the one-period-lagged residual (ε_{t−1}) to check the strength and direction of the cause-and-effect relationship between the variables, in other words, the dependence. Here ρ_0, τ, φ_i and σ_i represent the coefficients, q is the optimal lag length, ω is the white noise error of the model, and i = 1, ..., q. To determine the optimal lag length, the relationship between the variables is first evaluated in a vector autoregressive (VAR) model.
Equation (5) is then estimated using the least squares method (LSM), taking into account the optimal lag length. Engle and Granger (1987) show that if there is cointegration between the variables, this dependence should also be evaluated through the ECM. If the cointegration relationship is stable, the coefficient of the error correction term (ECT), i.e., (e_{t−1}), should be negative and statistically significant; it usually takes a value in the range between −1 and 0. Using Equation (5), the following cause-and-effect relationships can be tested.
Granger cause-and-effect relationship for the short term
For each free variable, F or χ² statistics are used to test the joint significance of all lagged first differences ∆x_{t−i} (H_0: σ_1 = σ_2 = ... = σ_i = 0 against H_1: σ_1 ≠ 0, σ_2 ≠ 0, ..., σ_i ≠ 0, i = 1, ..., q). Rejection of the null hypothesis indicates that x has an effect on y in the short run.
Granger cause-and-effect relationship for the long term
To test this relationship, the statistical significance of the coefficient on e_{t−1} is checked with a t-test. To do this, the null hypothesis is tested (H_0: τ = 0 against H_1: τ ≠ 0). If the null hypothesis is rejected, this shows that, in the long run, deviations from the equilibrium state have an effect on the dependent variable and the system returns to the equilibrium state over time.
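The two-step procedure above maps directly onto a few lines of statsmodels code: estimate the static long-run regression, test the residual for stationarity (or use the built-in coint function), and then fit the ECM with the lagged residual. The sketch below is illustrative only; the trend specification and lag length should mirror the paper's EViews setup, and the column names are placeholders.

```python
# Sketch: Engle-Granger two-step cointegration test and ECM, statsmodels only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller, coint

df = pd.read_csv("azerbaijan_monthly.csv", parse_dates=["date"], index_col="date")
y = np.log(df["loans_real_estate_mortgage"])   # placeholder column names
x = np.log(df["brent_oil_price"])

# Step 1: long-run (static) regression y_t = a0 + a1*x_t + eps_t
longrun = sm.OLS(y, sm.add_constant(x)).fit()
resid = longrun.resid

# Step 2: the residual should be stationary if y and x are cointegrated.
print("ADF on residuals:", adfuller(resid, regression="n")[0:2])
print("Engle-Granger coint test:", coint(y, x, trend="c")[0:2])

# Step 3: error correction model with the one-period-lagged residual (ECT).
ecm_data = pd.DataFrame({
    "dy": y.diff(), "dx": x.diff(), "ect": resid.shift(1),
}).dropna()
ecm = sm.OLS(ecm_data["dy"], sm.add_constant(ecm_data[["dx", "ect"]])).fit()
print(ecm.params)   # the 'ect' coefficient should be negative and significant
```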
FMOLS, DOLS and CCR
The fully modified ordinary least squares method (FMOLS) proposed by Phillips and Hansen (1990), the dynamic ordinary least squares method (DOLS) proposed by Stock and Watson (1993) and the canonical cointegrating regression (CCR) developed by Park (1992) are alternative cointegration estimators. Note that the Phillips-Ouliaris (1997) and Engle-Granger cointegration tests were used to test for cointegration in all regression equations evaluated using FMOLS, DOLS, and CCR.
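statsmodels does not ship an FMOLS or CCR estimator, but DOLS is easy to sketch by augmenting the static regression with leads and lags of the differenced regressor and using HAC standard errors. The lead/lag window below is an arbitrary choice for illustration, and the data file is a placeholder.

```python
# Sketch: dynamic OLS (Stock-Watson DOLS) with p leads and lags of dx,
# estimated by OLS with Newey-West (HAC) standard errors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def dols(y: pd.Series, x: pd.Series, p: int = 2):
    data = pd.DataFrame({"y": y, "x": x})
    dx = x.diff()
    for k in range(-p, p + 1):                      # leads (k < 0) and lags (k > 0) of dx
        data[f"dx_{k}"] = dx.shift(k)
    data = data.dropna()
    regressors = sm.add_constant(data.drop(columns="y"))
    return sm.OLS(data["y"], regressors).fit(cov_type="HAC", cov_kwds={"maxlags": p})

if __name__ == "__main__":
    df = pd.read_csv("azerbaijan_monthly.csv", parse_dates=["date"], index_col="date")
    res = dols(np.log(df["loans_real_estate_mortgage"]), np.log(df["brent_oil_price"]))
    print(res.params["x"])   # long-run elasticity of loans with respect to oil prices
```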
Diagnostics
When conducting econometric analyses, it is important to check whether the models suffer from serial correlation or heteroskedasticity and whether the white noise error is normally distributed. When estimating with the FMOLS, DOLS, and CCR methods, serial correlation and heteroskedasticity problems are corrected automatically. However, for the ARDLBT cointegration approach, it is important to perform all of these tests when evaluating the ECMs. Here, the Breusch-Godfrey LM test (null hypothesis: "no serial correlation") is used to test for serial correlation, the Breusch-Pagan-Godfrey test and the autoregressive conditional heteroskedasticity (ARCH) test (null hypothesis: "no heteroskedasticity problem") are used to test for heteroskedasticity, and the Ramsey RESET test is used to check the model specification. In all cases, it is desirable not to reject the null hypothesis. The Jarque-Bera test is used to check the normal distribution of the white noise error; the null hypothesis of this test is that "the white noise error is normally distributed."
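All of the residual diagnostics named here have direct counterparts in statsmodels, so a quick cross-check of an estimated ECM could look like the sketch below. The fitted object res is assumed to be a statsmodels OLS result for the ECM, and the lag choices are illustrative.

```python
# Sketch: residual diagnostics for a fitted OLS/ECM result `res` (statsmodels).
from statsmodels.stats.diagnostic import (
    acorr_breusch_godfrey, het_breuschpagan, het_arch, linear_reset)
from statsmodels.stats.stattools import durbin_watson, jarque_bera

def diagnostics(res, bg_lags: int = 4, arch_lags: int = 4) -> dict:
    resid = res.resid
    return {
        "durbin_watson": durbin_watson(resid),
        "breusch_godfrey_p": acorr_breusch_godfrey(res, nlags=bg_lags)[1],
        "breusch_pagan_p": het_breuschpagan(resid, res.model.exog)[1],
        "arch_p": het_arch(resid, nlags=arch_lags)[1],
        "ramsey_reset_p": float(linear_reset(res, power=2, use_f=True).pvalue),
        "jarque_bera_p": jarque_bera(resid)[1],
    }

# Usage (hypothetical): print(diagnostics(ecm))  # p > 0.05 everywhere is desirable
```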
RESULTS AND DISCUSSION
The initial static expression of the econometric models to be evaluated is as follows: Ln(LCREM)_t = β_0 + β_1·Ln(OP)_t + ϵ_t. Here the β are the regression coefficients, t denotes time and ϵ_t is the white noise error. Ln(LCREM)_t is the (log) amount of loans allocated for the construction and purchase of real estate, including mortgage loans. The key research factor here is the β_1 coefficient. Taking into account the important role of oil in the Azerbaijani economy, (log) oil prices Ln(OP) were taken as the second variable.
Results of Unit Root Tests
As noted above, it is important to check the stationarity of the variables before conducting a model evaluation. Table 3 shows the results of the ADF, PP and KPSS unit root tests obtained without a trend and with a trend added.
The variable LLCREM is I(0) based on all three tests (ADF, PP, and KPSS) in the "intercept only" specification, and I(1) in the "intercept and trend" and "no intercept, no trend" specifications. The LOP variable is I(0) only according to the KPSS test with "intercept only," and I(1) in the "intercept and trend" and "no intercept, no trend" specifications. This result is suitable for the subsequent assessments and for all of the methods to be used: based on the ADF, PP, and KPSS results, the variables are a mixture of I(0) and I(1), which means that all of the above methods can be applied. As mentioned above, one of the key issues in building a model with the ARDLBT cointegration method is determining the optimal lag length.
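For reference, the ADF and KPSS tests reported in Table 3 can be reproduced in Python as sketched below (the Phillips-Perron test is not part of statsmodels and would require, e.g., the arch package); the data frame and series names are hypothetical.

```python
from statsmodels.tsa.stattools import adfuller, kpss

def unit_root_summary(series, name):
    # ADF: H0 "the series has a unit root" (non-stationary)
    adf_stat, adf_p, *_ = adfuller(series.dropna(), regression="c")
    # KPSS: H0 "the series is stationary" (note the reversed null)
    kpss_stat, kpss_p, *_ = kpss(series.dropna(), regression="c", nlags="auto")
    print(f"{name}: ADF p = {adf_p:.3f}, KPSS p = {kpss_p:.3f}")

unit_root_summary(df["lny"], "LLCREM (levels)")
unit_root_summary(df["lny"].diff(), "LLCREM (first differences)")
```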
VAR Lag Order Selection Criteria
The optimal lag is found using the VAR method (Table 4). The long-run results show that a 1% increase (decrease) in oil prices decreases (increases) the volume of mortgage loans: model 1 (0.51%), model 2 (0.52%), model 3 (0.36%), model 4 (0.37%), model 7 (0.51%), model 8 (0.33%), model 9 (0.46%), and model 10 (0.33%) show such a decrease (increase), and a positive long-run effect is not expected in theory for these models. By contrast (Table 6), model 5 (215%), model 6 (212%), model 11 (218%), and model 12 (206%) show an increase (decrease), and a positive long-run effect is expected in theory for these models.
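A sketch of the lag-length selection step with statsmodels is shown below; the maximum lag of 8 and the data frame name are illustrative.

```python
from statsmodels.tsa.api import VAR

# data: DataFrame holding the model's series (e.g., Ln(LCREM) and Ln(OP))
selection = VAR(data.dropna()).select_order(maxlags=8)
print(selection.summary())          # AIC, BIC, FPE, and HQIC for each lag
print(selection.selected_orders)    # lag chosen by each criterion
```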
ARDL Bounds Test, Long Run and Short Run Results
As can be seen, all the prerequisites required for the model are met. The coefficient of the lagged dependent variable, entered into the model as an explanatory variable with a one-period lag, is negative and statistically significant in model 1 (5%), model 2 (5%), model 3 (5%), model 4 (5%), model 5 (5%), model 6 (5%), model 11 (5%), model 12 (5%), model 7 (1%), model 8 (1%), model 9 (1%), and model 10 (1%). The same can be said about the results of the stability test in the estimated cointegration equations. Stability here refers to the rate at which short-run deviations from equilibrium are corrected toward the long-run equilibrium, determined on the basis of the error correction term. The fact that the relevant coefficients are negative and statistically significant in model 1 (5%), model 2 (5%), model 5 (10%), model 6 (10%), model 7 (1%), model 8 (5%), model 11 (10%), and model 12 (5%) supports the idea that the cointegration relationship is stable in these models: in the short run, deviations are corrected over time and converge to the long-run equilibrium relationship.
Diagnostic Tests Results
The regression equations are also adequate, as all diagnostic tests for serial correlation (Durbin-Watson test and Breusch-Godfrey test) and heteroskedasticity (ARCH heteroskedasticity test and Breusch-Pagan-Godfrey heteroskedasticity test) give satisfactory results, and according to the Ramsey RESET test the models are well specified. All results of these tests are given in Table 7. Table 7 also shows the results of the CUSUM and CUSUMSQ tests. These results indicate that some coefficients are unstable, because the plots of the CUSUM and CUSUMSQ statistics do not lie entirely inside the critical bands at the 5% significance level of parameter stability. It should be noted that the conditions required in the models for testing the stability of the cointegration relationship were also tested, and, as can be seen, the models meet all of these conditions. Diagnostic testing of the white noise errors in all models gives a positive result: none of the models suffers from serial correlation or heteroskedasticity, and the regression standard errors are small.
Analysis of FMOLS, DOLS, CCR and Engle-Granger Analysis Results
The other estimation methods employed — the FMOLS, DOLS, and CCR cointegration methods — together with the Engle-Granger analysis are very useful in our study (Table 8), because cross-checking the results obtained with the ARDLBT cointegration approach against these methods allows for a more reliable analysis.
Another feature indicating a cointegration relationship between the variables is that the white noise errors obtained from the estimates are stationary. Table 9 shows the results of applying the ADF, PP, and KPSS unit root tests to the white noise errors of each long-run equation estimated by FMOLS, DOLS, and CCR. In general, the white noise errors are stationary, although this appears most clearly in the first three equations. Based on these results, the stationarity of the white noise errors in all models once again confirms the existence of a cointegration relationship. However, this result does not support the results of the Engle-Granger and Phillips-Ouliaris cointegration tests given above.
Short-run and long-run cause-and-effect relationships can be analyzed more clearly using Granger causality within the Engle-Granger cointegration framework. Table 9 presents the results of the analysis of the impact of oil prices on mortgage and real estate loans in the short and long run. In the short run, no significant effect is found.
To be more precise, the results obtained are statistically insignificant. However, it has been confirmed that there is a long-term relationship and a strong cause-and-effect relationship between the variables.
CONCLUSION AND POLICY IMPLICATIONS
The proposal is to use SOFAZ's funds to diversify AMF's financial sources and increase its opportunities, to involve insurance funds in financing mortgage lending, and to ensure the inflow of private investment into this field by increasing activity in the securities market. In addition, construction savings banks should be established so that, in this way, the passive savings of the population can be attracted to the mortgage market. A standard form of contract concluded by construction companies during purchase and sale should also be created in order to attract new buildings to the mortgage market. | 2020-12-03T09:04:48.893Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "526520ac8e6e062ad83de92fe6e76cfece7db5ce",
"oa_license": "CCBY",
"oa_url": "https://econjournals.com/index.php/ijeep/article/download/10532/5599",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "8c3dcb06b5b974dda366bf5ef4f489744190bce7",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
1025160 | pes2o/s2orc | v3-fos-license | 8R-Lipoxygenase-catalyzed synthesis of a prominent cis-epoxyalcohol from dihomo-γ-linolenic acid: a distinctive transformation compared with S-lipoxygenases.
Conversion of fatty acid hydroperoxides to epoxyalcohols is a well known secondary reaction of lipoxygenases, described for S-specific lipoxygenases forming epoxyalcohols with a trans-epoxide configuration. Here we report on R-specific lipoxygenase synthesis of a cis-epoxyalcohol. Although arachidonic and dihomo-γ-linolenic acids are metabolized by extracts of the Caribbean coral Plexaura homomalla via 8R-lipoxygenase and allene oxide synthase activities, 20:3ω6 forms an additional prominent product, identified using UV, GC-MS, and NMR in comparison to synthetic standards as 8R,9S-cis-epoxy-10S-erythro-hydroxy-eicosa-11Z,14Z-dienoic acid. Both oxygens of (18)O-labeled 8R-hydroperoxide are retained in the product, indicating a hydroperoxide isomerase activity. Recombinant allene oxide synthase formed only allene epoxide from 8R-hydroperoxy-20:3ω6, whereas two different 8R-lipoxygenases selectively produced the epoxyalcohol.A biosynthetic scheme is proposed in which a partial rotation of the reacting intermediate is required to give the observed erythro epoxyalcohol product. This characteristic and the synthesis of cis-epoxy epoxyalcohol may be a feature of R-specific lipoxygenases.
Expression and purifi cation of 8R-lipoxygenase
cDNA of the 8R-LOX domain of the P. homomalla peroxidase-lipoxygenase fusion protein (19) was subcloned into the pET3a vector (with an N-terminal His4 tag), and the protein was expressed in Escherichia coli BL21 (DE3) cells and purified by nickel affinity chromatography according to a previously published protocol (20). For clarity, this 8R-lipoxygenase is referred to herein as the recombinant 8R-LOX.
The second P. homomalla 8R-lipoxygenase tested here was the soluble enzyme purified in 1996 (21); aliquots from the original purification were stored at −70°C and these retained sufficient activity for use 15 years later. This enzyme is referred to here as the soluble 8R-LOX.
Incubation with enzymes
Side-by-side incubations were performed at room temperature in 1 ml of 50 mM Tris pH 8.0 containing 500 mM NaCl, 2 mM CaCl 2 and 0.01% Emulphogene detergent using [ 14
GC-MS analysis of 18O2 incorporation in product from coral
Incubation of 20:3ω6 (100 μM) with an extract of P. homomalla acetone powder (3 mg powder/ml pH 8 buffer) was conducted under an atmosphere of 18O2. The products were purified initially by RP-HPLC (MeOH/H2O/HAc, 80/20/0.01 by volume), and then further purified by SP-HPLC (Hex/IPA/HAc, 100/5/0.1 by volume for the epoxyalcohol). Aliquots of HETrE (prepared by TPP reduction) and epoxyalcohol from the 18O2 incubation, together with unlabeled samples, were hydrogenated (H2, palladium on carbon in ethanol for 2 min) and, after addition of water and extraction with ethyl acetate, they were converted to the pentafluorobenzyl (PFB) ester TMS ether derivative. The 18O content was determined by GC-MS analysis in the negative ion/chemical ionization mode using a Nermag R10-10B instrument with a 5 m SPB-1 capillary column programmed from 150° to 300° at 20°/min. The samples were subjected to rapid repetitive scanning over a 10 a.m.u. mass range (0.2 s per scan) covering the prominent M-181 ion (loss of PFB, resulting in the RCOO− ion of the product); approximately 30 scans were collected during elution of the GC peak, and these were averaged for calculation of the relative ion abundances. For analysis of hydrogenated

Although the naturally occurring prostaglandin products in P. homomalla are all 2-series derived from arachidonic acid, we included a study of the metabolic fate of 20:3ω6 because it was originally reported as a substrate for the enzymatic activity in the coral (18) and because study of 20:3ω6 metabolism in P. homomalla is not complicated by the presence of large amounts of endogenous products. With the availability of cloned recombinant enzymes from P. homomalla, we recently returned to the issue of the origin of this extra product from 20:3ω6. The novel product we characterize herein is formed specifically by 8R-lipoxygenase metabolism, and its unusual stereochemistry may represent a feature of the secondary reactions of R- as opposed to S-lipoxygenases.
Incubation with coral extracts
Frozen P. homomalla was cut into small pieces with scissors and placed in 10 vols of 50 mM Tris, pH 8, containing 1 M NaCl on ice and homogenized using a Polytron blender (Brinkmann) in 10-s bursts. The homogenate was allowed to settle under gravity for up to 30 min; aliquots of the supernatant were diluted 10-fold into fresh buffer for incubations with fatty acid substrates (100 μM), typically for 5 min at room temperature. Products were extracted by the addition of 1 M KH2PO4 plus sufficient 1 N HCl to give pH 4, followed by extraction with 2 vols of ethyl acetate. The organic phase was collected, washed with water to remove traces of acid, and taken to dryness under nitrogen. The extracts were redissolved in a small volume of MeOH before HPLC analysis.
Acetone powders of P. homomalla were prepared as described (5) and stored at −70°C until use. Typically, a 3 mg/ml suspension/solution in 50 mM Tris (pH 8) containing 1 M NaCl was prepared for incubations with substrates (5 min at room temperature). For recovery of 8-hydroperoxides from these incubations, the 3 mg/ml suspension was diluted 10-fold, and the incubation time was extended to 20 min; a few milligrams of 8R-HPETE or 8R-HPETrE could be prepared and purified from 0.5 l of the dilute acetone powder incubations. Products were extracted as described above. If required, before HPLC, hydroperoxides were reduced using a molar excess of triphenylphosphine in MeOH (5 min at room temperature).
HPLC analyses
Typically, aliquots of the extracts were analyzed initially by RP-HPLC using an ODS Ultrasphere 5 μm column (Beckman) (25 × 0.46 cm) or Waters Symmetry column (25 × 0.46 cm) with a solvent of MeOH/H2O/HAc (80/20/0.01 or 75/25/0.01 by volume) at a flow rate of 1 ml/min with on-line UV detection (1100 series diode array detector; Agilent, Santa Clara, CA) and radioactive monitoring (Radiomatic Flo-One). Larger amounts (0.5-1 mg of total fatty acids) were injected for collection of products, or a semi-preparative column (Ultrasphere ODS, 25 × 1 cm; Beckman) was used for larger quantities. Further analysis and purification was achieved by SP-HPLC using a 5 μm silica column (Alltech) or a Beckman Ultrasphere 5 μm silica column using a
Determination of the C-10 hydroxyl configuration
To establish the relative stereochemistry of the epoxide to the C-10 hydroxyl, two saturated analogs of the natural product were prepared by total chemical synthesis as outlined in the supplementary data and in supplementary Scheme I.
These synthetic standards, 8 R ,9 S -cis -epoxy-10-hydroxy-eicosanoates with the 9,10 erythro and threo relative confi gurations, were fi rst analyzed by GC-MS (EI mode) in comparison to the hydrogenated natural product as the methyl ester TMS ether derivatives. The threo 8,9-cis -epoxy-10-hydroxy-eicosanoate standard eluted before the erythro diastereomer (5 m SPB-1 capillary column, 150° to 300° at 20°/min) each as well resolved peaks with retention times of 4 min 52 s and 5 min 1 s, respectively. Their mass spectra had a noticeably different pattern of ion fragments, especially at the lower m/z values (see supplementary , because it had been used three times previously for 18 O syntheses). The [ 18 O 2 ]8 R -HPETrE labeled in the hydroperoxy group was purifi ed by SP-HPLC and reacted with recombinant 8 R -LOX under a normal atmosphere to produce the corresponding epoxyalcohol. The 18 O contents of the 8 R -HPETrE and its corresponding epoxyalcohol product (which share the same molecular weight, 338 for the unlabeled species) were measured by negative ion electrospray LC-MS using a Ther-moFinnigan TSQ Quantum instrument by rapid repetitive scanning over the mass range encompassing the M-H anions ( m/z 330-350, 5 scans/sec). A total of 20-30 scans over the HPLC peaks were averaged to obtain the partial mass spectra of labeled and unlabeled epoxyalcohol and 8 R -HPETrE.
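As a purely illustrative sketch (not part of the original analysis), the relative 18O enrichment can be estimated from such averaged spectra by comparing the intensities of the unlabeled and labeled carboxylate anions; the m/z values below are the assumed M−H ion of the unlabeled species (m/z 337 for MW 338) and its +2 and +4 isotopologues, the variable names are hypothetical, and no correction for natural isotope abundance is applied.

```python
import numpy as np

# scans: 2-D array (n_scans x n_mz) of ion intensities collected across the
# HPLC peak; mz: the corresponding m/z axis (hypothetical variable names)
mean_spectrum = scans.mean(axis=0)

def intensity_at(mz_axis, spectrum, target, tol=0.3):
    """Sum intensity within +/- tol of a nominal m/z value."""
    mask = np.abs(mz_axis - target) <= tol
    return spectrum[mask].sum()

i_337 = intensity_at(mz, mean_spectrum, 337.0)   # unlabeled [M-H]-
i_339 = intensity_at(mz, mean_spectrum, 339.0)   # one 18O
i_341 = intensity_at(mz, mean_spectrum, 341.0)   # two 18O

total = i_337 + i_339 + i_341
print("fraction retaining both oxygens as 18O:", i_341 / total)
```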
Metabolism in extracts of P. homomalla
As originally reported (5), when arachidonic acid (20:4ω6) is incubated with extracts of P. homomalla, the fatty acid is rapidly metabolized by 8R-LOX, and the resulting 8R-HPETE is further transformed by allene oxide synthase, leading to the appearance of α-ketol and cyclopentenone end products (Fig. 1, lower panel). Metabolism of dihomo-γ-linolenic acid (20:3ω6) is similar, except for the appearance of a prominent, more polar product that is absent (or present in insignificant amounts) in the arachidonic acid incubations (Fig. 1).
Identification of the novel 20:3ω6 product
The structure was established based on UV, NMR, and GC-MS data. The purified polar product displayed only end absorbance in the UV, indicating no conjugated double bonds. A quantity of ≈100 μg was prepared, and the proton NMR and COSY spectra were recorded in CDCl3. These results (see supplementary Table I) were consistent with an 8,9-cis-epoxide [cf. (22)], with an α-hydroxyl at C-10 and two cis double bonds at 11,12 and 14,15. So far this established the covalent structure as 8,9-cis-epoxy-10-hydroxy-eicosa-11Z,14Z-dienoic acid, an epoxyalcohol of the hepoxilin B-type that is distinctive in being a cis-epoxide (23). Because the precursor of the epoxyalcohol is 8R-HPETrE (an assumption proved formally using purified enzymes, vide infra), these results are compatible with essentially complete retention of the hydroperoxy oxygens from the precursor 8R-HPETrE.
Lack of product using allene oxide synthase
There are several precedents for the transformation of fatty acid hydroperoxides to epoxyalcohols catalyzed by allene oxide synthase (AOS) and related enzymes (24-26), and it seemed possible that this might account for the formation of the 20:3ω6-derived epoxyalcohol. However, experiments with the expressed AOS domain of the P. homomalla AOS-LOX fusion protein (19) produced only allene oxide as product [detected as the major α-ketol hydrolysis product and cyclopentenone (5)] from 8R-HPETE or 8R-HPETrE (data not shown).
Formation of epoxyalcohol by 8R-LOX enzymes
By contrast, use of the recombinant LOX domain of the AOS-LOX fusion protein gave positive results. When suffi cient enzyme was used to quickly transform (<1 min) all the fatty acid to the corresponding 8 R -hydroperoxide, further reaction generated secondary products. When observed by repetitive scanning in the UV, the rapid appearance of the derivatives (data not shown). The erythro standard had an indistinguishable mass spectrum and retention time to the hydrogenated epoxyalcohol product of P. homomalla . Their structural identity was confi rmed by comparison of the NMR spectra of the saturated natural product with the synthetic standards ( Fig. 2 ). These data confi rmed the erythro relative confi guration at 9,10 in the natural product. Because P. homomalla exhibits only 8 R -LOX activity, the cis epoxide moiety can be assigned as the 8 R ,9 S enantiomer. Thus, the complete structure of the novel product from 20:3 6 is established as 8 R ,9 S -cis -epoxy-10 S -hydroxy-eicosa-11 Z ,14 Z -dienoic acid. Metabolism in the coral extracts is summarized in Scheme 1 . We also tested the soluble 76-kDa 8 R -LOX from P. homomalla , which was available in limited quantities from the original purifi cation ( 21 ). It reacted very similarly to the recombinant 8 R -LOX from the AOS-LOX fusion protein. The substrates 20:4 6 and 20:3 6 were comparable for oxygenation to the corresponding 8 R -hydroperoxide; however, 8 R -HPETrE was converted to further products at over twice the rate of 8 R -HPETE. When reactions with identical amounts of enzyme were analyzed and stopped at the same time (with half of the 20:3 6 hydroperoxide consumed), subsequent RP-HPLC analysis confi rmed the more extensive metabolism of 8 R -HPETrE and the appearance of a single prominent, more polar peak detected at 205 nm, with no comparable prominent product from 8 R -HPETE ( Fig. 3B ). This polar product from 20:3 6 was identifi ed as the same epoxyalcohol identifi ed earlier by its identical UV profi le and cochromatography on both RP-HPLC and SP-HPLC with the epoxyalcohol formed by the recombinant 8 R -LOX.
Retention of hydroperoxy oxygens in the epoxyalcohol
When 8 R -HPETrE containing an ف 1:2 mixture of 2 16 O and 2 18 O in the hydroperoxide group was reacted with the recombinant 8 R -LOX, the 18 O contents of the substrate and epoxyalcohol product were almost indistinguishable ( Fig. 4 ). Close inspection indicated 98% retention of both hydroperoxy oxygens in the epoxyalcohol, pointing to a conjugated diene at 237 nm was followed by the gradual decrease in intensity at this wavelength, with the appearance of a new chromophore characteristic of a conjugated triene(s) centered on ف 270 nm and a weaker broad absorbance in the area of 300-350 nm. The main product of the 20:3 6 reaction absorbs relatively weakly, at 205 nm, and is not detected by UV scanning (see below). In side-by-side incubations monitored in the UV, it was apparent that the 20:3 6-derived 8 R -HPETrE disappeared more quickly than the corresponding arachidonic acid-derived 8 R -HPETE. These side-by-side reactions were also conducted using 14 Clabeled fatty acid substrate, and, after extraction of these samples using C18 cartridges, RP-HPLC analysis showed distinctly different profi les of products ( Fig. 3A ). The results confi rmed the more extensive metabolism of the 20:3 6-derived 8R-HPETrE (less remaining compared with 8 R -HPETE) and, more signifi cantly, the prominent appearance of a polar product unique to 20:3 6 metabolism. This distinctive peak at ف 10 min is the most abundant secondary product from 20:3 6, detected at 205 nm in the UV. In larger-scale incubations, this polar product from 20:3 6 was prepared in suffi cient amounts for structural analysis by 1 H-NMR (see supplemental Tables I and II). On the basis of these data, the 8 R -LOX product was shown to be identical to the coral epoxyalcohol 8 R ,9 S -cis -epoxy-10 Shydroxy-eicosa-11 Z ,14 Z -dienoic acid. ing for the erythro confi guration of the epoxyalcohol product (discussed in the following subsection), and therefore it is imperative that the structural assignment is secure. For trans -epoxy epoxyalcohols, there are empirical rules that reliably allow assignment of the erythro or threo confi guration. These rules relate to their relative polarity on TLC, relative retention time on GC, and both the relative chemical shifts and coupling constants on NMR ( 22,32,33 ). However, for cis -epoxy products there are fewer closely analogous examples in the literature (e.g., all fatty acidrelated epoxyalcohols with data available are trans epoxides), and the differences for erythro and threo on NMR are small or nonexistent ( 34 ). Our assignment is founded on the well precedented threo product in Sharpless' hydroxyldirected epoxidation of Z-allylic alcohols with Ti(OiPr) 4 (see supplementary Scheme I, epoxidation of 10 R -3) (34)(35)(36)(37). This allowed assignment of the two epoxide diastereomers ( erythro and threo ) obtained via Sharpless asymmetric epoxidation (see supplementary Scheme I). Indeed, the latter assignment shows good agreement with precedent using closely related model compounds ( 34,35 ). For example, the asymmetric epoxidation (using L-(+)diisopropyl tartrate) of 3-hydroxy-4 Z -undecenol yields an unreactive 3 R enantiomer with 2:3 ratio of erythro : threo products ( 35 ); our results concur exactly with this precedent and others ( 34,36,37 ).
Proposed catalytic cycle
The reaction is catalyzed and controlled by the active site iron, which must fi rst cleave the hydroperoxide and subsequently catalyze an oxygen rebound and hydroxylate the intermediate epoxyallylic radical while both hydroperoxy oxygens are retained in the epoxyalcohol product ( Fig. 5 ). This is easy to conceptualize for the reactions of S -confi guration fatty acid hydroperoxides because all steps occur on the same face of the reacting molecule, allowing formation of a trans epoxide and threo alcohol ( Fig. 5 , box). Our results with the R -confi guration hydroperoxide indicate not only formation of a cis -epoxide, which itself presents no conceptual problem, but also the erythro confi guration of the alcohol. Assuming the iron is in control, this necessitates either a 9,10 bond rotation before hydroxylation or fl ipping over of the reacting epoxyallylic radical intermediate ( Fig. 5 , right and left options). Perhaps the 8 R -hydroperoxide sits partly turned away from square so that the epoxyallylic intermediate, when formed, further rotates to expose the opposite face of the intermediate for hydroxylation. We note too that the formation of cis -epoxides may be a characteristic of 8 R -LOX because the activity in P. homomalla extracts was shown to convert 5 S -HPETE to cis -epoxy LTA 4 , not to the well known trans -epoxy leukotriene A 4 ( 38 ). Although the mechanisms of epoxyalcohol and LTA 4 synthesis differ, the reactions being initiated by the ferrous and ferric enzymes, respectively, the substrate conformation that predisposes to cis -epoxide formation is dictated by binding in the active site and thus could be dictated in similar fashion by an enzyme that favors R versus S oxygenation. mechanism involving close control of the transformation by the 8 R -LOX enzyme.
Hydroperoxide isomerase activity
The typical dioxygenase activity of lipoxygenase enzymes involves activation of the resting ferrous enzyme to the ferric form, then cycling of the ferric enzyme as it catalyzes reaction with polyunsaturated fatty acid and O 2 ( 27 ). By contrast, the epoxyalcohol biosynthesis we characterize here fi ts the criteria for a LOX enzyme acting as a hydroperoxide isomerase ( 28,29 ). In this case, the reaction cycle is initiated by the ferrous enzyme. Several lines of evidence suggest that a lack of access of molecular oxygen within the active site promotes hydroperoxide isomerase activity ( 30 ). If present, molecular oxygen reacts readily with radical intermediates, thus intercepting and blocking hydroperoxide isomerase cycling. Furthermore, molecular oxygen promotes enzyme activation to the ferric form, also inhibiting isomerase activity ( 29,31 ). Therefore, one can deduce that the 8 R -HPETrE is an acceptable substrate for interaction with the ferrous iron and that O 2 is excluded from intercepting the radical intermediates. With the arachidonic acid-derived 8 R -hydroperoxide, the overall rate of reaction is comparatively sluggish, and very little epoxyalcohol product is formed. The main products are dihydroperoxides or leukotriene A-related diols, both of which are products of the ferric enzyme. This suggests that the selective reaction with the 20:3 8 R -hydroperoxide is facilitated by exclusion of O 2 within a critical part of the active site and that this does not occur with binding of the arachidonate analog.
Assignment of the 10S (erythro, anti) confi guration
In postulating a mechanism for the hydroperoxide cycling with 8 R -HPETrE, there is some diffi culty in account- lipoxygenases can diffuse out of the active site or be subject to interception by molecular oxygen, an event that promotes lipoxygenase activation to the ferric form ( 30 ). Accordingly, one might expect there is more time in the 8 R -LOX reaction for the rotation required to form the observed erythro epoxyalcohol product ( Fig. 5 ).
Wrap-up of a historical issue
The striking and unexpected difference between 20:4 6 and 20:3 6 metabolism in P. homomalla was detected in the original investigations of prostaglandin biosynthesis by Corey and Ensley, and the prominent extra product from 20:3 6 was partially characterized ( 17 ). For example, it was shown to exhibit only weak end absorbance in the UV, to not react with sodium borohydride, to contain two double bonds and an alcohol and a possible epoxy functionality, and to have a molecular formula as the methyl ester of C 21 H 36 O 4 , all a perfect match for the epoxyalcohol we identify. Furthermore, the reported mass spectrum of the hydrogenated product as the methyl ester TMS derivative [listed in tabular form in the thesis ( 17 )] contains all the major ions and similar ion abundances as reported in our Results section. There is little doubt that this product and our epoxyalcohol are the same compound. The existence of 8 R -LOX metabolism in P. homomalla was not uncovered until the mid-1980s, a decade after these early biosynthetic studies ( 8 ), and it was only around the years 1995-2000 that the origin of the coral prostaglandins via cyclooxygenase was fi rmly established ( 6,7,(47)(48)(49). (8,9-cis -epoxy, 9,10 erythro ) and the complete retention of the hydroperoxy oxygens. Assuming that the active site iron cleaves the hydroperoxide and momentarily binds the distal hydroperoxy oxygen, the epoxyallylic radical intermediate must either rotate at the 9,10 bond (left) or fl ip over (right) to produce the epoxyalcohol product. In the box: reaction of S -confi guration fatty acid hydroperoxide forms a trans -epoxy threo -hydroxy epoxyalcohol.
Other biosyntheses of cis-epoxyalcohols
Although heretofore only trans -epoxyalcohols have been reported from lipoxygenase catalysis (e.g., 23,28,[39][40][41], other enzymes can make the cis -epoxides. The majority of these are mechanistically quite distinct, however, because the epoxide is formed via oxygen transfer. The epoxyalcohol synthase activities in the fi sh parasitic fungus Saprolegnia parasitica ( 42 ) and in potato leaves and beetroot ( 43,44 ) catalyze oxygen transfer from the hydroperoxy fatty acid to the adjacent conjugated diene; the original hydroperoxide moiety is reduced to an alcohol, while the transferred oxygen produces trans -or cis -epoxidation of the trans and cis double bonds, respectively. In the case of plant peroxygenases, epoxidation may occur via intermolecular or intramolecular oxygen transfer from a fatty acid hydroperoxide to a cis double bond ( 45,46 ). More similar to our reaction, but forming the threo product, is the conversion of 13 S -hydroperoxylinoleic acid to the 11 S -threo -hydroxy-12 R ,13 S -cis -epoxide by a cytochrome P450 in the amphioxus Branchiostoma fl oridae ( 26 ). Notably, the oxygen rebound step in P450 catalysis is very fast ( ف 10 Ϫ 9 s), tending to favor suprafacial hydroxylation of the intermediate, forming the threo epoxyalcohol. By comparison, the equivalent intermediate in the hydroperoxide isomerase activity of | 2018-04-03T03:11:26.218Z | 2012-02-01T00:00:00.000 | {
"year": 2012,
"sha1": "54c4d8f9d29685fdd7790a1a1603b167acd08ee1",
"oa_license": "CCBY",
"oa_url": "http://www.jlr.org/content/53/2/292.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "104342b95c82711cd47f196d48d17573799b6262",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
259297203 | pes2o/s2orc | v3-fos-license | Neuropsychology Social Cognitive Disruptions in Multiple Sclerosis: The Role of Executive (Dys)Function
Objective: Multiple sclerosis (MS) is a chronic demyelinating disease of the central nervous system, resulting in a range of potential motor and cognitive impairments. The latter can affect both executive functions that orchestrate general goal-directed behavior and social cognitive processes that support our ability to interact with others and maintain healthy interpersonal relationships. Despite a long history of research into the cognitive symptoms of MS, it remains uncertain if social cognitive disruptions occur independently of, or reflect underlying disturbances to, more foundational executive functions. The present preregistered study investigated this directly. Method: Employing an experimental design, we administered a battery of computerized tasks online to a large sample comprising 134 individuals with MS and 134 age- and sex-matched healthy controls (HCs). Three tasks measured elements of executive function (working memory, response inhibition, and switching) and two assessed components of social cognition disrupted most commonly in MS (emotion perception and theory of mind). Results: Individuals with MS exhibited poorer working memory (d = .31), response inhibition (d = −.26), emotion perception (d = .32), and theory of mind (d = .35) compared with matched HCs. Furthermore, exploratory mediation analyses revealed that working memory performance accounted for approximately 20% of the group differences in both measures of social cognition. Conclusions: Disruptions to working memory appear to serve as one of the mechanisms underpinning disturbances to social cognition in MS. Future research should examine if the benefits of cognitive rehabilitation programs that incorporate working memory training transfer to these social cognitive processes.
Multiple sclerosis (MS) is a chronic demyelinating disease of the central nervous system that affects more than 2.8 million people worldwide (Walton et al., 2020). As a result of widespread neurodegeneration, MS is characterized by highly variable symptoms that can include both physical (motor) and cognitive impairment. Although physical symptoms can impose direct constraints on individuals' mobility, cognitive symptoms can also impinge profoundly upon quality of life. This is true especially for cognitive disturbances that interfere with individuals' ability to maintain meaningful interpersonal relationships and, in turn, a healthy social environment. The present study set out to provide a precise characterization of the cognitive disturbance(s) occurring in MS that might underpin such negative psychosocial outcomes.
Cognitive symptoms can occur in all clinical phenotypes of MS, with estimated prevalence rates between 20% and 75% (Benedict et al., 2020;Johnen et al., 2017). Disruptions to cognitive processing speed and executive functions have been reported most frequently, perhaps reflecting the historical focus of research (Chiaravalloti & DeLuca, 2008;Sumowski et al., 2018). Executive functions refer collectively to mental operations that orchestrate adaptive and goaldirected behavior (Diamond, 2013), and their disruption is likely to interfere with activities of daily living. In addition to these foundational cognitive systems, however, disturbances to social cognition are reported in 20%-40% of individuals with MS (Islas & Ciampi, 2019)-that is, the collection of cognitive processes that allow us to interact effectively with others and conduct ourselves appropriately in interpersonal contexts (C. D. Frith & Frith, 2012;Happé et al., 2017). In particular, two core components of social cognition have been shown repeatedly to become impaired in all MS phenotypes: our ability to process others' emotional states from their facial expressions (referred to herein as "emotion perception") and our capacity to attribute mental states to others (e.g., beliefs, intentions; for reviews, see Bora et al., 2016;Cotter et al., 2016;Lin et al., 2021). The latter is referred to as theory of mind (ToM; C. Frith & Frith, 2005;Premack & Woodruff, 1978) and considered essential for social interaction; understanding that others have beliefs independent of our own allows us to understand, predict, and even manipulate their behavior. In this light, disruptions to these two elements of social cognition in MS could impede individuals' ability to develop and maintain interpersonal relationships with friends, family members, colleagues, and health care providers, thereby compromising their overall quality of life (Islas & Ciampi, 2019;Topcu et al., 2020).
A long-standing yet still unanswered question is whether social cognitive disturbances in MS occur independently or reflect manifestations of disruptions to more foundational executive functions that guide behavior in both social and nonsocial contexts (see Doskas et al., 2021). Several studies have reported that the performance of individuals with MS on tasks designed to measure emotion perception or ToM correlate positively with their performance on neurocognitive tests of working memory (e.g., Genova et al., 2015;Lenne et al., 2014) and other executive functions (e.g., Ciampi et al., 2018;Dulau et al., 2017;J. D. Henry et al., 2009;Kraemer et al., 2013). The presence of such associations is highly inconsistent, however, likely reflecting the underpowered samples and/or heterogeneous methods employed typically in this research domain (e.g., A. Henry et al., 2011;Kraemer et al., 2013). Further, some studies report that the impaired performance of individuals with MS compared with matched healthy control (HC) samples on tasks measuring emotion perception and ToM remain significant after controlling for performance on neurocognitive assessments (Genova et al., 2020;Pöttgen et al., 2013;Raimo et al., 2017; for a review, see Cotter et al., 2016) and can occur independently of disturbances to executive functions in some individuals (A. Henry et al., 2022). Perhaps more importantly, none of these studies provide insights into the causal relationships among measures of these seemingly discrete cognitive systems. As such, it remains unclear if and how disturbances to emotion perception and/or ToM are underpinned by disruptions to more foundational executive functions.
A similar debate is found in the broader field of social cognitive research, wherein some scholars conceptualize components of social cognition as particular instantiations of foundational cognitive processes deployed in both social and nonsocial domains (e.g., Binney & Ramsey, 2020;Ramsey & Ward, 2020). Certain executive functions should play a particularly pivotal role in supporting social cognition: These include working memory (monitoring and updating memory representations), response inhibition (intentionally overriding automatic or involuntary behavior that is inappropriate in the current context), and switching (switching flexibly between multiple tasks/mental sets; see Darda et al., 2020;Shaw et al., 2020). Emotion perception and ToM are cases in point: To infer another's emotional and/or mental state at any given moment, we must continuously process available social cues (e.g., their eye gaze and facial expressions) and update our working memory representations accordingly, inhibit our own emotional and mental state to avoid egocentric misattributions ("decentering"; Bukowski, 2018;Lamm et al., 2016), and switch flexibly between self-and other-directed mentation (inferring another's state often requires us to consider how we ourselves might think or feel in their position; see Samson, 2009). In this light, disruptions to emotion perception and ToM might represent manifestations of disturbances to one or more of these underpinning executive functions.
In the present preregistered study, we investigated if and how disturbances to emotion perception and/or ToM in MS might reflect underlying disruptions to working memory, response inhibition and/or switching components of executive function. First, we created a neuropsychological test battery comprising computerized versions of experimental tasks used frequently to assess each element of these cognitive systems. For executive functions, working memory was assessed with the keep track task (KTT), response inhibition with the Stroop task, and switching with the color-shape switching task (CSS; Friedman et al., 2009;Miyake et al., 2000). We measured emotion perception with the reading the mind in the eyes test (RMET; Baron-Cohen et al., 2001), given meta-analytic evidence of reliable performance deficits on this task in individuals with MS (see Bora et al., 2016;Cotter et al., 2016;Lin et al., 2021). Although the RMET has been used extensively as a measure of ToM in studies investigating social cognition into MS, formal assessments of its factorial structure, construct validity and associations with other tasks suggest that it more likely measures the accuracy with which emotions are perceived (Higgins et al., 2022;Kittel et al., 2022;Oakley et al., 2016;Quesque & Rossetti, 2020;Schurz et al., 2021). To measure ToM, we utilized another tool employed commonly in this area of research-the faux pas test (FPT; Gregory et al., 2002). This is considered as an advanced test of ToM ability that requires social sensitivity; to correctly detect the occurrence of a faux pas among fictional characters in social situations, respondents must appreciate that each character has a different mental (e.g., belief) state that can be influenced by another's statements. Employing a battery of tasks used most frequently to assess these specific components of social cognition and executive function in MS allowed us to not only draw comparisons with previous research findings but also identify interrelationships and dependencies among these cognitive systems that might provide further mechanistic insights into their co-occurring disruptions. To overcome the small sample sizes recruited typically in previous studies, which are likely to have obfuscated true relationships among social and domain-general cognitive processes, we administered this battery online using a crowdsourcing platform. This allowed us to acquire data from a sample of individuals with MS powered sufficiently to detect small-to-medium effect sizes while also capturing the heterogeneity of this patient population that is seldom considered in existing research. Moreover, this approach allowed us to recruit an equally sized group of HCs matched closely on various demographics.
Driven by meta-analyses that synthesize vast corpora of research studies into disrupted executive function (Islas & Ciampi, 2019;Johnen et al., 2017;Sumowski et al., 2018) and social cognitive impairments in MS (Bora et al., 2016;Cotter et al., 2016;Lin et al., 2021), we hypothesized that individuals with MS would perform worse than HCs across all measures of executive function and social cognition. For those measures of executive functions on which the MS group exhibited impairment relative to the HC group, we then performed exploratory mediation analyses to quantify the extent to which performance on that measure accounted for between-group differences on the RMET and FPT. In doing so, we examined whether disruptions to specific executive functions might underpin the disturbances to social cognitive processes.
Transparency and Openness
This study was preregistered on the Open Science Framework prior to data collection and analyses (https://osf.io/shukw/) and any necessary deviations are outlined within this report. All materials and data are available publicly (https://osf.io/2bhmy). This report of the study follows the Journal Article Reporting Standards for quantitative research.
Participants
The sample size was determined using an a priori power analysis conducted with G * Power (Faul et al., 2007), as described fully in the preregistration. In brief, we estimated the sample size required to detect between-group differences with an effect size of d = .305 at 80% power and α = .05 for pairwise comparisons following significant analyses of variance (ANOVAs). The effect size of interest was the smallest mean difference between an independent group of MS and HC samples in a previous study (Czekóová et al., 2019). A sample size of 268 individuals was required with n = 134 in each group. This defined our target sample size after any exclusions (e.g., failed attention checks; see below).
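For transparency, the target sample size can be cross-checked in Python with statsmodels rather than G*Power. Assuming a one-tailed (directional) independent-samples comparison at the stated effect size, alpha, and power, the calculation below yields a figure close to the 134 participants per group reported here; it is an illustrative approximation, not the original G*Power computation.

```python
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.305, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative="larger")
print(math.ceil(n_per_group))   # approximately 134 participants per group
```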
Volunteers were recruited online through Prolific Academic (https://www.prolific.co/), which has been shown to yield higher quality data than other online recruitment platforms (Peer et al., 2017). All participants were required to be aged 17-75 years, fluent in English, and report no history of mild cognitive impairment or dementia and no known psychiatric or neurological conditions (other than MS in the MS group). Although initial inclusion criteria specified that participants must have English as their first language, this was extended to include those who were fluent in English to enable us to achieve our planned sample of individuals with MS.
Individuals with a formal diagnosis of MS were recruited in three steps: first, prescreening criteria on Prolific were used to selectively recruit individuals who reported a diagnosis of MS when signing up to this platform; second, a formal diagnosis of MS was stated as one of the inclusion criterion in the study advert before volunteers progressed to the procedure; third, the demographics survey asked participants to confirm explicitly that they had a formal diagnosis of MS but no other form of neurological or psychiatric diagnoses. Participants were also asked a series of questions concerning their diagnosis: their specific type of MS (e.g., secondary or primary progressive), disease duration, current treatment, and recent history of relapses.
After excluding six participants due to technical issues (n = 3), careless responding (n = 2; failed attention checks, poor data), and misreporting their diagnosis (n = 1), the target sample of 268 participants was achieved. Of this sample, 134 reported a formal diagnosis of MS and 134 were age-and sex-matched HCs. Table 1 summarizes participant demographics (see supplemental Table S1 for more detailed information).
All participants provided written informed consent, and the study was approved by Aston University's research ethics committee (ref. 1791). Participation was recompensed at £7.50/hr.
Procedure
Demographic data and consent were acquired through Qualtrics (Provo, Utah, United States; https://www.qualtrics.com), after which participants were redirected to Pavlovia (https://pavlovia.org; Peirce et al., 2019) to complete five experimental tasks administered in a fully randomized order. The two social cognition tasks were selected on the basis of meta-analytic data (Bora et al., 2016; Cotter et al., 2016; Lin et al., 2021), and the three executive function tasks were selected from the seminal article by Friedman et al. (2009). Figure 1 presents a schematic of these five tasks. Two attention checks were embedded in the first and second half of the experiment: First, participants were asked "Which planet do you live on?" and were required to select from four possible answers ("EARTH," "SATURN," "MERCURY," and "MARS"); in the second half of the procedure, participants were asked to type the word "purple" into a free-response box. These questions were chosen as ethically viable attention checks, as recommended by Prolific guidelines, and only participants who passed both of these checks were included in the analysis.
KTT
The KTT (adapted from Yntema & Mueser, 1960) was administered as a measure of updating. On each of 12 trials, participants were first presented with two, three, or four target categories (four trials each; metal, country, distance, relative, color or animal). Fifteen words were then presented sequentially, each for 1,500 ms, including two to three exemplars of each target category. Participants were instructed to remember the last (most recent) word belonging to each of the target categories; when all the words had been presented, they were asked to indicate with a button press which of two, three, or four exemplar words was the last to be presented for a specific target category. A participant's data were excluded if they achieved <60% accuracy. An index of working memory was computed by calculating response accuracy across all 36 questions in the trials, with higher accuracy reflecting better working memory ability.
Stroop Task
The Stroop task (Stroop, 1935) was employed as a measure of response inhibition. In each trial, a fixation cross was presented for 500 ms and then replaced immediately by one of three color words or a string of asterisks displayed in red, green, or blue. Participants were asked to indicate the color in which the words or asterisks were presented by pressing one of three response keys. After each response, a blank screen was presented for 1,000 ms before the next trial began. Participants' reaction time (RT) was recorded only for correct responses. The task consisted of three trial types: (a) 60 nonword trials, comprising strings of three to five asterisks presented in one of the three colors; (b) 60 incongruent trials, in which a word for one of the three colors was printed in a different color font (e.g., "RED" printed in blue); and (c) 60 filler trials, whereby a neutral (noncolor) word was printed in one of the three colors (e.g., "cow" presented in red font). Three practice trials were also administered, but were discarded from subsequent analyses. Trial order was pseudorandomized so that the same trial type was presented on no more than three consecutive occurrences, and color words or fonts were different to that of the preceding trial. There were four blocks of trials, with each trial type presented 15 times per block. Individual participant data sets were excluded in full if their response accuracy was below 60%, and individual trials were omitted if RTs were ±3 SD of their mean score. An interference effect was computed by subtracting the mean RT of correct nonword trials from those of correct incongruent trials. A lower interference effect was used as an index of better response inhibition.
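A minimal pandas sketch of this scoring procedure is shown below; the column names are hypothetical, the accuracy cutoff and ±3 SD trial exclusion follow the rules described above, and the function is illustrative rather than the analysis code used in the study.

```python
import pandas as pd

def stroop_interference(trials):
    """trials: one participant's data with columns 'trial_type'
    ('nonword', 'incongruent', 'filler'), 'correct' (bool) and 'rt' (ms)."""
    if trials["correct"].mean() < 0.60:          # exclude low-accuracy data sets
        return None
    correct = trials[trials["correct"]].copy()
    m, sd = correct["rt"].mean(), correct["rt"].std()
    correct = correct[(correct["rt"] - m).abs() <= 3 * sd]   # drop RT outliers
    means = correct.groupby("trial_type")["rt"].mean()
    return means["incongruent"] - means["nonword"]           # interference effect
```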
CSS
The CSS (Miyake et al., 2004) was administered as a measure of switching. At the start of each trial, participants were shown a fixation cross for 350 ms, followed by the word cue "Shape" or "Color" presented for 150 ms. A triangle or circle was then presented in red or blue with the cue remaining on the screen. The word cue instructed participants how they should respond on each trial: If "Color" was presented, they were required to indicate whether the shape was red or blue; if "Shape" was presented, they were required to indicate whether it was a circle or triangle. Participants gave their response via the left and right arrow keys, respectively. There were two types of trials: On no-switch trials, the word cue was the same as the previous trial; on switch trials, the word cue changed from the previous trial (e.g., a "Color" trial followed by a "Shape" trial). There were two blocks of 48 trials, each with 24 no-switch and 24 switch trials presented in a pseudorandom order that ensured the same trial type was presented on no more than three consecutive occurrences. Again, a participant's data were excluded in full if their response accuracy was below 60%, and individual trials were omitted if RTs were ±3 SD of each participant's mean. A switch cost was computed by subtracting the mean RT of correct no-switch trials from correct switch trials, and a lower switch cost indexed better switching ability.
RMET
The RMET ( Baron-Cohen et al., 2001) was administered to measure individuals' accuracy in emotion perception. This task consisted of 36 trials, each presenting a photograph of a person's eyes portraying an emotional state. Each photograph was presented with four words of different emotions, and participants were required to indicate which word best described the emotion being portrayed by clicking on it with their computer mouse. Trials (photographs) were presented in a fixed order. Participants were encouraged to keep a dictionary to hand during this task to ensure they understood the meaning of infrequent emotion words (e.g., aghast). An index of emotion perception was computed by calculating accuracy across all 36 trials.
FPT
The FPT (Gregory et al., 2002) was administered as a measure of ToM. This task consisted of 20 trials, each presenting a vignette that described a social encounter between two or more characters. In 10 of the stories, a social faux pas occurred through the verbal or nonverbal behavior of a character (experimental trials); a faux pas is defined as a situation in which a speaker says something without considering if the listener wants to hear it, and which has negative consequences that the speaker did not intend. In the other half of stories, no such faux pas occurred (control trials). After each story, participants were first asked if a faux pas had occurred ("Did anyone say something they shouldn't have said?") to which they responded by selecting either "yes" or "no." If they reported to have detected a faux pas, they were then asked to identify the culprit ("Who said something they shouldn't have said or something awkward?") by typing a free response. Together, these two questions measured faux pas detection. Participants who detected a faux pas were then asked an additional four questions to assess their understanding of the source of the faux pas: why a faux pas had occurred, why someone had said something inappropriate or awkward, and how the faux pas had made the victim feel. Free-response answers to these three questions assess different aspects of social awareness, and were not considered in subsequent analyses. Finally, regardless of whether they had detected a faux pas, participants were asked two openended questions that assessed their comprehension of the story (e.g., "Who arrived late for the meeting?"), to which they provided a free response. A faux pas detection score was calculated as a ratio of experimental trials in which the participant correctly detected the presence of a faux pas and comprehended the story, to control trials in which they correctly detected the absence of a faux pas and comprehended the story (1.0 = perfect accuracy). Higher faux pas detection scores were used as an index of better ToM.
Task Reliability
Permutation-based split-half reliability estimates were calculated for each of the dependent measures of interest using the splithalf package in R (Version 0.8.2; Parsons, 2021), whereby the results of 5,000 random splits were averaged. Although reliability estimates are continuous, and arbitrary thresholds may therefore hinder their utility, to facilitate interpretation we adopt Koo and Li's (2016) guidelines.
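The same permutation-based estimate can be sketched in Python as below (the reported analyses used the splithalf R package); the column names are hypothetical and the Spearman-Brown correction is applied to each random split.

```python
import numpy as np
import pandas as pd

def permutation_splithalf(data: pd.DataFrame, n_splits: int = 5000) -> float:
    """data: trial-level scores with columns 'participant' and 'score'."""
    rng = np.random.default_rng(2023)
    estimates = []
    for _ in range(n_splits):
        half_a, half_b = [], []
        for _, trials in data.groupby("participant"):
            idx = rng.permutation(len(trials))
            scores = trials["score"].to_numpy()[idx]
            half_a.append(scores[: len(scores) // 2].mean())
            half_b.append(scores[len(scores) // 2:].mean())
        r = np.corrcoef(half_a, half_b)[0, 1]
        estimates.append(2 * r / (1 + r))          # Spearman-Brown correction
    return float(np.mean(estimates))
```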
Data Analysis Strategy
As described above, participants' data on each measure of executive function were excluded if their response accuracy was below 60% (KTT = 19, Stroop = 5, CSS = 3). For tasks utilizing RT as the primary measure, scores ±3 SD of the entire sample mean were considered outliers and were also excluded (Stroop = 4, CSS = 3, RMET = 2, faux pas = 6). For the final data set, each of the five dependent variables were z-scored to permit direct comparisons among the different units of measurement. Data were analyzed with the Statistical Package for the Social Sciences (V.26; IBM Corp, 2019). To examine if individuals with MS exhibited disruptions in executive function and/or social cognition compared with the HC group, two mixed-design ANOVA tests were conducted: a 2 (group: MS, HC) × 3 (task Executive : KTT, Stroop, CSS), and a 2 (group: MS, HC) × 2 (task Social : RMET, FPT). For both these ANOVAs, task was a repeated-measures factor that assessed specific differences between task performance, and group was a between-measures factor that assessed differences between MS and HCs. Bonferroni corrections were applied to pairwise comparisons.
Exploratory mediation analyses were then conducted using ordinary least-squares path analysis (PROCESS V.3.5; Hayes, 2013) to assess if measures of executive function that differed between the groups mediated group differences on measures of social cognition. While some scholars contend that mediation analyses are only appropriate for longitudinal data, in which a mediator transmits the influence of a predictor on an outcome variable in a clear temporal order, others suggest that such analyses are appropriate for cross-sectional data if (a) there is a theoretically driven prediction and (b) the measured variables reflect nearly instantaneous processes (see Fairchild & McDaniel, 2017). The use of mediation analyses in the present study satisfies both of these criteria; as outlined above, current theories predict that social cognitive disruptions are (instantaneous) manifestations of disturbances to foundational executive functions. Therefore, mediation analyses allowed us to explore if common or specific disruptions to executive functions account for a significant proportion of disturbances to social cognitive processes. A necessary component of mediation is a statistically and practically significant indirect effect (Preacher & Hayes, 2004). Indirect effects were assessed with 10,000 bias-corrected bootstrap 95% confidence intervals (CIs; see Preacher & Hayes, 2004; CIs that do not overlap with zero indicate a significant mediation model. Percent mediation is reported, which is the ratio of the indirect to the total effect (ab/c; Preacher & Kelley, 2011).
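The indirect effects were estimated with PROCESS; a simplified percentile-bootstrap analogue for a single mediator (not the bias-corrected intervals used in the reported analyses) can be sketched in Python as follows, with group coded 0 = HC, 1 = MS and hypothetical variable names.

```python
import numpy as np

def bootstrap_indirect(group, mediator, outcome, n_boot=10_000, seed=1):
    """Percentile bootstrap of the indirect effect a*b for one mediator."""
    rng = np.random.default_rng(seed)
    group, mediator, outcome = map(np.asarray, (group, mediator, outcome))
    n, ab = len(group), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        g, m, y = group[idx], mediator[idx], outcome[idx]
        # a path: group -> mediator; b path: mediator -> outcome given group
        a = np.linalg.lstsq(np.column_stack([np.ones(n), g]), m, rcond=None)[0][1]
        b = np.linalg.lstsq(np.column_stack([np.ones(n), g, m]), y, rcond=None)[0][2]
        ab.append(a * b)
    lo, hi = np.percentile(ab, [2.5, 97.5])
    return np.mean(ab), (lo, hi)    # mediation indicated if the CI excludes zero
```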
Results
With listwise deletion, participants with outlier scores on any measure of executive function or social cognition were removed from each ANOVA analysis. The lowest sample size was 238 participants, comprising 114 individuals with MS and 124 HCs. Sensitivity power analyses indicated that across all analyses, effect sizes of d > .32 could be detected with 80% power at α = .05.
The split-half reliability estimates for the dependent measures calculated from these participants in each task are shown in Table 2.
This shows moderate reliability for all measures except the switch cost, which was poor. Table 3 presents correlations among the dependent measures computed from each task, participant age, and self-reported disease duration (both expressed in years) for the MS group. Of particular interest, this shows that among the two measures of social cognition, accuracy on the RMET was correlated positively with that achieved on the FPT (increases in emotion perception were associated with increases in ToM); among the measures of executive function, accuracy on the KTT was correlated negatively with the interference effect shown on the Stroop task but positively with switch costs on the CSS task (increases in working memory associated with increases in response inhibition but decreases in switching), while Stroop and CSS task performance were not correlated significantly; and between measures of social cognition and executive function, a significant positive correlation was observed between accuracy on the RMET and performance on the KTT. Age was correlated positively with disease duration, and both age and disease duration were correlated positively with interference effects on the Stroop task, but both age and disease duration were correlated negatively with switch costs on the CSS task (increases in age and disease duration associated with decreases in response inhibition but increases in switching). To investigate this unexpected pattern of associations with switch costs, we performed a closer inspection of performance on the CSS task. This revealed that switch costs gradually disappeared and eventually reversed with the longer response latencies expressed by older adults in both the MS and HC groups. This opposes the pattern for interference effects, explaining these unexpected correlations (see supplemental Figure S1).
Exploratory Mediation Analyses
Since the MS group demonstrated poorer working memory and response inhibition relative to the HCs, two multimediator models were performed to assess the mediating effect of both executive functions on emotion perception and ToM. The first of these models revealed a significant indirect effect of KTT performance on the group difference in accuracy on the RMET (ab = .07, SE = .04, 95% CI [.01, .15]), indicating that working memory ability mediated poorer emotion perception for the MS group relative to HCs. This mediator accounted for 23% of the total (group) effect. The indirect effect of Stroop performance on the group difference in emotion perception was not significant (ab = .005, SE = .02, 95% CI [−.03, .05]), and this mediator (response inhibition) accounted for only 2% of the total effect. The second model revealed a significant indirect effect of KTT performance on the group difference in FPT accuracy (ab = .04, SE = .02, 95% CI [.002, .09]), revealing that working memory ability mediated the difference in ToM between the MS group relative to HCs. This mediator accounted for 20% of the total effect. Again, the indirect effect of Stroop performance on this group difference in ToM was not significant (ab = −.007, SE = .01, 95% CI [−.04, .02]), and this mediator accounted for only 4% of the total effect. Figure 3 illustrates these results.
Discussion
The present study investigated if the disturbance(s) to social cognition reported frequently in MS are underpinned by disruptions to executive functions. In line with our preregistered hypotheses and previous meta-analyses (Cotter et al., 2016; Johnen et al., 2017; Lin et al., 2021), a large sample of individuals reporting a diagnosis of MS showed poorer performance on measures of emotion perception, as measured with the RMET, and ToM relative to a group of age- and sex-matched HCs.
Table 3 Note. N = 112. Values present Pearson correlation coefficients, with square brackets containing upper and lower 95% confidence intervals. Correlation analyses were performed among z-scored accuracy scores for the RMET, FPT, and KTT and z-scored differences in reaction time between experimental and control conditions for the Stroop ("interference effect") and CSS tasks ("switch cost"; see text for details). MS = multiple sclerosis; KTT = keep track task; CSS = color-shape switching task; RMET = reading the mind in the eyes test; FPT = faux pas test. * p < .05. ** p < .01 (two-tailed).
Figure 2 A Box Plot Showing Group Performance on Each Task
Note. Middle lines present medians, and error bars depict upper and lower quartiles. Horizontal lines indicate tasks on which performance differed significantly between the two groups. MS = multiple sclerosis; HC = healthy control; CSS = color-shape switching task; KTT = keep track task; RMET = reading the mind in the eyes test; FPT = faux pas test. See the online article for the color version of this figure.
Furthermore, the MS group also displayed impairments in two measures of executive function, namely working memory and response inhibition. Exploratory mediation analyses revealed that working memory performance accounted for a considerable portion of the between-group difference in both emotion perception and ToM, but response inhibition did not. This indicates that disruptions to working memory in MS might serve as one of the mechanisms underpinning those observed in social cognition, supporting the notion that social cognitive impairments in MS are instantiations of alterations to fundamental executive processes. This study is certainly neither the first to reveal disruptions to the working memory and response inhibition components of executive function, emotion perception, and ToM aspects of social cognition, nor relationships among these sets of cognitive processes in MS (e.g., Drew et al., 2008; Dulau et al., 2017; Genova et al., 2015; J. D. Henry et al., 2009; Kraemer et al., 2013; Neuhaus et al., 2018; Ouellet et al., 2010; Raimo et al., 2017; for reviews, see Chiaravalloti & DeLuca, 2008; Islas & Ciampi, 2019; Langdon, 2011; Prakash et al., 2008; Sumowski et al., 2018). However, the present results extend these earlier findings by identifying a potential causal relationship; disruptions to working memory, but not response inhibition, mediated group differences in emotion perception and ToM, accounting for approximately 20% of the effects. Although we acknowledge that other factors are likely to explain additional variance in such group effects, such as education level, which was not measured in the present study (e.g., Ciampi et al., 2018; A. Henry et al., 2022), this finding could inform future research and clinical practice.
Figure 3 Note. Panel A shows that performance on the KTT, but not the Stroop task, mediates the group difference in accuracy on the RMET. Panel B shows that performance on the KTT, but not the Stroop task, mediates the group difference in accuracy on the FPT. KTT = keep track task; HC = healthy control; MS = multiple sclerosis; RMET = reading the mind in the eyes test; FPT = faux pas test; CI = confidence interval. * p < .05. ** p < .01. *** p < .001.
Considerable variability exists in the measures employed to assess neurocognitive functioning in MS, for both individual tests and multidomain batteries (Elwick et al., 2021). Although memory span is assessed commonly, working memory updating is rarely considered. As such, common neurological assessments will be unable to detect this specific domain of disruption, one that might interfere with a host of daily activities and, as we have shown, has the potential to impact negatively on cognitive systems supporting interpersonal behavior. Given the ease with which the KTT can be administered, and the resulting data can be analyzed, we encourage researchers and clinicians to incorporate this test into their routine neurological assessments. Furthermore, the present findings should guide future evaluations of cognitive rehabilitation programs. A number of such programs incorporate working memory training and have demonstrated their effectiveness in enhancing performance on outcome measures that require working memory updating, such as the Paced Auditory Serial Attention Test (for a review, see Sokolov et al., 2018).
If disruptions to working memory updating do indeed contribute to impairments in social cognitive processes, the beneficial effects of working memory training should transfer to measurable improvements in emotion perception and ToM.
Unlike previous studies that have shown poorer switching ability in individuals with MS compared with HCs (e.g., Ciampi et al., 2018;Drew et al., 2008), we observed no such performance detriments on the CSS. This might reflect the poor reliability of switch cost measurements that we acquired with this task, which have also been reported elsewhere (e.g., Sicard et al., 2022). In the present study, reliability may have been further compromised as a result of the pattern of responses expressed by our sample; switch costs gradually disappeared and eventually reversed with the longer response latencies expressed by older adults. With longer response latencies, switch costs will become more variable as a result of various subprocesses; for example, greater response-to-stimulus intervals permit longer preparation times, which are known to influence switch costs substantially (Monsell, 2003). Increases in measurement error such as these can obscure true effects. This emphasizes the importance of future studies reporting reliability estimates for the cognitive behavioral tasks they employ to permit comparisons with other research findings (see Parsons et al., 2019).
An explosion of research into MS over the past few decades has examined emotion perception and ToM abilities. Meta-analyses of this literature have reached conflicting conclusions with regard to the individual tests employed to assess these core components of social cognition; while Cotter et al. (2016) and Bora et al. (2016) report impaired performance in individuals with MS compared with HCs on the RMET but not the FPT, Lin et al. (2021) report a reliable difference in the performance of these groups on both measures. Importantly, however, both Cotter et al. (2016) and Bora et al. (2016) report small but potentially meaningful effects with regard to the FPT (Cohen's d ≈ .26). The present study observed that an effect size of similar magnitude was mediated fully by working memory performance, providing the first insight into possible mechanisms driving this small, but potentially impactful social cognitive disturbance. Even subtle disruptions to our capacity to infer others' beliefs, intentions, motivations, and perspectives on the world are likely to influence our behavior in interpersonal situations and, in turn, our social environment.
The link we have observed between working memory, response inhibition, and social cognition in MS is perhaps unsurprising when we consider their putative neural substrates. The dynamic updating of memory engages a frontoparietal brain network that transiently connects neural systems spanning dorsolateral prefrontal and parietal cortices (e.g., Menon & D'Esposito, 2022; Nee & Jonides, 2013; Uddin et al., 2019). Interestingly, then, meta-analytic data indicate that inferences about others' mental (e.g., intentional) states are supported by a partially overlapping network encompassing medial and lateral prefrontal and parietal cortices (Molenberghs et al., 2016; Schurz et al., 2013, 2021). Altered functional connectivity among nodes of the frontoparietal network is reported frequently in MS (for reviews, see Chard et al., 2021; Tahedl et al., 2018), likely resulting from widespread demyelination among constituent white matter tracts. Damage to the nodes and connecting tracts shared by networks supporting working memory processes and mental state inferences will have concomitant effects in these cognitive functions. Future research should investigate this by building on our behavioral data and assessing whether signatures of functional brain connectivity elicited during working memory updating, emotion perception, and/or ToM processes resemble one another, and if they are similarly (dis)connected in MS. This would go some way toward identifying biomarkers for the effects we have observed.
The method of online participant recruitment and data collection that we employed in the present study permitted us to not only acquire data from a well-powered sample, thereby overcoming the limitations of underpowered samples employed frequently in clinical studies (Lin et al., 2021), but also to capture the natural distribution of different MS disease courses. In the MS group, 80% reported relapsing-remitting MS, 12% secondary or primary progressive MS, and 7% clinically isolated syndrome. This converges with formal prevalence estimates (e.g., Benedict et al., 2020; Engelhard et al., 2022; Nazareth et al., 2018), which is important when we consider differences in the prevalence of cognitive symptoms presented in these phenotypes; estimates are 30%-45% in relapsing-remitting MS, 50%-75% in secondary-progressive MS, and 20%-25% in clinically and radiologically isolated syndrome (Benedict et al., 2020). Similarly, 73% of our MS sample were female, converging with global ratios (Walton et al., 2020). Furthermore, correlations (or lack thereof) among demographic, clinical, and performance variables in the present sample align closely with those reported in clinical studies: Self-reported disease duration was unrelated to either measure of social cognition (e.g., Drew et al., 2008; Dulau et al., 2017; Neuhaus et al., 2018; for reviews, see Cotter et al., 2016; Lin et al., 2021), but age and disease duration were correlated with response inhibition and switching (Ciampi et al., 2018; but see Drew et al., 2008), likely reflecting the reliance on processing speed in the Stroop and CSS tasks (for a review, see Vallesi et al., 2021). Finally, through this method of recruitment, we were able to acquire data from individuals with MS residing across 20 different European (e.g., Ireland and Germany) and non-European countries (e.g., the United States and South Africa), and a range of ethnicities. Although the vast majority of our MS and HC samples reported to be White, somewhat limiting the generality of the present findings, this demonstrates the utility of online methodology for research into cognitive function in MS.
We are not, of course, claiming that self-report data acquired online present an alternative to controlled clinical assessments. Although we restricted our analyses to data acquired from participants who passed two separate attention checks administered at different points of the procedure and excluded from our analyses any data that might indicate careless responding (i.e., outliers), it is important to acknowledge some potential limitations of this approach to data acquisition. First, we cannot know about the conditions in which participants complete tasks administered online. It is entirely possible that environmental distractions could have influenced participants' performance, though this is unlikely to have exerted a systematic influence on the between-group differences we have observed. Second, in the sample of individuals with MS that we recruited, 10% were unwilling or unable to state their current diagnosis, and without clinical records, we are unable to verify the reports of those who did declare this information. This data acquisition method also prevented us from collecting objective measurements of disease severity or depressive symptoms. Although several studies have reported that neither clinical characteristic is reliably correlated with executive functions (Ciampi et al., 2018; A. Henry et al., 2022; Johnen et al., 2017; Raimo et al., 2017; but see Dulau et al., 2017) nor with social cognition (Dulau et al., 2017; J. D. Henry et al., 2009; Kraemer et al., 2013; Neuhaus et al., 2018; for reviews, see Bora et al., 2016; Cotter et al., 2016; Lin et al., 2021), it is important that these clinical data are collected and reported if we are to eventually develop precise characterizations of cognitive syndromes that can occur at different disease stages. Furthermore, although we used prescreening criteria on Prolific to ensure that the study was available only to individuals who reported no "mild cognitive impairment/dementia," this did not preclude volunteers with preclinical dementia. The accurate identification of such individuals requires sensitive global cognitive screening assessments. For these reasons, we stress that the findings of this study should be treated as preliminary and in need of replication in studies that administer our publicly available assessment battery under tightly controlled experimental conditions and on individuals for whom clinical records and screening assessments are available.
Rather than focusing on isolated deficits, in the present study, we administered a broad battery of experimental tasks that allowed us to explore multiple aspects of executive function and social cognition as well as their interrelationships and dependencies simultaneously in a within-subject manner. To build upon previous research on social cognitive disruptions in MS and guide future studies, we measured each executive function and social cognitive process with tasks and performance indices (i.e., relative response times and accuracies) used commonly in the literature. However, such crude metrics can only offer limited insights into each of these seemingly complex cognitive operations. This is true especially when examining accuracy across all items of the RMET; recent meta-analyses have shown this task to have a multidimensional structure, in which subsets of items appear to assess different aspects of social cognition (Higgins et al., 2022). Similarly, responses to subsets of items on the FPT task can be combined to assess different dimensions of social awareness (e.g., understanding others' intentions and empathic awareness). To achieve even more precise characterizations of the social cognition disturbances and underpinning disruptions to executive functions in MS, future studies should build upon our findings and assess dependencies among the constituent dimensions of these and other tasks measuring components of cognitive function.
Conclusion
Consistent with a growing body of research, our findings from an online sample show that MS can result in disruptions to core social cognitive capacities crucial for maintaining a healthy social environment; specifically, poorer emotion perception and ToM. Moreover, we provide preliminary evidence that such impairments to social cognition are underpinned partly by disruptions to a specific executive function-working memory. These results should guide further research into the interdependent and possibly causal relationships between this and other executive functions and social cognitive processes. | 2023-07-01T06:16:09.033Z | 2023-06-29T00:00:00.000 | {
"year": 2023,
"sha1": "914410c98a5af6394bf8a888e1f3cfcb598d6697",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1037/neu0000917",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bda8574032958c5ed07fa878145b9f090f0be670",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266224508 | pes2o/s2orc | v3-fos-license | Perioperative complications of laparoscopic inguinal hernia repair in India: a prospective observational study
Purpose These days laparoscopic inguinal hernia surgery, both totally extraperitoneal (TEP) and transabdominal preperitoneal (TAPP), is a commonly performed procedure due to advancements in laparoscopic instruments and the availability of skilled laparoscopic surgeons. The purpose of this study was to compare the perioperative complications of these two procedures. Methods This was a prospective observational study between July 2019 and December 2020. Perioperative complications were compared with a 6-month follow-up. It included 144 patients, of whom 71 underwent TAPP repair and 73 underwent TEP repair. The selection was based on the surgeon’s choice. Results Early postoperative complications were scrotal edema (12 cases in TEP and 16 in TAPP), urinary retention (one case in TEP), ecchymosis (six cases in TEP and two in TAPP), and scrotal subcutaneous emphysema (two cases in TEP). On follow-up, seroma was found in a total of 22 cases, of which 12 were TEP and 10 were TAPP. While only one case of TAPP developed surgical site infection. There was no statistically significant difference in hospital stay between the two groups (p = 0.58). The pain scores significantly decreased throughout recovery and were comparable between the groups. Neither group experienced a recurrence during the 6-month follow-up. Fifty-eight patients developed Clavien-Dindo grade I complications, one had grade II, and three had grade IIIa complications. Conclusion With the increasing experience of the surgical fraternity in laparoscopic surgery, TEP and TAPP were proven to be comparable in terms of duration of surgery, postoperative complications, hospital stay, pain scores, and recurrence during the 6-month follow-up.
Totally extraperitoneal (TEP) and transabdominal preperitoneal (TAPP) mesh hernioplasty are the most common laparoscopic procedures for inguinal hernia patients these days. There are different opinions about intraoperative complications, postoperative course, and recurrence in various studies on laparoscopic hernia repair (LHR). The search for the best approach with minimal complications and recurrence in laparoscopic inguinal hernia surgery is still going on. In LHR, the experience of the surgeon plays a great role in early recovery, less pain, and fewer complications [1]. The present study was conducted with the objective of comparing intraoperative, immediate, early, and late postoperative complications of TEP and TAPP mesh hernioplasty performed by experienced laparoscopic surgeons with a 6-month follow-up.
We also tried to find a better approach out of these two with respect to complications and recurrence in the 6-month follow-up period. The current study allowed the surgeon to choose the type of surgery to be performed on the patient so that the best results of a particular approach could be delivered to a patient as per the surgeon's clinical judgment and skill.
METHODS
This was a single-center, prospective, observational study of 144 patients (aged >18 years) who underwent LHR for groin hernias between July 1, 2019 and December 31, 2020, performed by seven experienced surgeons in our institution. Our institution is a high-volume center, with 20 to 25 LHRs performed each month. Seven surgeons performed these surgeries. Each surgeon had performed more than 300 laparoscopic hernia surgeries (TEP and TAPP) over more than 5 years. The selection of the technique, either TEP or TAPP, was based on the surgeon's preference. Patients with recurrent hernias, complicated inguinal hernias (i.e., obstructed or strangulated), laparoscopic hernia approaches converted to an open procedure, patients unfit for general anesthesia, patients with morbid obesity, and patients with any other immunocompromised state like being human immunodeficiency virus-positive or any other risk factors for impaired healing like diabetes mellitus were excluded from this study. All patients were thoroughly questioned and examined individually on an outpatient department basis and on admission. They were admitted to our hospital 1 day before surgery or on the morning of surgery. The preanesthetic evaluation was performed by the corresponding anesthesia team. Part preparation was done using a hair clipper from the umbilicus to the mid-thigh. The procedure was performed with the patient under general anesthesia. Urinary bladder catheterization was done with a 14-French Foley catheter in all patients after induction.
The patients were placed in the supine position with both arms by the patient's side in bilateral repair or the contralateral arm by the patient's side in unilateral repair. A single-dose injection of cefuroxime (1,500 mg) after a skin test was given intravenously as antibiotic prophylaxis preoperatively. Both TEP and TAPP were performed as per the three-port position and standard procedural guidelines with a 14 × 13-cm polypropylene mesh. The mesh was fixed with absorbable tackers at the level of Cooper's ligaments and the anterior abdominal wall muscles. The peritoneum was closed with a V-Loc 180 (size, 3-0; Covidien) 15-cm absorbable polyglyconate knotless wound closure device. Adequate scrotal support was advised, and its application was ensured starting in the immediate postoperative period.
Twenty-four hours was considered the immediate postoperative period, and postoperative days 1 to 7 were considered the early postoperative period. The urinary catheter was removed on the morning after surgery, and the patient was closely monitored for any urinary complaints, if present. Pain scores were recorded at 6 hours after the operation, at the time of discharge, and during follow-up based on a visual analogue scale (VAS) where 0 indicated no pain and 10 indicated the worst possible pain. The follow-up of patients was done at 1-week, 1-month, 3-month, and 6-month intervals. The complications were graded according to the Clavien-Dindo (CD) classification system.
Statistical analysis
The data were analyzed using Stata version 14 (StataCorp).
Continuous and normally distributed data like age, body mass index (BMI), symptom duration, and operation duration were presented as mean ± standard deviation. Categorical data like sex, American Society of Anesthesiologists (ASA) physical status (PS) classification, and hernia characteristics were presented using number (%). Continuous data not following a normal distribution, like pain VAS scores and hospital stay, were presented using the median and interquartile range (IQR).
Continuous variables were compared by the Student t test (when following the normal distribution) and the Wilcoxon rank-sum test (when not following the normal distribution). Within a group, pain VAS scores were compared with repeated-measures analysis of variance (ANOVA). Categorical variables were compared by the chi-square test or Fisher exact test. A p-value of <0.05 was considered statistically significant.
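The analysis itself was carried out in Stata. Purely as an illustration of the kinds of comparisons described above, the following Python/SciPy sketch applies the same test choices to hypothetical group data; the continuous arrays are simulated, while the 2×2 seroma counts are taken from the Results section, and none of this is the authors' code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical continuous outcome (e.g., age) for the TEP (n=73) and TAPP (n=71) groups
tep_age = rng.normal(46, 17, 73)
tapp_age = rng.normal(46, 17, 71)
print(stats.ttest_ind(tep_age, tapp_age))     # Student t test for normal data
print(stats.mannwhitneyu(tep_age, tapp_age))  # rank-sum test for non-normal data

# Categorical outcome (seroma yes/no) as a 2x2 table, counts from the Results
table = np.array([[12, 61],   # TEP: seroma / no seroma
                  [10, 61]])  # TAPP: seroma / no seroma
print(stats.chi2_contingency(table)[1])  # chi-square p-value
print(stats.fisher_exact(table)[1])      # Fisher exact p-value
```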
RESULTS
A total of 180 patients underwent LHR during the study period.
Of them, 144 patients were included in the study after applying exclusion criteria: 71 patients (49.3%) were selected by the operating surgeon for TAPP repair and 73 (50.7%) for TEP repair.
A study flow chart is shown in Fig. 1.
Demographic profile
The demographic profile of the patients included in the study is shown in Table 1. The mean age of patients was 46.38 ± 16.98 years. The mean BMI was 26.48 ± 4.41 kg/m2 in the TEP group and 27.52 ± 4.29 kg/m2 in the TAPP group; the difference was not statistically significant. The study found no statistically significant difference in the mean duration of symptoms or the ASA PS classification between the TEP and TAPP groups.
Most of the hernias were unilateral (81.3%), and the rest were bilateral (18.7%). The majority of the patients in both groups had an incomplete hernia (bubonocele or funicular hernia); more patients with a complete hernia (inguinoscrotal hernia) underwent TAPP repair (p = 0.01). There was no statistical difference in the reducibility of hernias among the patients undergoing TEP and TAPP repair (p = 0.06).
Intraoperative complications
There was no statistical difference in mean operative time between TEP and TAPP (p > 0.05). Three patients required open assistance in the reduction of their sac contents (indirect hernia) via an inguinal incision, and these cases were excluded from the current study. The difference in mean blood loss between the two groups was not statistically significant (p = 0.28). No major visceral, vascular, or vas deferens injury was encountered during the study. All intraoperative complications are shown in Table 2. An intraoperative peritoneal breach was managed either by increasing the CO2 flow rate or by peritoneal decompression using a Veress needle puncture at Palmer's point.
Postoperative complications
Scrotal edema was documented in 12 patients (16.4%) and 16 patients (22.5%) who underwent TEP and TAPP repair, respectively. All of these resolved spontaneously by the end of 1 week with the application of scrotal support. The difference was found to be of no statistical significance (p = 0.40). Postoperative complications are shown in Table 3. Operative site skin ecchymosis was noted in eight patients (5.6%); of them, six were in the TEP group and two in the TAPP group. Seroma was noticed at the 7-day follow-up. Twelve patients (16.4%) who underwent TEP repair developed seroma, compared with 10 (14.1%) in the TAPP group. Of these, only two patients (2.8%) in the TAPP group required a one-time aspiration during the follow-up. The rest of the patients were managed with supportive care. The difference in incidence did not have any statistical significance (p = 0.82) (Table 3). Only one patient (1.4%) in the TAPP group had a wound infection at the 7-day follow-up; it was a case of superficial surgical site infection (SSI). Pain VAS score findings are summarized in Table 5. Differences in pain VAS scores between the TEP and TAPP groups were not statistically significant, but within each group there was a significant improvement in the pain VAS score at each postoperative follow-up. The repeated-measures ANOVA revealed a p-value of <0.01 and an effect size of 0.99. There was significant improvement noted at each point in time, as revealed through the pair-wise comparison with p < 0.01. None of the patients enrolled in the study developed recurrences during the 6-month follow-up period.
DISCUSSION
Currently, TEP and TAPP are the two standard techniques practiced worldwide. Several studies compare the two techniques by randomly assigning patients to each group. As aforementioned, the outcomes are undeniably dependent on the surgeon's learning curve, which interferes with the interpretation of results by acting as a confounder, especially when the study population is operated on by a team of consultants at various stages of learning [1]. In this study, seven experienced laparoscopic surgeons were allowed to choose the procedure based on their clinical judgment and skill. The mean age in our study was 46.38 ± 16.98 years. This result closely correlates with two randomized controlled trials conducted previously [2,3].
In addition, 64.1% of our hernias were right-sided. In their study on the Indian population, Krishna et al. [2] reported that the majority of the hernias (62.3%) were right-sided.
This distribution matches the above-stated study and similar studies conducted in other countries [1,3,4,5]. There appears to be a higher rate of visceral (especially urinary bladder) and vascular injury in laparoscopic repair when compared to open surgery, especially with TAPP [6,7]. Nonrandomized trials of TEP and TAPP showed that the inferior epigastric vessels are the most frequently injured among vascular injuries, with only one case of iliac vessel injury [7]. Another study observed that TEP and TAPP had similar epigastric vessel bleeding rates [8].
Our study encountered no visceral or major vascular injuries because experienced surgeons performed minimal and precise dissections. They were well aware of the plane of dissection and the major vessels. We also excluded patients with a recurrence or a history of previous groin surgeries.
A pooled estimate from the systematic review by Hung et al. [8] showed that TEP resulted in lower scrotal and cord edema rates in the immediate postoperative period and at 1 week after surgery. On the other hand, a study by Krishna et al. [2] reports significantly higher scrotal edema in the TAPP group (34%) compared to the TEP group (9.4%). Our study found a 19.4% incidence of scrotal edema and no statistical difference between these groups. So, the incidence of scrotal edema is comparable across different approaches in experienced hands. Seroma formation is a natural process that cannot be completely prevented following laparoscopic inguinal hernioplasty, especially in patients with direct and large indirect inguinal hernias. In one study, the range of seroma formation was between 0.5% and 12.2% after TEP repair, and between 3% and 8% for TAPP [9].
Krishna et al. [2] reported an incidence of seroma of up to 28% after the first postoperative week, predominantly in the TEP group, but only 5.0% at the end of the first month, and most of the seromas resolved without any intervention. In our study, seroma incidence was 15.3%, with no statistical difference in incidence between the two procedures. Our findings agree with those of Aiolfi et al. [4], one of the recently published meta-analyses. All patients with seroma in our research improved with time, with the exception of two patients.
So, we can postulate that the more experienced the surgeon is, the better the dissection and the lower the rate of seroma formation.
At our institute, we routinely give a single dose of antibiotic before surgery as routine surgical antibiotic prophylaxis, according to the National Institute for Health and Care Excellence guidelines [10]. Cai et al. [11], in their study on SSIs after inguinal hernia repair in low- and middle-HDI countries, including six studies from India, found that LHRs had a weighted pooled SSI rate of 0.4 infections per 100 laparoscopic repairs. Aiolfi et al. [4] found no difference between TEP and TAPP repair in terms of postoperative SSI. In our study, we encountered no deep-space or mesh infections. We had one case of superficial SSI, which amounted to an SSI rate of 0.7 infections per 100 laparoscopic repairs.
Chen et al. [12], in their meta-analysis, analyzed the outcomes of TEP and TAPP repair and found that the short-term postoperative pain scores were significantly lower in the TEP group, whereas the scores beyond 6 months were comparable in both groups. On the other hand, Wei et al. [5] found no significant difference in short-term postoperative pain scores between TEP and TAPP in their meta-analysis. In our study, after 6 months of follow-up in both groups, the median VAS score was 1, reflecting careful and minimal dissection by experienced laparoscopic surgeons. Meta-analyses comparing TEP to TAPP revealed that the recurrence rates were comparable between the two groups [2,5,8,12]. They also found evidence to support the conclusion that the surgeon's experience had a significant impact on the recurrence of the hernia [7,13,14]. In this study, procedures were performed by surgeons with experience above the minimum of 50 LHRs recommended by Bracale et al. [15].
There are some limitations of the current study. This study is a small-scale observational study at a single institute, restricted in timespan due to academic obligations and in the number of study subjects by the ongoing SARS-CoV-2 pandemic. We observed no recurrences in 6 months. However, it is too early in the course to determine the recurrence rates accurately. Overall, this study has shown that in the hands of an experienced surgeon, the results of both LHR approaches in terms of complications, duration of surgery, and hospital stay are good and comparable.
Both TEP and TAPP, performed by experienced hands, were comparable in terms of duration of surgery, postoperative complications, hospital stay, pain scores, and recurrence during the 6-month follow-up. The most commonly encountered postoperative complications in our study were scrotal edema (19.4%) and seroma formation (15.3%).
Notes
Ethical statements
Ethical approval was obtained from the Institutional Ethics Committee of the All India Institute of Medical Sciences, New Delhi (No. IECPG-373/29.05.2019). The informed written consent of all patients was obtained prior to the commencement of the study.
Table 1.
Baseline patient characteristics in the TEP and TAPP groups. Values are presented as number only, mean ± standard deviation, or number (%). TEP, totally extraperitoneal; TAPP, transabdominal preperitoneal; ASA, American Society of Anesthesiologists; PS, physical status.
Table 2 .
Intraoperative results of patients in the TEP and TAPP group
Table 3 .
Postoperative complications of patients in the TEP and TAPP group
Table 4 .
Postoperative complications in TEP and TAPP group with intervention done and the Clavien-Dindo classification grading
Table 5 .
Pain VAS score in postoperative period and during follow-up in TEP and TAPP group | 2023-12-16T12:45:57.651Z | 2023-12-15T00:00:00.000 | {
"year": 2023,
"sha1": "874e9de0679e9e27bb3b2c5d48b68bd96f41aeb6",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "88a6bf541957c68db602e708134aeef2f5d8bdcd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
85599221 | pes2o/s2orc | v3-fos-license | Bacteriological analysis of drinking water sources
The quality of potable water and the treatment of waterborne diseases are critical public health issues. Bacterial contamination of drinking water sources is the most common health risk. This study determined the bacteriological quality of drinking water sources in Serbo town, southwest Ethiopia. A cross-sectional study on the bacteriological analysis of drinking water was conducted in Serbo town from September to October, 2010. A 100 ml water specimen was collected from each water source and transported by cold chain for testing to the laboratory of the department of medical laboratory sciences and pathology. The water samples were tested using the multiple tube technique on OXOID MacConkey Broth (Oxoid Ltd, Basingstoke, Hampshire, England) for presumptive coliform counts, followed by Escherichia coli confirmation. A total of twenty-four drinking water samples were analyzed. Eighteen (75%) were from unprotected wells and the remaining six (25%) were from protected wells. Twenty-one of the samples (87.5%) had presumptive coliform counts above the permissible limits for drinking water. The majority of the water sources were not safe for drinking. Hence, regular disinfection of drinking water sources needs to be carried out.
INTRODUCTION
Water is one of the most important elements for all forms of life. It is indispensable in the maintenance of life on earth. It is also essential for the composition and renewal of cells. Despite this, human beings continue to pollute water sources, provoking water-related illnesses (Ethiopian Federal MOH, 2004; WHO, 2008).
Diseases related to contamination of drinking-water constitute a major burden on human health. The most common and widespread health risk associated with drinking-water is microbial contamination. Up to 80% of all sicknesses and diseases in the world are caused by inadequate sanitation, polluted water or unavailability of water. According to a 2006 report of the World Health Organization (WHO), approximately three out of five persons in developing countries do not have access to safe drinking water and only about one in four has any kind of sanitary facilities. Water may also play a role in the transmission of pathogens which are not faecally excreted. Contamination of drinking water with a type of Escherichia coli known as O157:H7 can be fatal. Many microorganisms are found naturally in fresh and saltwater (WHO, 1996; Amira, 2011). The microbiological quality of drinking water has attracted great attention worldwide because of implied public health impacts (Amira, 2011). Total and fecal coliforms have been used extensively for many years as indicators for determining the sanitary quality of water sources. Waterborne outbreaks are the most obvious manifestation of waterborne disease.
In Ethiopia, over 60% of the communicable diseases are due to poor environmental health conditions arising from unsafe and inadequate water supply. Frequent examinations of faecal indicator organisms remain the most sensitive way of assessing the hygienic conditions of water. Fecal coliforms have been seen as an indicator of fecal contamination and are commonly used to express the microbiological quality of water and as a parameter to estimate disease risk. The most probable number (MPN) method is a typical test for fecal coliforms (Mengesha et al., 2004).
In 2007, 74% of Ethiopia's population lacked access to safe drinking water. Although urban coverage is around 80%, the majority of the population (89%) live in rural areas, where most reports suggest that fewer than 12% have access to potable water. Only 19% of the rural population has access to safe drinking water supplies (Government of Ethiopia, 2007).
The provision of a safe and adequate water supply for the population has far-reaching effects on health, productivity, and quality of life, as well as on the socioeconomic development of the nation. Therefore, this study determined the quality of water sources and the extent of contamination in the study area, which will help the concerned bodies in taking intervention actions and will provide baseline information for further study.
Study design and period
A cross sectional study was conducted on drinking water sources to assess the extent of bacterial contamination from September to October, 2010 in Serbo town, south west Ethiopia.
Study area
The study was conducted in Serbo town. Serbo is found in Jimma zone, Kersa woreda; the town is located 325 km southwest of the capital, Addis Ababa, and 19 km from Jimma town. Jimma is the largest city in southwestern Ethiopia, located in the Jimma zone of the Oromia region with 17 woredas. Based on figures from the Central Statistical Agency (CSA, 2007), the zone has an estimated total population of 2,495,795, of whom 1,255,130 are men and 1,240,665 are women; 141,013 (5.6%) of its population are urban dwellers (CSA, 2007).
Data collection and processing
From each water source, a 100 ml sample of water was collected. The water was collected using sterile bottles and transported for testing immediately to the department of medical laboratory science and pathology laboratory in ice-cold containers within 50 min of collection. All communal public water sources and twenty randomly selected privately owned water sources were included. The water samples were tested by the multiple tube technique using OXOID MacConkey Broth (Oxoid Ltd, Basingstoke, Hampshire, England). First, the 100 ml of water from each sample was distributed as one 50 ml portion and five 10 ml portions into bottles of sterile selective culture broth containing lactose and an indicator, which were incubated at 44°C for 24 h. After incubation, the number of bottles in which lactose fermentation with acid and gas production had occurred was counted. Finally, by referring to probability tables, the MPN of coliforms in the 100 ml water sample was estimated (Cheesbrough, 2006).
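In this study the MPN was read from probability tables (Cheesbrough, 2006). As a rough illustration of how an MPN per 100 ml can be approximated from the same tube counts, the sketch below uses Thomas' simple formula; the function name and example counts are hypothetical, and the result will not exactly match tabulated values.

```python
import math

def thomas_mpn(positive_tubes, ml_in_negative_tubes, ml_in_all_tubes):
    """Thomas' approximation: MPN per 100 ml from a multiple-tube test."""
    return (positive_tubes * 100) / math.sqrt(ml_in_negative_tubes * ml_in_all_tubes)

# Example: one 50 ml bottle and five 10 ml tubes (100 ml total), with the
# 50 ml bottle and three of the 10 ml tubes showing acid and gas production.
positives = 4
ml_negative = 2 * 10      # sample volume in the two negative tubes
ml_total = 50 + 5 * 10    # total inoculated sample volume
print(round(thomas_mpn(positives, ml_negative, ml_total), 1))  # about 8.9 MPN per 100 ml
```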
Ethical consideration
Permission from municipality of the town for public water source samples and consent from private water source owners were obtained before water sample collection.
RESULTS AND DISCUSSION
Twenty-four water samples were collected from the study area. Six were from protected wells and eighteen were from unprotected wells. Of the six protected wells, four were publicly owned and the remaining two were privately owned. None of the water sources had regular treatment. Of these water sources, 87.5% (21/24) had presumptive coliform counts (MPN) above the permissible limits for drinking water. Analysis of the protected wells demonstrated that three of the six samples had total coliform counts of more than 10 per 100 ml of water, and all three of these had E. coli (Table 1). On the other hand, analysis of the unprotected wells revealed that all eighteen of the samples had total coliform counts greater than 10 per 100 ml. E. coli was confirmed in all of the unprotected wells. However, of the total samples, only one had a fecal coliform count of zero (Table 1). Both protected and unprotected wells were contaminated by fecal coliforms, particularly E. coli. In total, there was only one water source of excellent type, two acceptable, nine unacceptable, and twelve grossly polluted (Table 1).
Out of twenty four analyzed wells, seventeen of them were located downhill and the rest of the water sources were located above hill.All the wells located below hill had total coliform count of more than 10 per 100 ml of water (Table 1).Of the total analyzed samples, only three had acceptable fecal coliform count (less than 10 MPN per 100 ml of water), from these one source was in an excellent range and two of them were within an acceptable range.All the three of these samples were collected from protected wells (Table 1).
In relation to the distance of water sources from latrines, 79.2% of water sources were found at a distance of less than 30 m, which is below the WHO-recommended minimum distance that should exist between a latrine and a water source. On top of this, the majority (54.2%) of water sources were without a cover. Of the eleven water sources with a cover, 27% were safe for drinking; on the other hand, all the wells without a cover had fecal coliform counts of more than 10 and were hence unsafe for drinking. A supply of water that poses no threat to the consumer's health depends on continuous protection. Because of the human frailty associated with protection, priority should be given to the selection of the purest source. Polluted sources should not be used unless other sources are economically unavailable. Ensuring the bacteriological quality of drinking water sources is a vital public health function. Regular examination of water quality for the presence of organisms, chemicals, and other physical contents provides information on the level of safety of the water. Frequent examinations of fecal indicator organisms remain the most sensitive way of assessing the hygienic conditions of water (World Health Organization, 2003).
This research measures only microbial water quality by using E. coli as an indicator for fecal pollution.As a limitation, the physiochemical analysis was not done due to logistics constraints.However, we believe that the information obtained about fecal contamination of the water sources at Serbo town is the first in its kind and revealed the hygienic condition of water sources which are used by the community.
In this study 87.5% of wells have MPN of E. coli above the allowable limit.This indicates that majority of the water sources of Serbo town were fecally polluted.In comparison with a study conducted in Uganda, 2002 which showed that 90% samples had exceeded the WHO guideline (Haruna et al., 2005), the finding of this study was consistent.However as compared with a study conducted in North Gondar 2000 on unprotected wells and springs, the finding of this study was a little bit higher.This might be associated with the majority of water sources included in this study were unprotected (Mengesha et al., 2004).On the other hand as compared with a study done in Sudan Darfur 2011 to investigate drinking water quality, our finding showed higher percentage of MPN above allowable limit.This might be associated with the type of water sources difference in two communities (Amira, 2011).
If we compare the finding of this study with a study conducted in Jimma town in 2005, it showed that 95.8% of samples were unacceptable or grossly contaminated.The finding of this study (87.5%) was lower.This difference in percentage might be due to variation in methods used.The presence of fecal coliforms and E. coli in almost all of water sources were demonstrated in this study.Accordingly the potability and safety of these sources was questionable.As it is shown in a study conducted in Lesotho Highlands, adequate protection of water sources could improve the hygienic quality of water sources (Kravitz et al., 1999).
In our study, of the total of twenty-four analyzed samples, there were three water sources with an MPN of less than 10 per 100 ml of water. All three of them were from protected wells, whereas no water source from the unprotected sources had an MPN of less than 10 per 100 ml, showing that protected wells are safer than unprotected sources.
According to research conducted in southwestern Saudi Arabia in 2009 (AlOtaibi, 2009) and in Tamil Nadu in 2006 (Rajendran et al., 2006), all well water sources were positive for coliforms using the MPN method, whereas in our study, one well was free of total coliforms. The gap might be due to the protection of wells. The appropriate location of wells with respect to latrines needs to be above the hill (Ethiopian Federal MOH, 2004). Of a total of twenty-four analyzed water sources, seventeen (70.8%) of the wells were located below the hill and seven (29.1%) were located above the hill. This greater percentage of wells located below the hill might have contributed to the larger number of unsafe water sources, as contamination has a chance to leak into the wells.
Conclusion
In conclusion, the majority of the water sources had unacceptable total coliform counts, and all the water sources that were positive on the presumptive coliform count had E. coli, showing fecal contamination of the water sources. We recommend regular disinfection of drinking water sources, periodic bacteriological appraisal of drinking water sources, and the construction and distribution of piped water.
Table 1 .
Indicator bacteria count and possible factors of water source contamination in Serbo town, Jimma zone, Ethiopia 2010. | 2018-12-19T01:00:11.324Z | 2011-09-16T00:00:00.000 | {
"year": 2011,
"sha1": "1f233910aa4ebe574f1397659c80bf53ea0c9e5f",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/AJMR/article-full-text-pdf/17535DA12621",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "1f233910aa4ebe574f1397659c80bf53ea0c9e5f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
91548353 | pes2o/s2orc | v3-fos-license | Rigid Foot Soles Improve Balance in Beam Walking
Maintaining balance while walking on a narrow beam is a challenging motor task. This is presumably because the foot’s ability to exert torque on the support surface is limited by the beam width. Still, the feet serve as a critical interface between the body and the external environment, and it is unclear how the mechanical properties of the feet affect balance. Here we examined how restricting the degrees of freedom of the feet influenced balance behavior during beam walking. We recorded whole-body joint kinematics of subjects with varying skill levels as they walked on a narrow beam with and without wearing flat, rigid soles on their feet. We computed changes in whole-body motion and angular momentum across these conditions. Results showed that wearing rigid soles improved balance in the beam walking task, but that practice with rigid soles did not affect or transfer to task performance with bare feet. The absence of any after-effect suggested that the improved balance from constraining the foot was the result of a mechanical effect rather than a change in neural strategy. Though wearing rigid soles can be used to assist balance, there appear to be limited training or rehabilitation benefits from wearing rigid soles.
INTRODUCTION
Whether walking over rocks or across logs, humans have a remarkable ability to maintain balance while navigating difficult terrain 1,2. In fact, healthy humans are so proficient in their ability to balance that some turn to walking along a thin wire to truly challenge their skills. While there has been prolific research on the control of postural balance over the past decades, this work has largely focused on understanding how humans maintain balance during quiet standing 3-8. Despite many insights into the limits of postural balance, it is still an open question how the central nervous system controls the highly redundant and extremely complex architecture of the body to maintain balance during more realistic locomotion, especially in challenging environments.
A paradox of human motor control is that while the human body is vastly complex (e.g., large number of degrees of freedom, long time delays, sensorimotor noise 9, nonlinear muscle properties, intersegmental dynamics), the overt behavior is often surprisingly simple in structure. Thus, low-dimensional models, derived by compressing the number of degrees of freedom in the body, can be used to competently describe human balance. For example, an inverted pendulum can adequately capture much of the behavior that humans exhibit during quiet stance 8,10. When the base of support is reduced, such as in the case of standing on a narrow beam, adding a second linkage to make a double-inverted pendulum model has proven sufficient 8,10,11. In a recent study of walking on a beam, Chiovetto et al. 12 allowed participants to freely move their arms during the experiment with the goal to look at the full complexity of the realistic behavior. How does the nervous system control the high-dimensional architecture of the entire body to generate such low-dimensional patterns?
A critical aspect of maintaining balance is managing the physical interaction between the body and its external environment. Because the feet serve as interfaces through which the body and ground simultaneously act upon each other, they play a pivotal role in maintaining balance. As seen in the development of prosthetics, the mechanical properties of the foot can significantly influence balance behavior [13][14][15]. And yet, how the complex architecture of the human feet contributes to balance is still poorly understood. Each foot consists of many articulated, rigid segments which are surrounded by compliant, heterogeneous tissue, making it difficult to accurately measure and model the subtle coordinated behavior of the foot [16][17][18]. Paradoxically, most models of human balance drastically simplify the foot. In the inverted pendulum models of standing balance, the foot is typically reduced to a static, rigid segment attached to the ground acted upon by an ideal torque source at the ankle. Thus, the foot's influence on human balance, particularly during walking, remains understudied.
The aim of this study was to understand how the degrees of freedom of the foot and the ankle contribute to maintaining mediolateral (ML) balance when walking on a narrow beam (Figure 1a). Both feet were constrained by attaching a flat, rigid sole to the bottom of each foot (Figure 1b). The rigid sole prevented any motion of the foot joints distal to the ankle, namely bending at the midfoot and torsion on the long axis of the foot.
Importantly, plantarflexion/dorsiflexion and inversion/eversion ankle motion was not 20 affected. 21
22
On the one hand, a highly flexible foot may be critical for actively sensing and controlling the physical 23 interaction between the foot and the beam. Sawers et al. 19 found that dancers have an increased set of available whole-body actions (i.e., more muscle synergies) to maintain balance when walking on a beam 1 compared to novices. This underscores that limiting degrees of freedom reduces the number of 2 movements available to withstand perturbations and maintain upright balance. Thus, constraining the set 3 of motor actions of the feet could impair balance and worsen performance in the beam walking task 4 (Hypothesis 1a). An alternative argument, however, is equally plausible. Constraining the foot to act as a 5 rigid, flat segment could increase contact stability between the foot and the flat surface of the beam and 6 thereby improve performance 20 . For example, Robbins et al. 21 found that elderly men improved beam 7 walking when they wore shoes with hard, thin soles. They stepped off the beam less frequently compared 8 to performing the task with bare feet or shoes with softer soles. Hence, an alternative expectation is that 9 rigid soles positively affect balancing performance (Hypothesis 1b). 10 11 Changing the mechanics of the feet could also cause subjects to adapt their control strategy for 12 maintaining balance with practice. When the rigid soles are removed, this altered strategy could 13 subsequently influence balance performance. For instance, if the rigid soles led to worse performance 14 when the rigid soles were removed, we would expect subjects to quickly return to their original control 15 strategy (Hypothesis 2a). This scenario corresponds many adaptation studies where, for example, the 16 adaptation to a perturbing force field only persists as short-term after-effects as they are not functional 17 when the perturbation is removed. If the adapted strategy leads to improved balance behavior after 18 removing the soles, however, we would expect that this acquired strategy and its positive impact on 19 performance would persist (Hypothesis 2b). This scenario would indicate that the soles acted as a teaching 20 aid that could accelerate learning to balance. A third feasible scenario is that humans do not even alter 21 their control policy when the rigid soles are attached to their feet. For example, if it is only the change in 22 the foot mechanics that altered performance, we would not expect subjects to change their control policy 23 (Hypothesis 2c). If this was the case, we would expect practice with constrained feet to have no influence on subsequent performance with bare feet. By assessing how practice of the beam-walking task with 1 constrained feet influences subsequent balance behavior with bare feet, we gain insight not only into how 2 the complex architecture of the foot influences the neural control of balance, but also whether this may 3 be a suitable intervention for either assisting or rehabilitating impaired balance behavior. 4 5 This study investigated how constraining the foot affected mediolateral (ML) balance in beam walking for 6 young individuals with varying levels of prior balance training. We tested whether constraining the feet 7 influenced ML-balance during beam walking compared to performing the task with bare feet. Previous 8 work has shown that the velocity of the center of mass (COM-V) in the ML-direction is a good indicator of 9 skilled balance 12 . 
Hence, impaired balance is indicated by an increased velocity of the center of mass (COM-V) in the ML-direction and increased whole-body angular momentum (WB-AM) about the beam axis; improved balance would show the opposite trend. To evaluate whether practice with constrained feet affected performance after removing the rigid soles, we tested subjects walking with bare feet before and after walking with rigid soles. In addition to testing the hypotheses, further analyses of whole-body coordination were conducted to shed light on how constraining the foot influenced ML-balance during beam walking.
The results showed that constraining the feet improved ML-balance in the beam walking task (Hypothesis 1b). Moreover, task performance with bare feet was unaffected by practice with rigid soles (Hypothesis 2c). Together, these findings indicate that the improvement in balance from constraining the foot was the result of a mechanical effect rather than a change in neural strategy. Additional analyses showed that the angular momentum of most individual segments was reduced when wearing the rigid soles. Moreover, the contribution of ankle torque relative to hip torque was increased when the feet were constrained. We propose that constraining the feet improved performance because of an increase in contact stability between the foot and the beam20.

RESULTS

Seven healthy subjects took part in the experiment. Their prior balance training ranged from none to several years in competitive gymnastics. In each trial, subjects were instructed to walk the length of a narrow beam (3.4 cm wide and 5 m long) without stepping off the beam (Figure 1a). A trial was deemed successful if the subject did not step off before reaching the end of the beam; otherwise the trial was declared a failed trial. Subjects had to complete 20 successful trials in each of the following three blocks: The first block consisted of 20 successful trials with bare feet (BF-Pre block), followed by 20 successful trials with constrained feet (CF block), and another 20 successful trials with bare feet (BF-Post block) (Figure 1c).
Number of Failed Trials
To gauge if constraining subjects' feet affected their ability to accomplish the beam-walking task, we examined its influence on the number of failed trials in each of the three blocks. A one-way within-subject analysis of variance (ANOVA) revealed that foot condition (BF-Pre, CF, BF-Post) did not have a significant effect on the number of failed trials (F(2,12) = 0.38, p = 0.69) (Figure 2). On average, subjects failed in approximately 4-5 trials in each block. As expected, performance across subjects varied along a continuum determined in part by their prior balance training. Subjects who exhibited the best performance (shown in red and orange in Figure 2) were trained gymnasts. As the results below show, the cohort presented a sufficient spectrum of balance abilities that allowed more general conclusions.
Example Data
Even though constraining the subjects' feet did not increase the number of attempts required to accomplish the overall task goal, analysis of more fine-grained measures revealed that it did significantly influence their balance proficiency as they performed the task. Figure 3a-c displays the series of body postures of two representative subjects during a typical trial in each of the three conditions. For reference, data from Example Subject 1 is shown in light blue in other results figures; Example Subject 2, who was trained in gymnastics, is shown in dark red. Subjects displayed not only large trunk movements, but also large and variable movements of both arms. Importantly, these body movements were visibly reduced in the CF block.
Center of Mass Velocity (COM-V)
As demonstrated in prior work12 (Figure 5a). Thus, practice with constrained feet did not influence subjects' balance performance with bare feet (Hypothesis 2c).
Whole-Body Angular Momentum (WB-AM)
We also examined how performing the balance beam task with rigid soles influenced subjects' whole-body angular momentum (WB-AM) about the axis of the beam. The measure of WB-AM quantified the angular momentum of a subject's body with respect to the beam. In the beam walking task, the body was subject to ground reaction forces acting on the feet. These external forces induced considerable changes in the body's WB-AM. We quantified WB-AM with respect to the beam axis, rather than the body's center of mass or head position, for two reasons: First, the beam was fixed and thus provided an inertial reference frame. Second, our prior work revealed that the structure of AM was less complex when quantified about the beam axis12.

The same one-way ANOVA revealed a significant effect of block on the RMS of WB-AM (F(2,12) = 21.73, p < 0.001) (Figure 6a-b). Planned comparisons revealed that constraining the foot had a similar effect on RMS of WB-AM as it did on COM-V. The RMS of WB-AM significantly decreased from the BF-Pre block (M = 5.07 kg·m²/s, SD = 2.40 kg·m²/s) to the CF block (M = 3.17 kg·m²/s, SD = 1.52 kg·m²/s) (t(6) = 4.69, p = 0.0034), and then significantly increased from the CF block to the BF-Post block (M = 4.75 kg·m²/s, SD = 2.21 kg·m²/s) (t(6) = -5.33, p = 0.0018) (Figure 6b). There was no difference in RMS of WB-AM between the BF-Pre block and the BF-Post block (t(6) = 1.71, p = 0.14) (Figure 6b), nor between the last successful trial of the BF-Pre block (M = 4.01 kg·m²/s, SD = 2.08 kg·m²/s) and the first successful trial of the BF-Post block (M = 4.67 kg·m²/s, SD = 2.158 kg·m²/s) (t(6) = -0.86, p = 0.42) (Figure 6a). Again, these results indicate that constraining subjects' feet significantly improved their ML balance (Hypothesis 1b), but the improved performance with constrained feet did not transfer to or influence subjects' subsequent performance with bare feet (Hypothesis 2c).

Each segment's AM significantly decreased from the BF-Pre block to the CF block (ps ≤ 0.014) and then subsequently increased from the CF block to the BF-Post block (ps < 0.024) (Figures 7-8). There were no significant differences between AM in the BF-Pre and BF-Post blocks (ps > 0.14). Hence, the reduction in WB-AM when wearing rigid soles was due in large part to a reduction in each segment's contribution to WB-AM. It was not the result of reduced AM from a single large segment, for example.
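For readers interested in how such a measure can be obtained, a minimal sketch of the RMS-of-WB-AM computation is given below. This is not the analysis code used in the study; it simply assumes that per-frame angular momenta of the body segments about the beam axis are already available (all names are hypothetical), sums them into WB-AM, and takes the RMS over a trial.

```cpp
#include <cmath>
#include <vector>

// Hypothetical illustration: per-frame angular momenta of body segments about
// the beam axis (kg*m^2/s), one inner vector per segment, all of equal length.
double rmsWholeBodyAngularMomentum(const std::vector<std::vector<double>>& segmentAM) {
    if (segmentAM.empty() || segmentAM.front().empty()) return 0.0;
    const std::size_t nFrames = segmentAM.front().size();
    double sumSq = 0.0;
    for (std::size_t t = 0; t < nFrames; ++t) {
        double wbAM = 0.0;                        // whole-body AM = sum of segment AM
        for (const auto& seg : segmentAM) wbAM += seg[t];
        sumSq += wbAM * wbAM;
    }
    return std::sqrt(sumSq / static_cast<double>(nFrames));  // RMS over the trial
}
```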
Correlation of Upper- and Lower-Body Angular Momentum (CORR-AM)
As seen in Figure 9, the upper-body segments (head, thorax, upper arms, lower arms, and hands) generated AM opposite in direction to the AM generated by the lower-body segments (pelvis, thighs, shanks, and feet). To examine if the coordination of upper-body and lower-body AM contributions was affected by constraining the foot, we computed the correlation between the sum of AM of lower-body segments (LB-AM) and the sum of AM of upper-body segments (UB-AM) for each trial, which we refer to as CORR-AM. Note that three outlier trials (0.7% of all trials) were omitted from the analysis as the CORR-AM values were uncharacteristically low. Consistent with the representative data shown in Figure 9, upper-body AM and lower-body AM were highly anti-correlated, as the overall mean of CORR-AM across all conditions was -0.88 (SD = 0.05). As illustrated in Figure 10, rotation of the upper-body segments (with respect to the hip) was opposite to that of the lower-body segments (with respect to the beam). This suggests that subjects used a "hip-dominant" strategy to maintain balance.22

A one-way within-subjects ANOVA found a significant effect of block on CORR-AM (F(2,12) = 22.75, p = 0.000083) (Figure 11a-b). The CORR-AM significantly increased, i.e., became less correlated, from the BF-Pre block to the CF block (Figure 11b). There was no difference in CORR-AM between the BF-Pre block and the BF-Post block (t(6) = 0.85, p = 0.43) (Figure 11b), nor between the last successful trial of the BF-Pre block and the first successful trial of the BF-Post block (t(6) = 1.84, p = 0.12) (Figure 11a). Interestingly, the upper-body AM and lower-body AM became less correlated when the foot was constrained, even though balance proficiency was improved. In fact, the trained gymnasts (red and orange traces in Figure 11a-b) also had the least anti-correlation.

DISCUSSION

The goal of this study was to determine how reducing the complexity of the foot influenced whole-body coordination to maintain balance in a challenging beam walking task. We found that constraining the feet with rigid soles immediately improved balance as indicated by a reduction in the variability of COM-V and WB-AM. However, we did not find evidence that subjects altered their control strategy in response to the reduction in foot degrees of freedom. Practicing the task with rigid soles did not influence subsequent behavior with bare feet.
In support of Hypothesis 1b, our results showed that constraining the feet with rigid soles is an effective method of assisting balance. This finding is in accordance with the conclusions of Robbins et al.21. By further assessing the effect of practice with the rigid soles, however, we found that it may not be an effective intervention for training or rehabilitating balance (Hypothesis 2c). In general, an intervention meant to enhance the performance or learning of a motor skill should change neural control such that it results in improved task performance under normal conditions23,24. In our study, barefoot performance was unaffected by practice with constrained feet. In fact, we did not even observe a short-lived after-effect, which further suggests that subjects did not change their neural control strategy. While it is conceivable that subjects could alter their control strategy with long-term practice wearing rigid soles19, it remains an open question whether that learned strategy would improve or impair subsequent barefoot performance. It also cannot be ruled out that subjects might have learned a new control strategy, but that strategy was entirely context-dependent such that it did not transfer25. Evidence for this would require testing the effect of longer practice with the rigid soles to seek improvements within one condition. Without any further investigations, the results presented here suggest that constraining the feet may not be an effective intervention for training or retraining balance. Understanding how this manipulation improved balance performance can inform the development of future interventions.

When balancing on a narrow base of support, the ankle's ability to exert torque on the beam through the foot is limited10. Thus, it is not surprising that numerous studies have reported that humans use a hip-dominant strategy to maintain balance when standing on a beam3,4,8,10,22,26. Consistent with these studies of standing balance, we similarly observed that subjects used a hip strategy to maintain ML balance when walking on a beam, as indicated by high anti-correlation between the AM generated by the upper- and lower-body segments22. Importantly, this was observed even though the arms were allowed to move freely, representative of real-world conditions. Even though subjects used a hip-dominant strategy during beam walking, this did not mean that the influence of the foot and ankle was minimal, as is often presumed when balancing on a narrow beam. In fact, our finding that constraining the feet significantly altered balance behavior showed otherwise.

Not only did constraining the feet decrease the overall AM magnitude of most individual segments, it also resulted in less anti-correlation between the AM of the upper- and lower-body segments. Though the change in anti-correlation was small, it was significant and observed in all subjects (Figure 11b). As demonstrated in a prior simulation study of a double-inverted pendulum model22, the degree of anti-correlation decreases when the overall magnitude of ankle torque increases relative to the magnitude of hip torque. These simulations assumed no change in signal-to-noise ratio. It is important to note that the increase in relative ankle contribution observed with constrained feet could have resulted from increased ankle torque, decreased hip torque, or a combination of the two.
While we cannot definitively discern how the change in relative ankle contribution occurred, we do know that it can be attributed to altering the physical interaction between the foot and beam. The fact that subjects did not appear to learn or adopt a new control strategy during practice with constrained feet further supports the notion that the improvement in balance was the result of a mechanical effect.

We speculate that constraining the feet improved balance because it increased the stability of contact between the foot and the beam. Note that the width of the support surface was identical in the BF and CF conditions, meaning that adding the flat, rigid soles did not increase the maximum torque that could be applied at the ankle. However, constraining the feet may have increased the "effective" range of ankle torque. While the multiple degrees of freedom in each foot may increase control and/or sensing abilities, they also make the foot compliant. Without the soles, exerting large ankle torque onto the beam could cause the compliant foot to rotate. If so, subjects possibly reduced the amount of torque applied at the ankle to avoid this rotation. Future studies comparing the distribution of pressure under the feet in each condition would shed further light on this possible explanation.

Wearing rigid soles may have increased the amount of torque that could be applied at the ankle without resulting in foot rotation about the beam20, and thus improved performance. But note, this is only one mechanism through which the "effective" range of ankle torque could have been increased. Interestingly, the subjects who were most proficient at maintaining balance tended to have less anti-correlation of upper- and lower-body momentum in the BF conditions. It is possible that these subjects were either able to modulate the mechanical properties of their ankle and foot or they could better compensate for the interaction dynamics at the foot-beam contact. This could explain why Sawers and Ting19 observed more muscle synergies in experienced balancers. For instance, reducing the interaction dynamics could require finer control of the degrees of freedom in the feet (e.g., toes) that expert dancers and balancers might learn with training. This also underscores that simple structure in overt balance behavior is not necessarily indicative of a "simple" controller in the neuromotor system.

While our results gave clear evidence that adding flat rigid soles can assist balance, this benefit to balance may come at a cost. For instance, Takahashi et al.27 found that wearing shoes with stiff soles significantly increased the metabolic cost of walking. Moreover, we observed that there was no transfer from practicing beam walking with constrained feet to walking with bare feet. Ultimately, future work is needed to further understand (1) the influence of the foot and ankle mechanical properties on balance, and (2) how expert balancers modulate or compensate for its effects. We expect that addressing these open questions will yield promising new insights for enhancing the assistance and rehabilitation of balance.
METHODS
Subjects

Seven healthy subjects (gender: 2 females and 5 males, age: 28.7 ± 2.5 years, mass: 68.4 ± 10.9 kg, height: 1.74 ± 0.08 m) took part in the experiment. None had any prior experience with the specific experimental task. The experiment conformed to the Declaration of Helsinki and written informed consent was obtained from all participants according to the protocol approved by the ethical committee at the Medical Department of the Eberhard-Karls-Universität of Tübingen, Germany.
Experimental Protocol
In each trial, the subject walked along a narrow beam (3.4 cm wide, 3.4 cm high, 4.75 m long) at a self-selected speed. Before the start of each trial, subjects stood with their left foot on the beam and their right foot on the ground. After the experimenter gave the "go"-signal, they placed their right foot on the beam and began walking. Upon reaching the end of the beam, subjects were instructed to step off, placing their feet on either side of the beam. Subjects did not receive any other instruction on how to walk or how fast they should walk across the beam. They could use all body segments, including arms, as they wished to maintain balance. For data processing, the placement of the right foot on the beam indicated the start of each trial; the last step before stepping off the beam marked the end of the trial. A trial was deemed successful if the subject remained on the beam for its entire length. If the subject lost balance and had to step on the ground before reaching the end, the trial was labeled as unsuccessful. After each trial, subjects were allowed to take a short rest if needed.

Each subject was instructed to complete 20 successful trials in each of the following three blocks: Bare Feet-Pre (BF-Pre), Constrained Feet (CF), and Bare Feet-Post (BF-Post). In the BF-Pre and BF-Post blocks, participants walked without shoes; in the CF block, participants performed trials with flat, rigid soles attached to each foot. The solid soles were 3D printed and designed to be slightly larger than all subjects' feet (width: 12 cm at widest point, length: 31 cm, depth: 1 cm). All subjects wore the same size soles. They were secured to the subjects' feet with hook and loop straps and reinforced with duct tape as illustrated in Figure 1b. These soles did not affect the plantar/dorsi-flexion and inversion/eversion motion of the ankle.
3D Motion Capture Data Collection
Reflective markers were placed on the subjects' bodies following Vicon's Plug-In Gait marker set (Figure 1). During each trial, 3D whole-body motion capture data was collected using a 10-camera motion capture system (Vicon, Oxford, UK) at a sampling rate of 100 Hz. As illustrated in Figure 1a, the origin of the lab coordinate frame was set to the start end of the beam, with its y-axis aligned along the beam and its x-axis perpendicular to the beam. Commercial Vicon software was used to reconstruct and label the markers and to interpolate between short missing segments in the 3D marker trajectories.

Based on the subject's self-reported height and weight, subject-specific dynamic models (Plug-In Gait model consisting of 15 rigid body segments, Table 1) were fit to the 3D marker trajectories using C-Motion Visual3D software (Germantown, MD). The dependent measures for each trial were calculated using the model-based data exported from Visual3D that were subsequently analyzed using custom scripts in Matlab (The Mathworks, Natick, MA) as described in detail below. | 2019-04-03T13:09:20.799Z | 2019-01-03T00:00:00.000 | {
"year": 2019,
"sha1": "82be44d6a59d0e57be4f3a2833cae1a760bc5eae",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-64035-y.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "dfc86510770f49cf173562e849ac24c58eb02ec7",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Computer Science"
]
} |
11264782 | pes2o/s2orc | v3-fos-license | Frequency of abdominal wall hernias: is classical teaching out of date?
Objectives Abdominal wall hernias are common. Various authors all quote the following order (in decreasing frequency): inguinal, femoral, umbilical followed by rarer forms. But are these figures outdated? We investigated the epidemiology of hernia repair (retrospective review) over 30 years to determine whether the relative frequencies of hernias are evolving. Design All hernia repairs undertaken in consecutive adult patients were assessed. Data included: patient demographics; hernia type; and operation details. Data were analysed using Microsoft Excel 2007 and SPSS. Setting A single United Kingdom hospital trust during three periods: 1985–1988; 1995–1998; and 2005–2008. Main outcome measures Frequency data of different hernia types during three time periods, patient demographic data. Results Over the three time periods, 2389 patients underwent 2510 hernia repairs (i.e. including bilateral and multiple hernias in a single patient). Inguinal hernia repair was universally the commonest hernia repair, followed by umbilical, epigastric, para-umbilical, incisional and femoral, respectively. Whereas femoral hernia repair was the second commonest in the 1980s, it had become the fifth most common by 2005–2008. While the proportion of groin hernia repairs has decreased over time, the proportion of midline abdominal wall hernias has increased. Conclusion The current relative frequency of different hernia repair type is: inguinal; umbilical; epigastric; incisional; para-umbilical; femoral; and finally other types e.g. spigelian. This contrasts with hernia incidence figures quoted in common reference books.
Introduction
An abdominal wall hernia is an abnormal protrusion of a peritoneal-lined sac through the musculo-aponeurotic covering of the abdomen. Abdominal wall hernias are common, classically taught to occur in at least 2% of men 1 while statistics from the USA estimate 15 per 1000
population (1.5%).2 More than 20 million hernias are estimated to be repaired every year around the world.3 Per year approximately 700,000 hernia repairs are carried out in the USA,4 and over 100,000 in the UK,5,6 bringing about a significant cost and morbidity burden.
The introduction of independent treatment centres to provide additional capacity for some elective care (elective hernia repair being a prime example), with the aim of reducing waiting times and supporting the National Health Service in meeting targets, adds emphasis to this. Given the common nature of hernias, medical students are taught hernia epidemiology and examination techniques, and surgical trainees are often able to take advantage of their frequency to hone their surgical skills at a relatively early stage in training. Figure 1 shows the placement of various external hernias.
The most frequent hernia is the inguinal hernia (73% of cases). 1,2,7 Various authors all quote the following order of hernias, in decreasing frequency: inguinal (70-75%), femoral (6-17%), umbilical (3-8.5%) followed by rarer forms (1-2%). 1,2,8 But are these figures outdated? Anecdotally, it has seemed over the past few years that midline abdominal wall hernia repairs dominate day-case operating lists. We aimed to investigate the epidemiology of hernia repair over the past 30 years in order to determine whether the relative frequencies of abdominal wall hernias are evolving.
Results
The total number of patients undergoing hernia repair over the three time periods was 2389: Group A, 426; Group B, 647; and Group C, 1316. There has been a significant increase in the number of patients undergoing hernia repair over time (Chi-squared test, p < 0.001).
Over these periods, the total number of hernia repair procedures (i.e. including bilateral hernias and multiple hernias in a single patient) was 2510: Group A, 456; Group B, 675; and Group C, 1379, which also represents a significant increase over time (Chi-squared test, p < 0.001).
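As a minimal illustration of the reported comparison, the following sketch computes a chi-squared goodness-of-fit statistic for the three procedure counts. The paper does not state the expected model it used, so equal expected counts per time period are assumed here; this is not the authors' own statistical analysis.

```cpp
#include <iostream>

// Hypothetical sketch: chi-squared goodness-of-fit statistic for the three
// observed repair counts, assuming equal expected counts per period (the
// paper does not state the expected model it used).
int main() {
    const double observed[3] = {456.0, 675.0, 1379.0};    // Groups A, B, C (procedures)
    double total = 0.0;
    for (double o : observed) total += o;
    const double expected = total / 3.0;                   // equal-frequency null
    double chi2 = 0.0;
    for (double o : observed) chi2 += (o - expected) * (o - expected) / expected;
    std::cout << "chi-squared (2 df) = " << chi2 << '\n';  // compare to the critical value
}
```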
Demographic data are shown in Table 1.
There is a small overall reduction in the age of patients over time, with the mean age falling from 59.6 in Group A to 55.6 in Group C (p = 0.002 using ANOVA regression). (Although there was an even lower mean age in Group B, this did not reach significance.) Inguinal hernia repair was the commonest hernia repair undertaken in all groups; however, there has been a change in the proportion of other hernia repairs from Group A to C. Whereas femoral hernia repair was the second most common in Group A, it had become the fifth most common by Group C. The number of femoral hernia repairs in absolute terms, however, has not significantly changed over this time period (p = 0.423, Chi-squared test). All other types of hernia have significantly increased in numbers from Group A to Group C (inguinal p < 0.01, para/umbilical p < 0.01, epigastric p < 0.01, incisional p < 0.01, other p < 0.01, Chi-squared test). The increase in other types of hernias explains the relatively reduced proportion of femoral hernia repairs. Table 2 shows the relative frequencies of the types of hernia repair undertaken.
In total, 175 repairs were carried out on recurrent hernias (6.9% of repairs). Over the three groups A, B and C, this was 38 (8.3%), 53 (7.9%), and 84 (6.1%), respectively. There was a trend towards a slight decrease in the proportion of recurrent hernias over time (Spearman's correlation coefficient of -0.034); however, this failed to reach significance, p = 0.097. There was variation in the recurrent hernia proportion between hernia types (as low as 0%). There was a reduction in the number of recurrent femoral hernias from Group A to Group C (2 to 0); however, due to the small numbers, this failed to reach significance (p = 0.06, Spearman's correlation coefficient). There were no significant differences in the recurrent proportion of other hernia types over time.
Discussion
Abdominal wall hernia repair is a commonly performed general surgical operation, and therefore comprises a significant proportion of trainee teaching time.
Our results clearly differ from the classically taught order of hernia frequency (i.e. inguinal (70-75%), femoral (6-17%), then umbilical (3-8.5%) followed by rarer forms (1-2%))8,9 (Figure 2). In fact, our results suggest an order of: inguinal; umbilical; epigastric; incisional; para-umbilical; femoral, and finally other hernia types, e.g. spigelian. More interestingly, our results seem to suggest that although the incidence of hernia by type in textbooks1,2 was accurate in the 1970s and 1980s, this has since changed. We sampled textbooks commonly used by medical students and junior surgical trainees as a measure to understand commonly accepted prevalence figures. Although newer editions of these textbooks are available,10 incidence figures quoted are the same, and we have been unable to find any recently published figures pertaining to hernia incidence.
The choice of study design was based on a single unit, albeit one with a large and diverse catchment population (>1.3 million local population11). Although the authors recognize that multicentre data would confer more generalizable conclusions, it was felt more important to obtain complete data. Similarly, the original study design included a plan to collect continuous data starting with the earliest records available (October 1985). It became apparent that the absolute numbers of hernia repairs would be very large, and so a power calculation was made at three years, giving the study 80% power. As data were available and complete for later 3-year intervals, we have included these as comparative cohorts.
Inguinal hernia
Inguinal hernia repair consumes a lot of healthcare resources because it has a high lifetime risk: 27% for men and 3% for women.12 Inguinal hernias are undoubtedly the commonest hernia type. Our results showed approximately 71% of all hernia repairs undertaken were inguinal, a figure slightly lower than the 75% quoted by various authors.1,9 In England and Wales, approximately 10 elective inguinal hernia repairs per 10,000 population are carried out per year.13 The number of inguinal hernia repairs performed in NHS hospitals in England and Wales in 1998-1999 was 76,087, of which about 8% were for recurrence,13 compared with our figure of 8.3%.
Inguinal hernias are quoted as being 20 times more common in men than women. 8 Our results are similar, in fact showing that inguinal hernia repairs were carried out in total almost 15 times more commonly in men than women. Inguinal hernias are also quoted to be right-sided in 55% of cases. 8 Our results have mirrored this slight right predominance, with 49.0% left and 51.0% right-sided repairs.
Moreover, we have shown a trend towards a reduction over time in the proportion of groin hernia repairs, with a simultaneous increase in the proportion of midline abdominal wall hernia repairs. We postulate that a possible reason for these trends is the simultaneous trend towards increasing population body mass index (BMI). In fact, the proportion of the population classified as obese has more than doubled for men, and shown a similar but less steep trend for women, even over the last 15 years.14

Figure: Graph comparing study results to previously published hernia frequency
Femoral hernia
Classical textbooks quote femoral hernia as the third most common type of primary hernia.8,9 In our total study group, the rate of femoral hernia was only 3.7% (even lower during the time period 2005-2008), equating to the fifth commonest hernia type. In particular, femoral hernias are quoted as accounting for 20% of hernias in women, and 5% in men.8,10 In fact, our results suggest femoral hernias account for less than 2% of hernias in men, and just over 14% in women.
Other statistics quoted in the classical teaching include that femoral hernias are twice as common on the right side as the left.7-10 Our results did show a preponderance of right-sided femoral hernias (2:1). Our results do suggest that the classical belief that femoral hernias are commoner in women than men remains true, but not as strongly as the four times commoner that has been quoted.9

Umbilical/Para-umbilical hernia

Textbooks also quote the rate of umbilical/para-umbilical hernia to be up to five times commoner in women,8-10,15 citing pregnancy as a significant aetiological factor. Our results are in complete contrast with this, showing that men in fact underwent more than twice as many umbilical/para-umbilical hernia repairs. Any condition which raises intra-abdominal pressure, such as a powerful muscular effort, may produce a hernia.8 Stretching of the abdominal musculature because of an increase in its contents, as in obesity, can be another factor. Adipose tissue acts to separate muscle bundles and layers, weakens aponeuroses and favours the appearance of para-umbilical, direct inguinal and hiatus hernias.8 Therefore obesity, physical strain and pregnancy are important aetiological factors in the development of both umbilical/para-umbilical hernias and epigastric hernias.
Our results, however, have shown no gender difference in epigastric hernias.
So why do our results differ to such a degree from long-believed teaching? We explore two major reasons. The first of these relates to parity in women. The total fertility rate in the UK has reduced significantly over this period. At the height of the 'baby boom' (1964), the mean number of children born to each woman was 2.95, after which it steadily dropped to a low of 1.63 in 2001.16 Another factor that has been well documented over the past half-decade is that of rising rates of obesity. In England, the proportion of men classed as obese increased from 13.2% in 1993 to 23.1% in 2005 and from 16.4% to 24.8% for women during the same period.14 Moreover, adipose deposition differs between genders17,18 and perhaps this contributes to gender differences in hernia formation. Men and postmenopausal women accumulate more fat in the intra-abdominal depot than do premenopausal women. It is feasible, then, that this may lead to a relatively greater intra-abdominal pressure in men, predisposing to abdominal wall hernias. Moreover, as the population ages, there will be a resultant increase in the number of postmenopausal women who accumulate intra-abdominal adiposity, thereby predisposing to hernia development. Over the last 25 years the percentage of the population aged 65 years and over increased from 15% in 1984 to 16% in 2009, an increase of 1.7 million people. This trend is projected to continue. By 2034, 23% of the population is projected to be aged 65 years and over.6 We have shown that there has been a great increase in the absolute number of hernia repairs of most types being undertaken in a single trust over the years. It is likely that this increase is mirrored in most hospital trusts throughout the UK as well as internationally, driven by increased healthcare spending, day-case operating becoming commonplace, and the greater feasibility of elective surgery in the elderly. Perhaps another factor affecting these results may be that doctors recommend surgery for earlier, even asymptomatic hernias, which in the past were left until they became symptomatic. This may be as a result of new surgical (e.g. laparoscopic) and anaesthetic techniques perceived by referrers as 'safer', thereby allowing for repair on older and higher-risk surgical candidates.
We have shown a general trend towards fewer operations being carried out on recurrent hernias. Our study period occurs over the era during which the use of mesh became commonplace (prior to 1982 the majority of inguinal hernia repairs in this hospital trust were carried out by Bassini or darn repair), which may partly explain the reduction in recurrences. Our figure of 6.9% lies within the quoted 1-10%9 and is significantly better than earlier published recurrence figures.15 We do, of course, recognize important limitations to our study, not least of all its retrospective nature. Counting the number of hernia repairs as a proxy for hernia prevalence in a population will undoubtedly miss those patients who do not undergo operation for reasons of patient choice, anaesthetic risk, et cetera.
Conclusion
Abdominal wall hernia repair is a commonly performed general surgical operation. The relative frequency of groin hernia repair has decreased over time, while the frequency of midline abdominal wall hernia repair has increased. The relative frequency of different hernia type is: inguinal; umbilical; epigastric; incisional; para-umbilical; femoral; and finally other hernia types, e.g. spigelian. This contrasts with figures quoted in common reference books and may represent an evolution of disease pattern or surgical practice over the last 30 years. | 2014-10-01T00:00:00.000Z | 2011-01-01T00:00:00.000 | {
"year": 2011,
"sha1": "9ab620cc87662056d096b16e224d9635e50cf441",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1258/shorts.2010.010071",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9ab620cc87662056d096b16e224d9635e50cf441",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
241868126 | pes2o/s2orc | v3-fos-license | Square Spiral Search (SSS) Algorithm for Cooperative Robots: Mars Exploration
Abstract: The purpose of this project is to develop and implement an optimal search algorithm into multiple rovers, a.k.a. Swarmies. Swarmies are compact rovers designed by NASA; these rovers are intended to mimic ant behavior when searching for simulated Mars objects. TAMIU DustySWARM 3.0 compared numerous search algorithm designs, such as the previous team DustySWARM 2.0's Epicycloidal spiral wave, the Fibonacci, and the snake path. After multiple trials utilizing simulation and real-life experimentation, a square-spiral path was developed to be implemented for the 2018 NASA Swarmathon Physical Competition. This paper covers a detailed overview of DustySWARM 3.0's systems engineering process and code development utilizing computer science techniques.
Original SwarmBaseCode-ROS:
The original code provided by the University of New Mexico (UNM) was the foundation for our search algorithm. By understanding the variables, functions, and code structure, the team reached the threshold of knowledge needed to invent a search algorithm. However, the provided code directs the rover to perform a random search. The functions within searchcontroller.cpp were heavily modified to create a uniform and homogenous search pattern [1].
DustySWARM 2.0 Spiral Epicycloidal Wave (SEW):
Throughout multiple endeavors, the team decided on an original path, different from that of the previous team, DustySWARM 1.0. The path the Swarmies would follow is a Spiral Epicycloidal Wave (SEW), which is a continuous spiral wave, closely related to a spring formation. Their path would allow maximum coverage, but because the competition field is a square, this caused some limitations. DustySWARM 2.0 was aware of the corner issue, so it was written off as a constraint/tradeoff of their path. They proceeded with the path since it would be easy to implement for three to six rovers with few complications [2].
A Practical Coverage Algorithm for Intelligent Robots with Deadline Situations:
The algorithm is intended for maximum coverage utilizing the available rovers. The article provided the team with an understanding of how to compose code that maximizes coverage within a certain time frame. The competition is timed, so the rovers must retrieve, collect, and deliver objects (AprilTags) quickly and efficiently. The algorithm is best suited for intelligent robots unaware of their surrounding environment. The most powerful coverage algorithms rely heavily on having a complete grid map of the environment. For this reason, the authors utilize Simultaneous Localization and Mapping (SLAM) algorithms to help their robot operate efficiently in an unknown environment. Being in an unfamiliar area requires the robots to be able to handle dynamic obstacles and moving objects. Thus, the new proposed algorithm, DMax Coverage, is made with these things in mind. However, the competition involves static obstacles and resources to retrieve, so if SLAM were used, adjustments would be necessary [4]. The DMax algorithm works by first using a SLAM algorithm to find out the boundaries of a workspace and to find the position and orientation of the robot within this unknown environment. After the area is mapped, the DMax algorithm computes a minimum bounding rectangle (MBR). An MBR is a rectangle that includes all free areas and areas with obstacles. This rectangle is then simplified into smaller rectangles that do not contain obstacles. The Rectangle Tiling Scheme, a common mathematical algorithm, deconstructs the rectangles into smaller ones; Figure (1) gives a visual example of how SLAM works.
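As a rough illustration of the MBR step only (not the DMax implementation from the cited article), the following sketch computes an axis-aligned minimum bounding rectangle over a set of mapped grid cells; the cell and rectangle types are hypothetical.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical cell in a mapped workspace (grid coordinates).
struct Cell { int x; int y; };

// Axis-aligned rectangle described by its corner coordinates.
struct Rect { int minX, minY, maxX, maxY; };

// Minimum bounding rectangle over all mapped cells (free or obstacle),
// as described for the MBR step. Assumes `mapped` contains at least one cell.
Rect minimumBoundingRectangle(const std::vector<Cell>& mapped) {
    Rect r{mapped.front().x, mapped.front().y, mapped.front().x, mapped.front().y};
    for (const Cell& c : mapped) {
        r.minX = std::min(r.minX, c.x);
        r.minY = std::min(r.minY, c.y);
        r.maxX = std::max(r.maxX, c.x);
        r.maxY = std::max(r.maxY, c.y);
    }
    return r;
}
```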
METHODOLOGY & ALGORITHM DEVELOPMENT
The DustySWARM 3.0 team members studied various methods to come up with a search algorithm improved over prior iterations of the competition. This was achieved by following the systems engineering concepts they had learned throughout their academic careers and by performing extensive collaborative research with the assistance of university resources.
Epicycloidal:
The Epicycloidal search pattern, programmed by DustySWARM 2.0, instructs the rover to perform a circular search wave for multiple iterations. This was done by setting multiple equally spaced points around the arena's home base. Those points then act as the centers of circles that the rovers revolve around. After completion of the first circle, each robot moves to its next point in a clockwise direction. This process was repeated by all three rovers until a complete revolution had been made around the home base, as shown in Figure 2 [1]. The code could be changed by making circles of different sizes as well as adjusting the number of points and their distances from each other. The drawback to this search pattern is the amount of coverage it would miss due to its circular nature. Since the arena is a square, the circular pattern would have trouble covering corners, with the issue that rovers would want to search beyond the border.
Fibonacci:
The Fibonacci sequence is a pattern of integers in which the next integer is equal to the sum of the previous two integers. This pattern can be denoted by the expression F(n) = F(n-1) + F(n-2). When squares with side lengths equal to these values are arranged next to each other, they create a spiral. This golden ratio spiral is the proposed shape that the rovers follow in this particular search algorithm. However, the drawback of the Fibonacci is that after the seventh iteration the numbers substantially increase. This, in turn, led the team to realize that this search algorithm exponentially reduced the coverage of the field and that the corners of the arena would remain undetected, as illustrated in Figure (3). Therefore, this sequence was deemed inoperable as the ultimate search algorithm for DustySWARM 3.0.
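The growth of the sequence can be seen with a short, purely illustrative sketch: printing the side lengths shows the jump in magnitude after the seventh term, which is why the spiral arms spread out and leave much of the field, including the arena corners, uncovered.

```cpp
#include <iostream>

// Sketch: print the first Fibonacci terms used as square side lengths for the
// spiral; the jump in magnitude after the seventh term illustrates why the
// spiral arms spread out and leave the arena corners uncovered.
int main() {
    int prev = 1, curr = 1;
    std::cout << prev << ' ' << curr;
    for (int n = 3; n <= 12; ++n) {        // F(n) = F(n-1) + F(n-2)
        int next = prev + curr;
        prev = curr;
        curr = next;
        std::cout << ' ' << curr;
    }
    std::cout << '\n';                     // 1 1 2 3 5 8 13 21 34 55 89 144
}
```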
Square-Spiral Search (SSS):
Upon consideration of different algorithms and their associated drawbacks during the initial processes of this project, the research led the team to develop the Square-Spiral Search (SSS) algorithm. In circular or round-shaped search patterns, negligible errors in calibration would result in inaccuracy that exponentially expands, making it arduous to use a singular reference point for the three rovers. Propositions for each rover to search the dimensions of a quadrant were considered in order to maximize the coverage area. There are numerous advantages of using a square-spiral pattern. The primary advantage is that the entirety of the arena is surveyed: searching corners is no longer a complication, and the pattern's simplicity allows the team to make adjustments to maximize coverage. Additionally, the rovers each survey their dedicated area rather than searching collectively, leading the rovers to cover a higher percentage of the arena in a fraction of the time. However, this search algorithm contained a flaw: with only three rovers on the field for the preliminary rounds, one quadrant remained unexamined, as shown in Figure (4). Utilizing this search pattern would require moving to the missing quadrant to gather the AprilTags there; therefore, a combination with the Spiral Epicycloidal Wave would be utilized to complete the search pattern.
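A minimal sketch of how inward square-spiral waypoints for one quadrant could be generated is shown below. This is illustrative only and not the competition code; the starting point, leg length, and shrink step are hypothetical parameters.

```cpp
#include <vector>

struct Point { double x; double y; };

// Illustrative sketch (not the competition code): waypoints of an inward square
// spiral over one quadrant. The path starts at `start`, drives legs of length
// `side`, and shrinks the leg length by `step` after every two legs until the
// spiral collapses toward the quadrant center.
std::vector<Point> squareSpiralWaypoints(Point start, double side, double step) {
    std::vector<Point> waypoints;
    // Clockwise direction sequence: +x, -y, -x, +y.
    const double dirs[4][2] = {{1.0, 0.0}, {0.0, -1.0}, {-1.0, 0.0}, {0.0, 1.0}};
    Point p = start;
    double len = side;
    int d = 0;
    while (len > step) {
        p.x += dirs[d][0] * len;
        p.y += dirs[d][1] * len;
        waypoints.push_back(p);          // corner of the spiral
        d = (d + 1) % 4;
        if (d % 2 == 0) len -= step;     // shrink after every two legs
    }
    return waypoints;
}
```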
Simulation Runs:
The experiments conducted involved simulation and physical trials. The Dell computer used to run the Gazebo simulations was equipped with an Intel Core i5 processor, 8 GB of DDR3 RAM, and an integrated graphics chip. Due to these specification constraints, the team was limited to running simulations without obstacles and AprilTags for three Swarmies. Also, the team had a limited number of computers with Ubuntu available; this slowed the programming process and limited simulation time. With more analysis and understanding of the provided base code, the team was able to correlate the different variables and functions to enhance the algorithm's performance by modifying the search pattern following the results of each simulation run. The team thereby reduced the difference between simulation and physical trials to a negligible level.
Physical Runs:
Once the team finalized a working code that endured simulation tests, physical trials began. The makeshift arena placement and setup would change depending on the time of the day. During the school semester, the team was limited to two quadrants of a complete arena. Before running physical trials, the rovers were tested for their overall performance and component endurance, which resulted in a few discoveries, such as ultrasound sensors damaged by an unforeseen event and two of the rovers having faulty Inertial Measurement Units (IMUs). Most of these issues were resolved after multiple troubleshooting sessions and physical one-on-one sessions with the UNM team, where new IMUs replaced the faulty units. The rovers' endurance is limited; during our rigorous trials, a rover's motor shaft would occasionally malfunction. This prompted the team to address the issue immediately.
As previously mentioned, during school hours, our test space was severely limited. However, during these trials, the team members discovered some faults; one of the discovered faults was the GPS and Odometer not being synchronized. The differential tolerance, also known as the do-work tolerance, is the difference between the GPS and Odometer. Drift tolerance causes the center to be updated less frequently.
Physical trials became more elaborate during the campus Spring Break week, as full experiments with the AprilTags on the arena were conducted. These complete runs allowed the team to find all the undesired and mismatched behavior between the near-perfect simulated code and physical trials. Some of these undesired behaviors included rover movements that bore no resemblance to the square-spiral path, Graphical User Interface (GUI) errors, and GPS program crashes. The team noticed that the rovers were overheating due to the intense Texas heat, as they were in direct sunlight for hours at a time. It was assumed that this was the cause behind the GUI and GPS crashes.
RESULTS
After many trials and research on the Q&A forums, the team learned which files were the most important to learn about and alter. These files included the obstacle controller, the drop-off controller, and, most importantly, the search controller. The obstacle controller file controlled the autonomous behavior that the rovers use for avoiding obstacles. The dropoffcontroller file affected the behavior exhibited after picking up and when dropping off cubes, while the searchcontroller files were integrated into the base code.
After minimal changes to other files within the src folder, the team began constructing the square-spiral algorithm. Inside the searchcontroller.cpp file, more discoveries were made; the most important was that changing the values in the searchLocation sections after the initial first_waypoint allowed the team to designate movement and direction. First_waypoint can be interpreted as a pinned location on a map; each waypoint is a place within the arena. The team's goal was to create a uniform path; this was done by removing the random base search and applying constant values.
In the searchController.h file, first_waypoint is a Boolean variable declared as true. It is then set to false in searchcontroller.cpp to activate the if statement. Once the parameters have been met, the rover proceeds to the first loop, which establishes its location within the field and moves it towards the wall. The initial 5.15 value is the maximum distance in meters the rover would travel without running into a wall. After the rover reaches the target distance, it turns 90 degrees to the right, moves linearly along the y-axis, and continues to decrement the size of the square until it reaches the center of the quadrant. Subsequently, the rover continues its path by shifting its location to the next adjacent quadrant to maximize the arena coverage. Since the code instructs the rovers to move along the x-axis, the team used the searchLocation.x value only. The rover will move approximately 8.0 meters along the x-axis before commencing a reverse spiral. The reverse square-spiral is nearly identical to the square-spiral, except that M_PI/2 must be subtracted, as it is an increasing square-spiral.
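As an illustration of the turn-and-drive step described above (not the actual searchcontroller.cpp code), the following sketch computes the next waypoint from the current pose by applying a quarter turn and driving one straight leg; whether the quarter turn is +M_PI/2 or -M_PI/2 selects the shrinking or expanding version of the spiral. All names and parameters are hypothetical.

```cpp
#include <cmath>

// Hypothetical pose: position in the arena plus heading angle in radians.
struct Pose { float x; float y; float theta; };

// Sketch: next square-spiral waypoint from the current pose. The caller passes
// quarterTurn = +M_PI/2 or -M_PI/2 to choose the turning direction, and shrinks
// or grows legLength between calls to produce the decrementing or reverse spiral.
Pose nextSquareSpiralWaypoint(Pose current, float legLength, float quarterTurn) {
    Pose next = current;
    next.theta = current.theta + quarterTurn;           // 90-degree turn
    next.x = current.x + legLength * std::cos(next.theta);
    next.y = current.y + legLength * std::sin(next.theta);
    return next;
}
```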
Finally, after this phase is completed, the rover commences its larger coverage of the field to compensate for either the shifting of quadrants or the size of the arena. The parameters within this loop are similar; the only change is in searchLocation.x, where the values are multiplied by 8 because of the larger field. The largest loop is an exact template of the square spiral; only the variables change. This behavior pattern is shown in Figures 6 and 7.
Figure 7. Odometer Depiction of Single Rover Coverage
Lastly, the else statement will make first_waypoint true, which restarts the code. Initially, the code used several "else if" statements to activate the loops, but when conducting physical trials, the code failed to follow the programmed path. The team placed 'cout' commands (cout << searchLocation.x and cout << searchLocation.y) in the code of the first two loops to show the execution of the code. Results from the log folder showed that, whenever the code was run in the simulation, the x and y coordinates for the first block of code would appear but not for the second, meaning that after the first block of commands, the code would fail to continue to the next. To explore the origin of the complication, the "else if" was changed to an "if" statement to see if the values would change while maintaining a working algorithm, and it did. Not only that, but the team was able to read the x and y parameters for the second block of code. This breakthrough prompted the team to address the "else if" statements, and that was a sign that the code was finally adjusted and completed. Despite all the challenges the team faced, the code was successfully completed within the allocated time. It can be concluded that the Gazebo simulations were the lifeline of the project, due to the numerous challenges faced with the code and physical trials. Through the team's insistence, determination, and perseverance, physical trials were conducted and marked as a triumph, concluding the project.
CONCLUSION
The construction of an autonomous, homogeneous algorithm has strengthened the team members' programming skills. Cohesively, the team created an algorithm, communicated effectively, and critically solved issues that occurred during simulations or physical trials.
As with any code, there is room for improvement. A note that can be passed to future DustySWARM teams is to make sure they have a proper arena with sufficient shading, or to run physical trials in intervals to prevent overheating. Other notable improvements could be made to the rovers' behavior when picking up and dropping off AprilTags, along with the rovers' ability to communicate amongst each other. Other areas of interest are programming the rovers with the ability to stack AprilTags when dropping off at the home base, and the ability to push AprilTags that are close to home into the home base for a sure point. DustySWARM team 3.0, with their Spiral Epicycloidal Wave (SEW) search algorithm, placed fourth (semi-finalist) in the physical competition and first place in the outreach paper competition among twenty-four (24) participating teams from all over the USA.
ACKNOWLEDGMENT
Thanks to NASA and the University of New Mexico (UNM) teams for all the help and the opportunity to participate in such a great competition. Thanks to all our sponsors from Texas A&M International University (TAMIU) and Laredo.
"year": 2020,
"sha1": "78755ac90e3908a8d239ce1f9cc838458b13b2a0",
"oa_license": null,
"oa_url": "https://doi.org/10.20431/2349-4859.0701003",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "77e2ac0febc0b1127e28064b3ade190fe6fab93d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
9464830 | pes2o/s2orc | v3-fos-license | Interactive effects of visuomotor perturbation and an afternoon nap on performance and the flow experience
The present study was designed (1) to clarify the relationship between the flow experience and improvements in visuomotor skills, (2) to examine the effects of rotating the axis of a computer mouse on visuomotor skills, and (3) to investigate the effects of sleep for improving visuomotor skills. Participants (N = 18) completed Perturbation and nap (PER+Nap), No-perturbation and nap (NoPER+Nap), and Perturbation and rest (PER+Rest) conditions. In the PER+Nap condition, participants conducted a visuomotor tracking task using a computer mouse, which was accompanied by perturbation caused by rotating the axis of their mouse. After the task, they took a 90 min nap. In the NoPER+Nap condition, they conducted the same visuomotor task without any perturbation and took a nap. In the PER+Rest condition, participants conducted the task with the perturbation and took a 90 min break spent reading magazines instead of taking a nap. Results indicated (1) the flow experience did not occur when participants' skills and the degree of the visuomotor challenge were matched, (2) improvements of visuomotor skills occurred regardless of the perturbation, (3) improvements of visuomotor skills occurred unrelated to the flow experience or to mood states, and (4) improvements of visuomotor performance occurred regardless of sleep. These findings suggest that improvements of visuomotor skills occur regardless of mood status and occur independently of perturbations by axis rotation. The study also suggests that the acquisition of skills is related merely to the time elapsed since learning, rather than to sleep.
Introduction
People find activities they are engaging in to be interesting when they are dedicated to an activity that is moderately difficult. According to the flow theory proposed by Csikszentmihalyi [1,2], task difficulty in relation to an individual's skills affects the feelings of enjoyment in doing a task. Moreover, a sense of enjoyment is experienced when there is a perfect balance between task difficulty and the performer's skills. Then, individuals can feel a deep involvement with the task and they might feel they merge with the task [3]. People also feel that the situation is smooth and flowing and forget the passage of time. Individuals often report positive feelings when engaging in such tasks, which has been named "the flow" experience [1,2].
Individuals believe that they are performing a given task well when they are experiencing the flow, because their mental and physical conditions are believed to be near their best [1]. People devote themselves to doing the task and modify their skills to improve their performance [4], and therefore, theoretically, their ability to execute the task could be improved by the flow experience. This could happen when perceived task difficulties (i.e., challenges) and their abilities perfectly match each other. Contrary to this, when a task is too easy (i.e., the difficulty level is too low), individuals might easily get bored, which could result in irritation and tiredness, causing distractions. On the other hand, if the task difficulty were perceived as being too high compared to the performer's skills (i.e., the challenge level is too high), he or she might experience negative feelings, including anxiety. As a result, if the challenge level were too high or too low compared to a performer's skills, the flow state might not be facilitated. Therefore, adjusting the task difficulty to correspond to the performer's skill level is critically important for inducing the flow [1,2].
Despite its positive attributes, however, the flow theory has not been well substantiated from the perspective of behavioral science. Practitioners have attempted to apply this theory to educational [5] and sports settings [6][7][8]. However, these attempts have at times been pointless, because the types of abilities and how these abilities are enhanced through the flow experience were not clarified. Examining the flow theory through behavioral experiments under controlled settings is necessary before the theory can be accurately and efficiently applied to practical situations. The aim of the present study was to clarify the relationship between the flow and skills learning.
We selected a visuomotor adaptation task to examine the flow theory [9][10][11]. The task consisted of skills that are fundamental to our daily life. In the visuomotor adaptation task used in this study, participants followed a moving dot on a PC screen by using a mouse pointer (no-rotation condition). This task is similar to the web browsing that people do on a daily basis. In the axis rotation condition, however, the axis of the mouse pointer was rotated between 0 and 120 degrees in a clockwise direction to cause a perturbation and distract participants from following the moving dot by deviating the direction of mouse movements from the expected axis. After repeated practice, participants do acquire the skill of correctly manipulating a mouse with a rotated axis by gradually developing their skill to fit the new environment (i.e., the rotated axis). This is a typical implicit learning phenomenon, known as "visuomotor adaptation" [12].
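To make the manipulation concrete, a minimal sketch of how a mouse displacement can be rotated clockwise before it is applied to the pointer is given below. This is not the software used in the experiment; it assumes standard Cartesian axes (with screen coordinates, where y points down, the visual sense of the rotation is reversed), and all names are hypothetical.

```cpp
#include <cmath>

struct Vec2 { double x; double y; };

// Minimal sketch (not the experiment software): rotate a raw mouse displacement
// clockwise by `degrees` before applying it to the pointer, producing the kind
// of axis-rotation perturbation described above.
Vec2 rotateClockwise(Vec2 d, double degrees) {
    const double kPi = 3.14159265358979323846;
    const double rad = degrees * kPi / 180.0;
    return { d.x * std::cos(rad) + d.y * std::sin(rad),
            -d.x * std::sin(rad) + d.y * std::cos(rad) };
}
```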
To compare the effects of visuomotor perturbation on visuomotor skills, performance in the axis rotation condition was compared with that of the no-rotation condition before and after practicing the task. In the visuomotor task, participants were requested to follow the moving dot shown on an LCD screen with their dominant hand by using the mouse pointer, a skill that is known to improve with practice [10]. It was assumed that using the mouse with its axis rotated would degrade performance.
The degree of difficulty involved in using a PC mouse with its axis rotated can be easily and accurately manipulated by increasing the degree of rotation, such that the higher the degree of rotation, the more difficult the task. This feature of axis rotation is ideal for examining the flow theory. According to the flow theory, the flow can be experienced even when conducting a simple task, as long as the task is matched with the skill level of the performer [1]. An individual would experience the flow whenever he or she attempts to close the gap between intrinsic skills and increased task difficulty.
An additional aim of the present study was to investigate the effect of sleep on learning visuomotor skills. Certain previous studies have reported that skill acquisition in implicit motor learning tasks, such as imagery learning [13] and juggling [14], occurs during sleep [15][16][17][18][19][20]. However, other studies have failed to demonstrate this effect in adults [21][22][23][24][25] or children [26,27]. We assumed that emotional states such as the flow, which is experienced during sports practice, might explain the controversial findings regarding the nap effect on visuomotor skills. This assumption is reasonable when considering relevant evidence from previous studies suggesting that emotional states during memory encoding might play an important role in memory consolidation during subsequent sleep [28,29]. Similarly, it was expected that the flow, a positive emotional state experienced during task performance, could enhance visuomotor skills during subsequent sleep. In addition, this study was designed to investigate the relationship between the flow experience and the acquisition of visuomotor skills during sleep (i.e., a 90 min afternoon nap).
The hypotheses of the present study were: (1) matching the task difficulty to one's own skills in a visuomotor task would induce the flow experience, compared to a non-matching condition in which the task difficulty was constant; (2) the flow experience during the task would promote the acquisition of visuomotor adaptation skills; and (3) visuomotor adaptation skills that were enhanced by the flow experience would improve more during sleep than in the no-sleep condition.
Participants
Participants were 9 men and 9 women aged 18-35 years (mean = 26.2 years, SD = 5.55). A standardized interview conducted before the experiment confirmed that all the participants were right-handed, had no current physical or mental health problems, did not suffer from any sleep disturbances, were not currently using any medication, were nonsmokers, and had not engaged in shift work or traveled to a different time zone within the previous three months. Participants also reported that they had natural or corrected visual acuity of over 0.8 (18/20 vision). Participants were asked to abstain from food and beverages containing caffeine or alcohol after 18:00 h on the day prior to the experiment and throughout the experimental period. The ethical considerations of the experimental protocol were reviewed and approved by the review board at the National Institute of Advanced Industrial Science and Technology (AIST) of Japan, according to the principles expressed in the Declaration of Helsinki. All participants gave prior written informed consent for participating in the study.
Procedure
Each participant was tested under three experimental conditions: (1) perturbation and nap (PER+Nap), (2) no-perturbation and nap (NoPER+Nap), and (3) perturbation and rest (PER+Rest). In the PER+Nap condition, participants conducted the visuomotor task with perturbation caused by the axis rotation of the mouse. After the task, participants took a 90 min nap. In the NoPER+Nap condition, participants conducted the visuomotor task without perturbation and then took a nap. In the PER+Rest condition, they conducted the perturbation task and, instead of a nap, took a 90 min break which was spent reading National Geographic magazines.
Participants slept in the laboratory on one day to familiarize themselves with the laboratory environment, in order to avoid the first-night effect. The procedure of the experiment was explained on this day, and participants practiced performing the tasks and responding to the questionnaires. One week later, they performed one of the three experimental conditions. Conditions were separated by at least seven days to avoid any carryover effects from the previous experiment. Moreover, any carryover effects of the tasks were controlled by counterbalancing the order of conditions among participants.
Participants arrived in the laboratory at 9:00 on the day of the experiment, as shown in Fig 1. After that, the electrodes were attached for polysomnography. At approximately 9:50, the first session (Test 1) of the visuomotor task (with no mouse-axis rotation, i.e., 0 degrees) was conducted to assess baseline visuomotor skills, which took approximately two min. At 10:00, the 5 min psychomotor vigilance task (PVT) was conducted to assess the vigilance level. Then, at approximately 10:20, the main task started, which included 10 consecutive 8 min sessions, each of which consisted of 4 trials (i.e., 40 trials in total), with one trial consisting of four epochs of 30 sec. After each session, participants responded to a questionnaire for approximately one min. Participants were allowed to take short breaks between trials when executing the main task. The total duration of the main task varied among participants due to the different times spent on short breaks, although it was no longer than 100 min. The second test session (Test 2) of visuomotor skills was conducted at approximately 12:00 noon, followed by the PVT at approximately 12:10. The purpose of the second test, with no axis rotation, was to assess the effects of practice or perturbation on the visuomotor skill; it was administered in all three conditions to assess differences in skill learning compared to the baseline (i.e., Test 1). After a lunch break (12:20-13:00), participants took a nap or rested in the sound-insulated laboratory from approximately 13:00 to 14:30 (i.e., 90 min) with the electrodes mounted. At approximately 14:30, participants took a 10 min break for using the restroom and for light exercise to reduce sleep inertia. The third test session (Test 3) started at 14:40, and all experimental procedures ended when it was completed. Test 3 was conducted to assess the effect of taking a nap on skill acquisition; the task had no axis rotation.
Performance task
Visuomotor task. In the visuomotor task, participants were requested to follow the moving dot shown on an LCD screen (diameter: 5 mm; velocity: 14.36 cm/s; range of X and Y axes: −100 to 100 pixels) with their dominant hand by using the mouse pointer. The mouse axis was rotated in 14-degree increments to make the task progressively more difficult if the mean distance between the moving dot and the mouse cursor improved by 3% to 20% in the perturbation conditions (PER+Rest and PER+Nap) compared to the previous trial. Moreover, the axis rotation was returned closer to the 0 position in 7-degree increments to make the task easier if the performance deteriorated by 50% to 90% compared to the previous trial. Axis rotation was conducted cumulatively throughout the ten sessions, whereas rotation per trial was conducted only once, such that the mean axis rotation was 63 degrees (range 0-112 degrees). Feedback on the degree of axis rotation (i.e., the challenge level) was given to each participant before a trial by showing the information on the screen. It was assumed that task difficulty in the perturbation conditions (PER+Rest and PER+Nap) would constantly be maintained at a level optimal to the skill level of each participant by using the above procedure. It was theoretically assumed that this setting, in which an individual's skill level and the task difficulty were matched, would enhance the flow experience when performing the task. Task difficulty in the no-perturbation condition (NoPER+Nap) was set to the minimum (i.e., no axis rotation) throughout the experiment. Visuomotor performance was determined by calculating the distance, in pixels, between the target and the point where the mouse cursor was located. All stimuli were displayed on a 23-inch LCD monitor with a 1920 × 1080 resolution (HP Elite Display E231) connected to a computer. The participants observed the stimuli at a distance of 57 cm. A program written in MATLAB controlled the experimental schedule, using the Psychophysics Toolbox extensions [30,31].
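To make the difficulty-adjustment rule concrete, the following Python sketch expresses the staircase logic described above (increase the rotation by 14 degrees after a 3-20% improvement in mean tracking error, decrease it by 7 degrees after a 50-90% deterioration, within the 0-120 degree range). It is an illustrative reconstruction, not the authors' MATLAB code, and the variable names are assumptions.

```python
def update_rotation(rotation_deg, prev_error, curr_error):
    """Adjust the mouse-axis rotation after a trial, following the rule in the text."""
    if prev_error is None or prev_error <= 0:
        return float(rotation_deg)                   # no reference trial yet
    change = (prev_error - curr_error) / prev_error  # > 0 means the error shrank (improvement)
    if 0.03 <= change <= 0.20:                       # improved by 3-20%: make the task harder
        rotation_deg += 14
    elif -0.90 <= change <= -0.50:                   # deteriorated by 50-90%: make it easier
        rotation_deg -= 7
    return float(min(max(rotation_deg, 0), 120))     # keep the rotation within 0-120 degrees

# Example: the mean dot-cursor distance drops from 40 px to 35 px (12.5% improvement)
print(update_rotation(56, prev_error=40.0, curr_error=35.0))  # 70.0
```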
Psychomotor vigilance test. The PVT [32] was used to assess the degree of vigilance in participants. The PVT is a simple visual reaction time (RT) test that requires a participant to respond as fast as possible (by using a key press) to a red target (a red number indicating the time in ms) appearing in the center of a screen. Interstimulus intervals varied between 2-10 s and participants performed the task for 5 minutes per session. Median RT was calculated for each participant and condition.
Questionnaire
Flow checklist. The Flow Checklist (FCL), which was originally developed in Japanese by Ishimura (2008) [33], was used to assess the flow state. The FCL consists of 10 items that are rated by using a 7-point Likert scale ranging from 1 (does not apply at all) to 7 (highly applicable). FCL items are categorized into three independent factors: "Confidence in competence," "Rising to the challenge," and "Positive emotions and immersion," with each factor consisting of 2-4 items. Items in Factor 1 (Confidence) include, "Everything is going well," "I am able to control situations," "I am confident in managing matters," and "I am in control of my behavior/movements." Items in Factor 2 (Challenge) include, "I feel my work is challenging" and "I am making progress toward reaching my goals." Items in Factor 3 (Immersion) include, "I feel time flies," "I am in a state of complete concentration," "I am completely immersed," and "I am enjoying my work." The reliability of the FCL with three factors has been confirmed in a previous study using factor analysis that demonstrated adequate Cronbach's alpha coefficients [33]. Moreover, the reliability of the FCL in the present study was confirmed by adequate Cronbach's alpha (α = 0.98 for Confidence; α = 0.98 for Challenge; α = 0.87 for Immersion). FCL was used to assess the flow experience during the main task of this study. Participants were asked to circle a number indicative of their current feelings after each session of the main task using the 7-point scale. Mean scores for each factor were calculated for each participant and condition. The scores obtained from the first and last sessions were omitted to avoid potential biases.
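For readers unfamiliar with the reliability coefficient reported above, the short Python function below computes Cronbach's alpha from a respondents-by-items score matrix; the example ratings are invented for illustration and are unrelated to the study's data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x n_items) array of item ratings."""
    scores = np.asarray(item_scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Four hypothetical "Confidence" items rated by five respondents on a 7-point scale
ratings = [[6, 6, 5, 6], [4, 4, 4, 5], [7, 6, 7, 7], [3, 3, 2, 3], [5, 5, 5, 6]]
print(round(cronbach_alpha(ratings), 2))
```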
Mood status. Visual analogue scales (VAS) were used to assess participants' mood during the main task on the dimensions of "anxious," "sleepy," "fatigued," "apathetic/vigorous," "confused," "angry," and "sad." Participants drew a line on the 100 mm scale to indicate their current moods after each session. Mean scores were calculated for each participant for each mood item. The scores obtained from the first and last sessions were omitted to avoid potential biases.
Polysomnography
The EEG (at Cz referenced to linked electrodes at the earlobes), the electrooculogram (EOG, from electrodes at the outer canthi) and the electromyogram (EMG, from electrodes at the chin) were recorded for standard polysomnography. The sampling rate of all signals was 1000 Hz (24-bit AD conversion) with time constants of 0.3 s for the EEG, 3.2 s for the EOG, and 0.03 s for the EMG. Electrode impedance was maintained below 10 kΩ. Electrophysiological data were recorded with a portable digital recorder (PolymateV AP5148, Miyuki Giken Co., Ltd, Japan).
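As a quick aid to interpreting the recording settings above, a time constant τ of a first-order (RC-type) high-pass filter corresponds to a −3 dB cutoff frequency of 1/(2πτ); this standard relation is not stated in the original text and is shown here only for orientation.

```python
import math

def highpass_cutoff_hz(tau_s):
    """-3 dB cutoff frequency (Hz) of a first-order high-pass filter with time constant tau (s)."""
    return 1.0 / (2.0 * math.pi * tau_s)

for channel, tau in [("EEG", 0.3), ("EOG", 3.2), ("EMG", 0.03)]:
    print(f"{channel}: tau = {tau} s -> cutoff ~ {highpass_cutoff_hz(tau):.2f} Hz")
```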
Sleep architecture was determined according to standard criteria [34,35] using the EEG recordings at Cz for successive 30-sec epochs. Total sleep time and the time spent in the different sleep stages (wake - W; sleep stages 1, 2, 3, 4 - S1-S4; slow-wave sleep - SWS, the sum of S3 and S4; REM sleep) were calculated in minutes for each day.
Statistical analysis
Task performance was analyzed by a two-way Condition (3) × Time (3) analysis of variance (ANOVA). To control for the Type 1 error associated with violation of the sphericity assumption, degrees of freedom greater than one were reduced by the Huynh-Feldt ε correction. Paired t-tests were applied as post hoc analyses. All analyses were conducted with the SPSS system for Windows, version 22.0.
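Although the analyses were run in SPSS, an equivalent two-way repeated-measures ANOVA can be sketched in Python as below. The data frame here is a randomly generated placeholder, and this particular routine reports uncorrected degrees of freedom, so the Huynh-Feldt correction used in the paper would still have to be applied separately.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Placeholder long-format data: one tracking-error value per participant x condition x test.
rng = np.random.default_rng(0)
rows = [{"participant": p, "condition": c, "test": t, "error": rng.normal(30.0, 5.0)}
        for p in range(1, 19)
        for c in ("PER+Nap", "NoPER+Nap", "PER+Rest")
        for t in ("Test1", "Test2", "Test3")]
df = pd.DataFrame(rows)

# Two-way Condition (3) x Time (3) repeated-measures ANOVA.
result = AnovaRM(df, depvar="error", subject="participant",
                 within=["condition", "test"]).fit()
print(result)
```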
Flow and mood during the task
Flow experience scores during the task were grouped and averaged for each of the three factors, "confidence," "challenge," and "immersion," and compared among conditions. The results indicated no significant differences among conditions, as shown in Table 1.
Sleep stages
There were no significant differences in sleep stages between PER+Nap and NoPER+Nap conditions (Table 3). This finding suggests that visuomotor perturbation did not affect the quality of sleep after the task.
Discussion
Results indicated that (1) the flow experience did not occur in the visuomotor task used in this study, which matched skill and challenge levels, (2) improvements in visuomotor skills occurred regardless of the task perturbation, (3) improvements in visuomotor skills were unrelated to the flow or mood states, and (4) improvements in visuomotor performance were unrelated to sleep. These findings suggest that improvements in visuomotor skills resulting from the fit between the internal skill and the environment, as well as the development of new skills, are independent of perturbations from axis rotation, and are also independent of mood, including the flow experience. Results also indicated that elapsed time rather than sleep was related to visuomotor skill acquisition. Contrary to the hypotheses of this study, we did not find any significant differences in flow and mood status among the conditions. This suggests that visuomotor perturbation has no influence on subjective experiences, including the flow experience. According to the flow theory, the flow occurs even under low challenges, when task difficulty and skill level are matched [1,2]. However, this contention was not supported by this study. Moreover, performance improvements observed in Test 3 suggest that the flow experience might be independent of the acquisition of skills, at least in visuomotor rotation tasks. Previous studies have attempted to apply the flow theory in applied settings, such as sports and education. These studies have reported that people with a higher level of motor skills requiring implicit learning, such as racing car drivers [36], soccer players [6], and piano players [3], experience the flow during these activities. However, we did not manage to induce the flow experience in this study by using a simple visuomotor task under an experimental situation. As previous studies have demonstrated, the flow experience can more easily occur in applied situations than in experimental settings, such as those of the present study. In addition, the flow experience is more likely to be induced when people engage in more complicated tasks, i.e., when people find organized complexity in a task [2], than when engaging in simple tasks, such as that used in this experiment. Feedback mechanisms could provide another possible explanation of the non-significant flow experience found in this study: a person's performance in the present task might not have induced a sufficient sense of achievement and reward to induce the flow, because feedback was weak compared to situations examined in previous research. In sports settings, for instance, a person can get clear feedback regarding activities and is able to know how well the task was performed [1]. Feedback is a critical aspect of the reward system inducing the flow experience [1,37]. Brain imaging studies using positron emission tomography (PET) have shown that the dopaminergic function in the striatum is related to individual differences in the flow experience (i.e., flow proneness) [37]. Feedback on performance is also related to the reward system, and therefore the flow experience could result from the following process: (1) perception of clear feedback about performance, (2) activation of the dopaminergic system in the brain, and then (3) generation of subjective experiences, including the flow.
In the present study, the lack of the first process (i.e., clear feedback on performance), caused by an obscure rule for judging success, might have resulted in a less intense flow experience during the visuomotor task, which could explain the non-significant difference between conditions in this study. It is suggested that this possibility should be investigated in future studies.
An important finding of the present study was that the effect of a nap on visuomotor skill acquisition was no different from that of resting (see Test 3 in Fig 2). Moreover, performance in the NoPER+Nap condition failed to improve in Test 3 compared to Test 2, which also suggested that the nap had no skill-enhancing effects (a floor effect in skill improvement, or participants' adaptation to the task, might be alternative explanations for the missing nap effect). Thus, the results of the study suggest that visuomotor skill acquisition did not depend on sleep, but rather on elapsed time. This finding is consistent with that of previous studies [21][22][23][24][25] and reinforces the assumption that sleep is irrelevant to improving implicit learning tasks, including visuomotor learning [12]. We have reported previously that contextual learning, an example of implicit learning, is not enhanced by a 20 min afternoon nap taken after the task [38]. Taking these findings into consideration, it is suggested that implicit skill learning, including visuomotor and contextual skills, is not related to sleep itself, but rather to the elapsed time.
The effect of the perturbation on visuomotor performance was observed in Test 2 (see Fig 2), because task performance improved more in the NoPER condition than in the PER conditions. This difference, however, disappeared in Test 3, which showed no significant differences among the three conditions. These results suggest that the performance improvements in the PER conditions in Test 3 could have been caused by the unmasking of the aftereffects of perturbation. In the PER conditions, participants finished their practice sessions with approximately 60 degrees of axis rotation on average. Then, participants manipulated the mouse with zero degrees of axis rotation (the normal mouse axis) in the test session that was conducted immediately after the perturbation trials, which might have resulted in inertia, or aftereffects, following internal skill fitting [39]. The participants had to readapt to zero degrees, but they might not have adapted to this sudden change quickly, which could have resulted in aftereffects of the axis perturbation. It is known that aftereffects of using a different internal skill (i.e., performance deterioration immediately after executing a different axis rotation task) disappear with elapsed time [40].
Participants in the axis rotation conditions had to develop a new internal skill for adapting to new combinations of visuomotor input, because new internal skills must be used to convert visual information to motor action when executing a visuomotor task. Additionally, participants had to use the basic skills for executing the task that they already possessed for controlling the mouse. The newly acquired skills and the basic skills might have contributed independently to mouse manipulation. The newly acquired skill for the axis rotation might have caused aftereffects and masked the improvement in basic skills in Test 2 under the perturbation conditions. In fact, increased performance improvements were observed in Test 3 compared to Test 2 (see Fig 2), suggesting that the aftereffects could have disappeared with elapsed time, regardless of sleep or rest conditions.
A major limitation of this study, however, was that there was no NoPER+Rest condition to serve as a control for the NoPER+Nap condition. As a result, the effect of the nap on the acquisition of visuomotor skills could not be precisely identified, although previous studies have indicated that the effect of this control condition is negligible [21][22][23][24][25]. Moreover, there is also the possibility that the visuomotor task was not appropriate for inducing the flow experience, because the perturbation used to control the task difficulty might have induced attentional interference or distraction. The distraction might have impeded participants' concentration on the task and prevented the flow experience. Unfortunately, in the present study we did not measure the degree of subjective distraction in executing the perturbation task, or the subjective task difficulty. It is suggested that this issue should also be considered in future studies.
In conclusion, (1) the flow experience does not occur when skill levels and challenges are matched, (2) the flow experience is independent of learning the visuomotor skill of using a mouse, and (3) sleep is unrelated to visuomotor skill acquisition. It is suggested that these findings make an important contribution to discussions on the relationship between the flow experience and implicit learning.
"year": 2017,
"sha1": "73a7f8684d16a0aec458895974c34f3e7509a79c",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0171907&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "73a7f8684d16a0aec458895974c34f3e7509a79c",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Deep Brain Stimulation: A Potential Treatment for Dementia in Alzheimer's Disease (AD) and Parkinson's Disease Dementia (PDD)
Damage to memory circuits may lead to dementia symptoms in Alzheimer's disease (AD) and Parkinson's disease dementia (PDD). Recently, deep brain stimulation (DBS) has been shown to be a novel means of memory neuromodulation when critical nodes in the memory circuit are targeted, such as the nucleus basalis of Meynert (NBM) and fornix. Potential memory improvements have been observed after DBS in patients with AD and PDD. DBS for the treatment of AD and PDD may be feasible and safe, but the evidence is still preliminary. In this review, we explore the potential role of DBS for the treatment of dementia symptoms in AD and PDD. Firstly, we discuss memory circuits linked to AD and PDD. Secondly, we summarize clinical trials and case reports on NBM or fornix stimulation in AD or PDD patients and discuss the outcomes and limitations of these studies. Finally, we discuss the challenges and future of DBS for the treatment of AD and PDD. We include the latest research results from Gratwicke et al. (2017) and compare them with the results of previous relevant studies, providing a timely update of the literature on DBS for dementia. In addition, we hypothesize that the differences between AD and PDD may ultimately lead to different results following DBS treatment.
INTRODUCTION
Dementia refers to a group of brain disorders that affect memory, reasoning, judgment, executive function, praxis, visuospatial abilities, and language that are not ascribed to delirium or another major psychiatric disorder (Bouchard, 2007). Various etiological subtypes of dementia exist, but two of the most common subtypes are Alzheimer's disease (AD) and Parkinson's disease dementia (PDD). It is estimated that AD affects 25 million people worldwide (Reitz et al., 2011). Dementia arises in 75% of patients with Parkinson's disease (PD) at 10 years after diagnosis and up to 83% at 20 years, according to the Sydney Multicenter Study (Hely et al., 2008). Given the immense burden that dementia places on patients and the health system, the search for effective treatment for dementia is paramount (Reitz et al., 2011). Numerous studies have demonstrated that damage to memory circuits may lead to dementia (Greicius et al., 2004;Junqué et al., 2005). Recently, the discovery that deep brain stimulation (DBS) may modulate activity in memory circuits has opened a new field of application of DBS, for the treatment of dementia (Freund et al., 2009;Kuhn et al., 2015a). The use of different DBS targets in the treatment of AD in humans has already shown some preliminary positive effects, such as a slowing of cognitive decline and increased connectivity in the brain (Laxton et al., 2010;Lozano et al., 2016). In this review, we discuss DBS treatment of the symptoms of dementia (including in AD and PDD) in detail.
DAMAGE TO MEMORY CIRCUITS MAY LEAD TO AD AND PDD
Although the pathogenesis of AD and PDD is still not completely known, studies indicate that dysfunction in memory circuits may explain AD and PDD (Greicius et al., 2004; Junqué et al., 2005). The fornix and hippocampus are part of the Papez circuit (Figure 1a). There is degeneration in the Papez circuit in AD (Toda et al., 2008). The default-mode network includes the medial prefrontal cortex and posterior cingulate cortex, with strong connections to the hippocampus and amygdala, whose activity is closely associated with episodic memory processing (Andrews-Hanna et al., 2014, Figure 1b). Compared with individuals experiencing healthy aging, activity in the default-mode network in patients with AD is decreased (Greicius et al., 2004).
Thus, AD and PDD are systemic disorders that affect memory and cognition through a connective network of cortical and cortical-related regions.
DBS FOR AD AND PDD
DBS is a surgical procedure that involves implanting electrodes into the brain. These electrodes can then be used to deliver electrical impulses into a specific area. DBS has been used to treat disorders in patients who are refractory to medications, including patients with PD, dystonia, depression, obsessive-compulsive disorder, and other psychiatric disorders (Lozano and Lipsman, 2013). DBS targeted to the subthalamic nucleus (STN) has positive effects on motor symptoms in PD. The incidence of PDD after STN-DBS is similar to that associated with PD patients receiving drug therapy (Aybek et al., 2007). As with AD, there is no definitive and effective treatment for PDD. The success of STN-DBS for the treatment of motor symptoms in PD has encouraged researchers to explore DBS for treating dementias. There is preliminary evidence to suggest that DBS may be a novel mechanism of memory neuromodulation in vivo in humans, via the targeting of critical nodes in the memory circuit such as the NBM and fornix (Table 1) (Freund et al., 2009; Bohnen and Albin, 2011; Kuhn et al., 2014).
NBM Stimulation
The downregulation of NBM cholinergic input leads to protein aggregation, which causes the pathophysiological cascade of cognitive decline in AD and PDD (Schliebs and Arendt, 2011). Regulation of the ascending basal forebrain projections of the NBM may augment cholinergic tone in the cortex. Thus, there is a rationale for targeting the NBM with electrical stimulation in order to influence memory function (Gratwicke et al., 2013). Turnbull et al. (1985) first implanted an NBM-DBS electrode into an AD patient, with no significant clinical benefit; however, after 6 months, they observed a partial arrest in the decline of cortical metabolic activity in the stimulated hemisphere compared with the unstimulated hemisphere. The limited clinical effect observed in the study by Turnbull et al. (1985) may be due to discontinuous NBM stimulation and inaccurate electrode placement, at least compared to current standards. The concept of NBM-DBS for dementia was shelved until more recently, when Freund et al. (2009) published a case report of a 71-year-old man with severe PDD. The patient was implanted with two electrodes in the STN, to treat motor symptoms, and two electrodes in the NBM, as an experimental treatment for the symptoms of dementia (Freund et al., 2009). STN-DBS improved his motor symptoms, while NBM-DBS improved his global cognitive functions, such as memory, attention, concentration, alertness, drive, spontaneity, and social communication (Freund et al., 2009). The mechanism for these improvements may be related to the stimulation of a largely degenerated nucleus, as low-frequency stimulation (20 Hz) can excite residual NBM neurons (Nandi et al., 2008; Wu et al., 2008). In the study by Turnbull et al. (1985), high-frequency unilateral stimulation (50 Hz) of the NBM in a patient with AD did not improve memory function, perhaps because of the unilateral nature of the stimulation. These studies led to a renewed interest in the potential of NBM-DBS as a symptomatic treatment for dementia. Gratwicke et al. (2017) recently conducted a randomized, double-blind, crossover clinical trial that involved evaluating the results of six patients with PDD who were treated with NBM-DBS. Low-frequency stimulation in the CH4i subregion of the NBM was safe in patients with PDD; however, there was no improvement in cognitive function in these patients (Gratwicke et al., 2017). The reasons for the differences compared with the results of the earlier case report (Freund et al., 2009) are considered below. Kuhn's research group conducted a series of trials of NBM-DBS in patients with AD. Kuhn et al. (2015a) conducted a pilot Phase I study, recruiting six patients with mild to moderate AD, who underwent bilateral low-frequency NBM-DBS. During a 4-week double-blind sham-controlled phase and a subsequent 11-month follow-up period, the primary outcome was assessed using the Alzheimer's Disease Assessment Scale-cognitive subscale (ADAS-Cog). After 1 year of stimulation, ADAS-Cog scores decreased by a mean of 3 points (95% CI = −6.1 to 12.1 points, P = 0.5). This indicated that the progression of the disease was rather slow, as a change of >3 points on this scale is required for it to be considered clinically significant. The authors hypothesized that DBS of the NBM may have a role in the observed effects by enhancing plasticity (by causing the release of neurotrophic factors) and stabilizing oscillation activity in memory-related circuits (Kuhn et al., 2015a).
Further research suggests that younger patients and those at earlier stages of the disease may be more likely to benefit from DBS (Kuhn et al., 2015b;Hardenacke et al., 2016). This may be related to the regulation of the cholinergic system. Deposition of fibrillar forms of amyloid beta (Aβ) protein contributes to AD (Querfurth and LaFerla, 2010). The activation of the cholinergic muscarinic M1 receptor decreases the levels of total Aβ in cerebrospinal fluid in patients with AD (Nitsch et al., 2000). Thus, upregulation of the cholinergic system may inhibit pathological protein aggregation. The cholinergic system is involved in the neurodegenerative process from disease onset and this system degenerates progressively over time, so early intervention to prevent cholinergic degeneration may result in better outcomes (Hardenacke et al., 2016). Imaging studies also suggest that patients with less atrophy benefit more from NBM-DBS, and the benefits of surgical intervention may be related to preserved fronto-parieto-temporal interplay (Baldermann et al., 2017). In addition, NBM-DBS may play a role in sensory memory through sensory gating of familiar auditory information, according to a two-case study (Dürschmid et al., 2017).
NBM-DBS improved cognitive function in a pilot Phase I study in patients with AD, while in an expanded PDD trial, NBM-DBS failed to improve cognitive function (Kuhn et al., 2015a; Gratwicke et al., 2017). We speculate that the differences between AD and PDD may ultimately lead to different DBS results. NBM cell loss and cholinergic deficits occurred earlier and were more widespread in patients with PDD compared to similar patients with AD (Bohnen et al., 2003; Gratwicke et al., 2013). As patients at earlier stages of the disease and with less atrophy benefit more from NBM-DBS (Hardenacke et al., 2016; Baldermann et al., 2017), it cannot be disregarded that the negative result for PDD may be due to the PDD patients having more widespread degenerative changes. We still need more evidence to confirm our hypothesis. In both trials, a limitation was that the patients continued acetylcholinesterase inhibitor therapy, so the potential physiological effects of NBM-DBS on the cholinergic system may have been partially disguised (Kuhn et al., 2015a; Gratwicke et al., 2017). However, this continuation of acetylcholinesterase inhibitor therapy was necessary for ethical reasons. In the pilot Phase I study in patients with AD, a limitation may be that the spatial range of the target areas was extended, rather than restricted solely to the CH4 area, in order to avoid vascular alterations such as intraparenchymal hemorrhage resulting from lesions to small vessels (Kuhn et al., 2015a). However, the CH4 area of the NBM may be well localized using intraoperative magnetic resonance imaging (MRI). The expanded PDD trial did not include a randomized control group of patients with PDD who did not undergo surgery (Gratwicke et al., 2017). Further trials should allow patients treated with DBS to be compared with patients who have not undergone surgery, to determine the effects of NBM-DBS on the natural history of PDD. However, an unexpected finding was the reduction in complex visual hallucinations after NBM-DBS (Gratwicke et al., 2017). The effects of NBM-DBS for the treatment of neuropsychiatric symptoms in Lewy body-related dementias need further research to be confirmed.
Fornix Stimulation
The fornix is a core white matter bundle in limbic circuits; it conveys cholinergic axons from the septal area to the hippocampus and plays a significant role in memory functions (Thomas et al., 2011). Hamani et al. (2008) were the first to report that stimulation of the fornix and hypothalamus may improve memory, although only one patient underwent DBS (to treat obesity) in their study. DBS did not affect the patient's appetite, but the patient felt an unexpectedly reproducible feeling of déjà vu, and detailed autobiographical memories were evoked. On the basis of this case report, a Phase I study of DBS for AD was performed, involving six patients with mild to moderate AD who underwent bilateral DBS targeting the fornix (Laxton et al., 2010). Bilateral fornix stimulation was safe and well tolerated. The patients' cognitive outcomes indicated a reduced decline according to the Mini-Mental State Examination (MMSE) during the year after surgery in 5/6 patients, while 4/6 patients showed improvement in ADAS-Cog scores at 6 months after surgery. In addition, there was an increase in temporoparietal glucose metabolism, and fornix-DBS was able to activate the brain's default-mode network (Laxton et al., 2010). After a year of DBS, increased cerebral glucose metabolism was observed in two orthogonal networks: a frontal-temporal-parietal-striatal-thalamic network and a frontal-temporal-parietal-occipital-hippocampal network, indicating increased connectivity in the brain (Smith et al., 2012). In addition, structural MRI indicated that fornix-DBS may increase the hippocampal volume after 1 year of treatment, suggesting the potential for long-term structural plasticity invoked by fornix-DBS (Sankar et al., 2015). Another group of researchers used restricted inclusion criteria, with nine patients that fulfilled the criteria, but only one patient accepted the operation (Fontaine et al., 2013). Increased mesial temporal lobe metabolism was observed after surgery, although cognitive scores remained stable (Fontaine et al., 2013). Based on these preliminary findings, researchers undertook a Phase II study involving a 12-month, sham-controlled trial of fornix-DBS in 42 patients with mild AD (Lozano et al., 2016). Positron emission tomography (PET) imaging revealed significantly increased cerebral glucose metabolism at 6 months, but the difference was not significant at 12 months. In addition, there were no significant differences in the primary cognitive outcomes at 12 months. Interestingly, there was an interaction of stimulation effects on cognition with age. In patients aged ≥65 years (patients with late-onset Alzheimer's disease [LOAD]) there was a trend of clinical benefit, while there was a trend of faster cognitive deterioration in patients <65 years old (patients with early-onset Alzheimer's disease [EOAD]). The cause of these age differences may be that younger AD patients had greater brain atrophy and metabolic deficits, which may make them less able to respond to DBS (Lozano et al., 2016). Another potential source of these differences may be that patients with autosomal dominant mutations, which are more common in EOAD, have an atypical and more aggressive disease progression (Viaña et al., 2017). Initial surgical outcomes from the Phase II study (Lozano et al., 2016) showed that accurate targeting of DBS to the fornix, without direct injury to it, was safe at 90 days in patients with mild AD (Ponce et al., 2016).
The mechanism of cognitive improvement remains unknown, but it may be due to DBS-induced hippocampal neurogenesis (Toda et al., 2008). DBS-induced changes in neurotrophic factors may lead to the observed dendritic arbor growth and enhanced nerve growth, which may contribute to the DBS-induced memory improvement (During and Cao, 2006;Tillo et al., 2012;Begni et al., 2017). A larger Phase III study is required to obtain more clinical evidence.
Although the Phase I trial of fornix-DBS (Laxton et al., 2010) showed improvement in cognitive function, increased cerebral glucose metabolism, and increased hippocampal volume, two of the six participants had worse performance after surgery. In the Phase II trial report of fornix-DBS in patients with AD, patients with LOAD showed a trend of clinical benefit, while there was a trend of faster cognitive deterioration in patients with EOAD (Lozano et al., 2016). Thus, in the design of subsequent clinical trials, the optimum AD stage for DBS intervention and the subgroup of patients with AD who are most likely to benefit need to be particularly considered. As some effects can only appear after long-term stimulation (Laxton et al., 2010), it is necessary to inform patients with AD about the possible timeframe in which improvements could occur. One study on fornix DBS including one patient provided information on potential AD pathology based on cerebrospinal fluid levels of tau and Aβ (Fontaine et al., 2013), while the Phase I and II trials of fornix-DBS lacked this. During recruitment of patients for future clinical trials, more information on AD stage (such as information related to hippocampal brain volume and cerebrospinal fluid levels of tau and Aβ) might be provided.
Ethical Challenges
Ethical challenges are always present when patients have dementia, as dementia symptoms often mean that informed consent cannot be obtained from the patients. Therefore, investigators need to select patients very carefully in order to make sure that the selected patients can consent to and tolerate such treatments. As EOAD patients with autosomal dominant mutations have atypical and more aggressive disease progression, requesting informed consent for genetic testing in EOAD patients should also be carefully considered (Viaña et al., 2017).
CONCLUSION
It is hypothesized that DBS could potentially be an effective treatment for AD and PDD by modulating activity in memory circuits. Two primary DBS targets that are being explored for the treatment of dementias are the fornix and the NBM. Fornix-DBS may stabilize activity in the Papez circuit and default-mode network (Laxton et al., 2010), while NBM-DBS may excite residual NBM neurons and stabilize oscillation activity in memory-related circuits (Kuhn et al., 2015a).
There is no comprehensively effective treatment for AD and PDD. Due to the inability to reverse the natural history of neurodegeneration in humans, DBS may serve as a supplemental treatment by regulating memory circuits. Optimal DBS parameters for treating dementias need to be based on experience from the DBS used in animal studies and for treating other diseases. For example, NBM-DBS frequency is selected based on the frequency used in previous animal studies. Low-frequency (20 Hz) stimulation was applied in patients with dementia, which excited residual NBM neuron cell bodies and increased acetylcholine release in the hippocampal region (Freund et al., 2009; Gratwicke et al., 2017). Fornix stimulation depends on the current density, rather than on the frequency of stimulation. In clinical trials, AD patients were stimulated with 2.5-3.5 V (Laxton et al., 2010; Fontaine et al., 2013), which is usually considered to be medium voltage when using DBS to treat psychiatric disorders. Stimulation of the fornix may enhance hippocampal-dependent neurogenesis (Toda et al., 2008).
Evidence for the use of DBS to treat dementias is preliminary and limited. Preliminary studies indicate that using DBS for the treatment of AD and PDD may be feasible and safe. However, the evidence of clinical efficacy remains uncertain, with some results being negative. The major limitation of the NBM-DBS studies discussed in this review was the small sample sizes used; the largest study that we reported on had a sample size of just 10, which gives limited statistical power. Sufficiently persuasive large-scale studies are needed. Moreover, precise intraoperative orientation allows patients to achieve better results and avoid unnecessary injuries. Finally, a framework for obtaining consent should be considered before surgery, which could involve requesting that EOAD patients sign an informed consent form for genetic testing and communicating to the patients that DBS may not be immediately effective at improving cognitive function. Future development of DBS might also lead to the identification of the most appropriate intervention time and the most effective stimulation parameters, as well as a better understanding of the underlying neurobiological mechanisms.
AUTHOR CONTRIBUTIONS
XW was the guarantor of integrity of the entire study. AD was responsible for the study concepts and design. YL and GL were in charge of literature research. QL prepared for the manuscript. WW edited the manuscript.
ACKNOWLEDGMENTS
XW is funded by the National Natural Science Foundation of China (81071065# and 81671103#).
"year": 2018,
"sha1": "68fe31a5936c401459ba7aacaa3f507894a4e383",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2018.00360/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "68fe31a5936c401459ba7aacaa3f507894a4e383",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Revisiting the Saffman-Taylor experiment: imbibition patterns and liquid-entrainment transitions
We revisit the Saffman-Taylor experiment focusing on the forced-imbibition regime where the displacing fluid wets the confining walls. We demonstrate a new class of invasion patterns that do not display the canonical fingering shapes. We evidence that these unanticipated patterns stem from the entrainment of thin liquid films from the moving meniscus. We then theoretically explain how the interplay between the fluid flow at the contact line and the interface deformations results in the destabilization of liquid interfaces moving past solid surfaces. In addition, this minimal model conveys a unified framework which consistently accounts for all the liquid-entrainment scenarios that have been hitherto reported.
What liquid should be used to clean a hydrophilic container filled with an organic fluid? This seemingly trivial question turns out to be of major importance in a number of industrial processes, including enhanced recovery of the so-called heavy oils. Elementary thermodynamic reasoning would suggest using an aqueous liquid making the smallest possible contact angle with the container walls. In this letter we show that the answer is actually more subtle when the dynamics of the fluid interfaces is considered.
From a fundamental perspective, liquid-liquid interfaces driven past solid substrates have been extensively used as a proxy to investigate nonlinear-pattern formation such as Laplacian growth processes [1][2][3][4]. Until now the overwhelming majority of the experiments have been performed in the drainage regime, where a low-viscosity fluid displaces a high-viscosity fluid which preferentially wets the solid. From the Saffman-Taylor fingers growing in Hele-Shaw cells [1,2,5] to the fractal patterns found in porous media [3,4,6], the salient features of all the drainage patterns are very well captured by coarse-grained front-propagation models that discard the very details of the interactions between the liquid and the solid walls. Conversely, the experiments on imbibition dynamics, where the less viscous phase preferentially wets the solid walls, have been scarce and have yielded somewhat puzzling results [7][8][9]. The first quantitative experiment in a prototypal Hele-Shaw geometry was performed only one year ago with colloidal liquids [9]. Confocal imaging revealed an instability of the contact line. However, the resulting entrainment of a thin liquid sheet does not qualitatively modify the shape of viscous fingers. In contrast, imbibition experiments in porous media had revealed a marked qualitative change in the morphologies of the invasion patterns [4,7,8]. Here, we revise the seminal Saffman-Taylor experiment using water to mobilize viscous oils filling hydrophilic microfluidic channels. We demonstrate a novel type of liquid-entrainment instability and the subsequent growth of unanticipated imbibition patterns. We first quantitatively characterize their shape and propagation dynamics. We then theoretically explain how the intimate coupling between the short-scale molecular interactions with the solid and the large-scale flows results in the destabilization of the two-fluid interface. This model conveys a unified framework to consistently account for all the liquid-entrainment scenarios that have been reported so far [9][10][11].
The experiment is thoroughly described in a supplementary document [12]. Briefly, it consists of injecting a coloured aqueous solution into a microfluidic Hele-Shaw channel filled with silicon oil of viscosity ηoil ranging from 5 cp to 3500 cp. The invasion patterns are observed with a CCD camera with a spatial resolution of 12 µm/pxl. To gain more knowledge about their 3D morphology, we also convert the transmitted-light intensity into the local water-pattern thickness with a resolution of 5 µm [13]. The channels are made by bonding two glass slides with double-sided tape. Prior to assembly, a thin film of thiolene-based resin is deposited on the two glass slides (NOA 81, Norland Optical Adhesives). Using NOA81 surfaces, the advancing contact angle of the aqueous solution immersed in silicon oil can be continuously varied from θ0 = 120 ± 2° down to θ0 = 7 ± 2° by means of a UV exposure [14]. The width and the length of the main channel are W = 5 mm and L = 4.5 cm respectively, Fig. 1. The channel height is constant over the entire device, and is left unchanged as the fluids flow, H = 100 µm. In order to avoid any possible modification of the wetting properties, we make sure that the main channel does not contact any aqueous liquid prior to the imbibition experiment. To do so, the device is filled following a systematic sequence of injection steps described in [12]. In addition, the chips are not recycled. More than 50 chips were used to produce the dataset introduced below.
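The intensity-to-thickness conversion itself is detailed in [13]; a common way to perform such a conversion for a dyed liquid is a Beer-Lambert-type calibration, sketched below in Python. The absorption coefficient, the reference intensities, and the idea of calibrating on the fully invaded 100 µm deep channel are assumptions made for illustration, not the authors' actual procedure.

```python
import numpy as np

def thickness_from_intensity(I, I0, alpha):
    """Beer-Lambert estimate of the dyed-water thickness (same units as 1/alpha).
    I: transmitted-light image; I0: reference image of the dye-free (oil-filled) channel;
    alpha: absorption coefficient of the dyed water."""
    ratio = np.clip(np.asarray(I, dtype=float) / I0, 1e-6, 1.0)  # guard against log(0)
    return -np.log(ratio) / alpha

# Calibrate alpha on a region known to be fully invaded (height H = 100 um), then convert.
I0, I_full, H = 200.0, 80.0, 100.0
alpha = -np.log(I_full / I0) / H
print(thickness_from_intensity(np.array([200.0, 140.0, 80.0]), I0, alpha))  # ~ [0, 39, 100] um
```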
We first show in Fig. 1A the result of a standard drainage experiment, where silicon oil of viscosity ηoil = 10³ cp is displaced in a hydrophobic channel (θ0 = 120°). The wedge-shaped entrance of the main channel promotes tip splitting in this typical Saffman-Taylor pattern [15]. The colors of Fig. 1A code for the local water thickness; the two fingers clearly fill the gap of the shallow channel. We also note that they grow along the side walls, which they partly wet. The very same type of finger shape was observed over a decade of flow rates: 0.2 µl/min < Q < 1.8 µl/min. In contrast, Fig. 1B and supplementary movie 1 correspond to an imbibition experiment (θ0 = 7°) performed at small flow rate. The marked difference between these two fingering patterns clearly reveals the impact of θ0 on the water-front dynamics. The branching level is significantly increased while the width of the fingers is reduced compared to the drainage regime. More surprisingly, increasing the water flow rate above Q⋆ = 0.4 µl/min, the imbibition dynamics does not reduce to the mere propagation of a sharp water front any more, see Figs. 1C and 1D and supplementary movies 2 and 3. Thin water films are entrained from the finger tip throughout the oil phase, and merge to form complex interconnected patterns. Increasing the flow rate, the number of narrow thin films increases. Using a microscope and a 20x objective we found that the films propagate along the top and bottom walls. At this point we shall note that this latter observation is at odds with the entrainment dynamics reported in [9], where thin films were entrained in between the two confining walls. Fig. 2A conveys a clear picture of the interface dynamics at large scales. The imbibition-pattern thickness averaged over the y-direction, ⟨h(x, t)⟩y, is plotted as a function of time and of the x-position along the channel. At t = 0, the flow rate is smaller than Q⋆, and a branched finger grows at a constant speed. As Q is increased above Q⋆, a thin water film is entrained and forms a rim. This rim is separated from the initial finger by an even thinner flat film. The rim moves at a constant speed ahead of the initial thick finger, which keeps on growing at a constant, yet smaller, velocity. The main water finger slowly meanders in the channel following the interconnected track left by the entrained films, thereby trapping small oil pockets in the channel. The topology of the resulting holey imbibition pattern, Fig. 1D, is not akin to the branched structure emerging from a Laplacian growth process as observed in all the drainage experiments.
To further characterize the pattern heterogeneities, we measured the instantaneous distribution P(h(x, y), t) of the film-thickness field. P(h, t) was found to be stationary, in agreement with the constant speed of the two fronts separating the three regions (finger, flat film and rim) in Fig. 2A. P(h) is typically composed of four peaks, that may overlap, Fig. 2B. The leftmost peak corresponds to the edges of the pattern where the water thickness is by definition vanishingly small. The second peak corresponds to the flat-film regions. The third and broadest peak is centered on the typical rim-thickness value. The rightmost narrow peak located at h = H corresponds to the main finger. The strong increase with Q of the leftmost-peak amplitude reflects the increase of the perimeter-over-surface ratio at high water injection rate. This ratio is plotted in the inset of Fig. 2B both for the main finger and for the entrained-film region. As Q exceeds Q⋆, this quantity drops discontinuously for the main finger as liquid entrainment suppresses branching. Oppositely, in the entrained-film region the ratio jumps to higher values as the holes increase the pattern perimeter while reducing its area. Fig. 2C also demonstrates that the mean thickness of the films ⟨h(x, t)⟩x,y decreases linearly with Q.
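For illustration, the two observables discussed above, the thickness distribution P(h) and the perimeter-over-surface ratio, can be extracted from a thickness map h(x, y) as sketched below in Python. The threshold, bin width, and synthetic disc-shaped film are arbitrary assumptions, not the analysis parameters of the experiment.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def pattern_statistics(h, h_min=1.0, bin_width=2.0):
    """Thickness histogram P(h) and perimeter-over-area ratio of the water pattern.
    h: 2D thickness map (e.g. in um); h_min: threshold separating water from oil."""
    mask = h > h_min                               # water-covered pixels
    area = mask.sum()
    edge = mask & ~binary_erosion(mask)            # pixels on the pattern boundary
    bins = np.arange(0.0, h.max() + bin_width, bin_width)
    P, _ = np.histogram(h[mask], bins=bins, density=True)
    return P, bins, edge.sum() / max(area, 1)

# Synthetic example: a 20 um thick circular film (radius 50 px) surrounded by oil (h = 0).
yy, xx = np.mgrid[0:200, 0:200]
h = np.where((xx - 100) ** 2 + (yy - 100) ** 2 < 50 ** 2, 20.0, 0.0)
P, bins, ratio = pattern_statistics(h)
print(ratio)  # ~ 2/r in pixel units for a disc
```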
To gain more physical insight, we henceforth describe the imbibition process in terms of the three dimensionless numbers that control the interface dynamics: the advancing contact angle θ0, the viscosity ratio of the two fluids η ≡ ηoil/ηwater, and the capillary number Ca = ηwater V/γ, where V is the interface velocity, and γ is the surface energy of the two-fluid interface, which we estimate to be γ ∼ 20 mN/m. Here we focus on the roles of Ca and η for a small contact angle value. Repeating the same experiment with oils of different viscosity, we measured the z-averaged tip velocity of the fingers from which a water film is entrained. These measurements define the experimental phase diagram shown in Fig. 2D. Unexpectedly, the critical capillary number Ca⋆, above which the meniscus is unstable, undergoes nonmonotonic variations with η and displays a maximum for η ∼ 100. This observation rules out a naive scaling argument which would consist in comparing the magnitude of the Laplace pressure and of the viscous stress in the oil (resp. in the water) phase at the macroscopic scale H. Such estimates would result in the scalings Ca⋆ ∼ 1/η (resp. Ca⋆ ∼ 1), neither of which is experimentally observed. To go beyond this oversimplified description, we now introduce a minimal model which accounts for the interplay between the fluid flows and the meniscus shape at all scales.
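For concreteness, the sketch below evaluates these two fluid-dependent control parameters for representative values; the interface velocity used in the example is an arbitrary illustrative number, not a measurement from this work.

```python
def control_parameters(eta_oil, eta_water, V, gamma):
    """Viscosity ratio and capillary number for a meniscus moving at speed V.
    Viscosities in Pa.s, V in m/s, interfacial tension gamma in N/m."""
    eta = eta_oil / eta_water        # viscosity ratio eta = eta_oil / eta_water
    Ca = eta_water * V / gamma       # capillary number built on the water viscosity
    return eta, Ca

# Example: 1000 cP oil, 1 cP water, interface moving at 1 mm/s, gamma ~ 20 mN/m
eta, Ca = control_parameters(eta_oil=1.0, eta_water=1e-3, V=1e-3, gamma=20e-3)
print(eta, Ca)  # 1000.0 5e-05
```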
For the sake of simplicity we ignore curvature effects in the xy-plane, and focus on steadily moving interfaces that are translationally invariant along the y-direction. The meniscus shape is determined by the local balance between the Laplace pressure and the normal-stress discontinuity across the fluid interface. Introducing the curvilinear coordinate along the interface, s, the local interface curvature κ, and the unit vector normal to the surface, n̂, it takes the compact form:
γ κ(s) = n̂ · Δσ · n̂,    (1)
where Δσ is the stress discontinuity at the interface. This equation couples to the Stokes equations for the two fluid flows. To solve this demanding problem, we built on [16,17], and make an additional ansatz which has proven to yield excellent agreement with lattice Boltzmann simulations [18]. In the frame moving with the contact line, the velocity and the pressure fields in both phases are assumed to be locally given by the Huh and Scriven solution [19]. Within this approximation, the stress discontinuity in Eq. 1 is readily computed, and the shape of the interface is fully prescribed by completing Eq. 1 with the boundary conditions: θ(s = 0) = θ0, θ(s = ℓ/2) = π/2, where ℓ is the curvilinear length of the meniscus. Eq. 1 is then recast into a 4-dimensional dynamical system, and this boundary value problem is effectively solved using an iterative collocation method as explained in [12]. The evolution of the meniscus shape with the capillary number is shown in Fig. 3A for η = 10³. Increasing Ca increases the meniscus length and reduces the apparent contact angle value. More quantitatively, we show in Fig. 3B that θapp, measured here at the point of minimal curvature, decays to 0 for a finite value of Ca above which no stationary solution is found for the meniscus profile: in agreement with our experimental findings, a low-viscosity-liquid film is entrained along the walls.
However, when considering the case of moderate and small viscosity ratios, we found another instability mechanism, see e.g. Fig. 3C for which η = 10⁻². As Ca increases, the apparent curvature of the meniscus decreases and changes sign. As a result the apparent contact angle increases toward π. Above a critical Ca value, again, no stationary solution is found. However, this dual instability yields a meniscus shape opposite to the one found for large η: a liquid sheet grows upstream between the two plates. The interface profile shown in Fig. 3C exactly corresponds to the one reported in [9] for colloidal liquids with a moderate viscosity contrast, η = 2.7. Therefore our numerical results solve the apparent contradiction between [9] and our experimental findings: viscous-finger menisci can experience two qualitatively different liquid-entrainment instabilities.
To further check the consistency of our predictions, we conducted experiments with a silicone oil of ultralow viscosity, η = 0.65. Even though this viscosity ratio prevents the formation of viscous fingers, we did observe a strong change in the liquid motion at sufficiently high Ca. Again, above a critical capillary number (open symbol in Fig. 2D), a liquid sheet is entrained between the two plates ahead of the main front, and subsequently re-wets the confining wall. As a result, oil droplets are trapped on the two solid surfaces, see supplementary movie 4. This observation is akin to the ones reported both in [9] and in [17] for air entrainment in a liquid bath. Together with our first experimental findings, this last experiment unambiguously confirms that thin films can be entrained from a driven meniscus according to two different scenarios set by the magnitude of the viscosity ratio η. We stress that both scenarios echo the intricate coupling between the two fluid flows at the contact line. Even when it is associated with the smaller viscosity, the flow in the wetting phase significantly alters the stability of the meniscus during imbibition. Sufficiently close to the contact line, due to the geometrical divergence of the strain rate, σ water compares to σ oil . In the absence of any intrinsic length scale for the interface dynamics in Eq. 1 and in the Stokes equations, the local modification of the meniscus curvature by the flows at the tip of the liquid wedge propagates up to the macroscopic scales.
These two entrainment scenarios define the stability diagram plotted in Fig. 4A. The stable meniscus region in the (η, Ca) plane is bounded by two critical curves that meet at η = η*. Below η* the entrained films propagate at the center of the gap, whereas above η* entrainment occurs along the confining walls. This prediction captures well the salient features of the experimental phase diagram shown in Fig. 2D. However, we did not achieve a quantitative agreement. For instance, η* was predicted to be of the order of 100, yet it was measured to be close to unity. Needless to say, this discrepancy is not really surprising given the simplification of the meniscus geometry in the y-direction, and is potentially due to pinning effects at the contact line, which we have ignored.
Two last comments are in order. Firstly, we provide a simple criterion to distinguish between the two liquid-entrainment scenarios. To do so, we consider the flow in a perfect wedge of angle θ 0 , which is a reasonable approximation in the very vicinity of the contact line. Below η*, in the low-viscosity liquid, the streamlines have a simple V-shape, Fig. 4C. Conversely, above η* they split into two recirculations, Fig. 4B. Simultaneously, the radial velocity of the fluids at the interface changes sign: η* is defined as the viscosity-contrast value at which the radial component of the interface velocity vanishes. Secondly, looking now at the pressure field in the oil phase, we can gain additional physical insight into the high-viscosity regime η > η*. Fig. 4B indeed reveals that the tip of the liquid wedge is pulled downstream by a marked depression spot located at z = 0 in the oil phase, thereby promoting entrainment past the solid wall.
In summary, we have demonstrated a novel class of forced imbibition patterns. They stem from the entrainment of thin films out of the interface between a wetting fluid and a high-viscosity fluid when driven past solid surfaces. In addition, we have introduced a minimal theoretical framework which accounts well for all the imbibition-induced meniscus instabilities reported so far. Our findings should provide useful guidelines for the formulation of effective additives for cleaning and enhanced oil recovery applications.
We thank B. Andreotti, J. Snoeijer and E. Santanach Carreras for illuminating discussions, and C. Odier for help with the experiments. D.B. acknowledges support from Institut Universitaire de France.

The experiment consists of injecting an aqueous solution (water, SDS 1 wt% and food dye) into a microfluidic Hele-Shaw channel filled with silicone oil (Rhodorsil oils of viscosity ranging from 5 cP to 3500 cP). The channels are made by bonding two glass slides with a double-sided tape cut with a precision plotting cutter (Graphtec Robo). Prior to assembly, a thin film of thiolene-based resin is deposited on the two glass slides (NOA 81, Norland Optical Adhesives). Using NOA81 surfaces, the advancing contact angle of the aqueous solution immersed in silicone oil can be continuously varied from θ 0 = 120 ± 2° down to θ 0 = 7 ± 2° by means of a deep-UV exposure [14]. The geometry of the resulting channels is sketched in supplementary figure 1. Their height is constant over the entire device, H = 100 µm, and does not change as the fluids are flown. The width and the length of the main channel are W = 5 mm and L = 4.5 cm, respectively. In order to accurately control both the wetting properties of the walls and the initial shape of the water-oil meniscus, the liquids are injected as follows. First, the channel is filled with silicone oil by applying a constant 200 mbar pressure which is maintained over the entire experiment. Then, the aqueous solution is injected at a constant flow rate with a precision syringe pump (Nemesys, Cetoni). The two fluids meet at the T-junction and form a flat interface, supplementary Figure 1. Once the interface reaches a stationary shape, the T-junction outlet is closed to trigger the invasion of the Hele-Shaw cell by the aqueous solution. In order to avoid any possible modification of the wetting properties of the hydrophilic surfaces by the water, the surfactant, or the dye molecules, the chips are not recycled.
B. Observations and measurements
The invasion patterns are observed with a 60 mm macro lens (Nikkor f/2.8G, Nikon) mounted on a 8 Mpxls, 14bit CCD camera (Prosilica GX3300), which yield a spacial resolution of 12 µm/pxl. We also convert the transmitted-light intensity into the local water-pattern thickness with a resolution of 5 µm.
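As a minimal sketch of this intensity-to-thickness conversion (the Beer–Lambert inversion detailed in the next paragraph), assuming the product of absorptivity and dye concentration is known from a separate calibration:

```python
import numpy as np

def film_thickness(I, I0, eps_c):
    """Local water-film thickness from the Beer-Lambert law,
    h = -ln(I/I0) / (eps * c0); eps_c stands for the calibrated product eps*c0."""
    ratio = np.clip(I / I0, 1e-6, None)   # guard against zero transmitted intensity
    return -np.log(ratio) / eps_c

# Hypothetical usage with synthetic images (eps_c in 1/um, thickness in um):
I0 = np.full((256, 256), 200.0)            # reference image of the oil-filled channel
I = I0 * np.exp(-0.01 * 50.0)              # uniform 50-um-thick dyed water film
h = film_thickness(I, I0, eps_c=0.01)
print(h.mean())                            # ~50 um
```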
The local thickness h(x, y, t) of the water films relates to the image intensity I(x, y, t) via the Beer-Lambert absorption law: h(x, y, t) = −(1/(ε c 0 )) ln[I(x, y, t)/I 0 (x, y, t)], where c 0 is the dye concentration, ε is the absorptivity, and I 0 is the transmitted intensity for a channel filled with silicone oil. The absorptivity ε was determined by performing experiments with colored water only, in channels of known height, using solutions with increasing concentrations. Note that the measurement of I 0 (x, y) allowed us to correct the spatial heterogeneities of the observation setup. We also performed a systematic correction of the (minute) temporal fluctuations of the light source by bringing the average light intensity outside the channel to the same value for all images. We benchmarked this method by measuring the increase of the volume of aqueous solution injected at constant flow rates, e.g. for the three imbibition experiments corresponding to Fig. 1 (Q = 0.5, 1.0 and 1.5 µL/min, respectively). These three curves, shown in supplementary figure 2, are perfectly fitted by straight lines, the slopes of which indeed correspond to the imposed flow rates within a 3% error (best linear fits: Q_measured = 0.486, 1.01 and 1.517 µL/min, respectively). | 2014-04-09T09:03:48.000Z | 2014-04-09T00:00:00.000 | {
"year": 2014,
"sha1": "be70aeffe5779d3d07dfaa9208aa6c7b865551c2",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1404.2397",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "be70aeffe5779d3d07dfaa9208aa6c7b865551c2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
119204763 | pes2o/s2orc | v3-fos-license | A comprehensive study of the magnetic, structural and transport properties of the III-V ferromagnetic semiconductor InMnP
The manganese-induced magnetic, electrical and structural modifications in InMnP epilayers, prepared by Mn ion implantation and pulsed laser annealing, are investigated in the following work. All samples exhibit clear hysteresis loops and strong spin polarization at the Fermi level. The degree of magnetization, the Curie temperature and the spin polarization depend on the Mn concentration. The bright-field transmission electron micrographs show that InP samples become almost amorphous after Mn implantation but recrystallize after pulsed laser annealing. We did not observe an insulator-metal transition in InMnP up to a Mn concentration of 5 at.%. Instead, all InMnP samples show insulating characteristics down to the lowest measured temperature. Magnetoresistance results obtained at low temperatures support the hopping conduction mechanism in InMnP. We find that the Mn impurity band remains detached from the valence band in InMnP up to 5 at.% Mn doping. Our findings indicate that the local environment of Mn ions in InP is similar to GaMnAs, GaMnP and InMnAs; however, the electrical properties of these Mn-implanted III-V compounds are different. This is one of the consequences of the different Mn binding energy in these compounds.
I. INTRODUCTION
After more than two decades, the study of dilute magnetic semiconductors has developed into an important branch of materials science. The comprehensive investigation of dilute magnetic III-V semiconductors has been stimulated by successful demonstrations of several phenomenological functionalities in these types of materials. For instance, III-V:Mn semiconductors exhibit properties like spin injection [1] and the control of magnetism by means of an electric field [2,3]. It has been demonstrated (with anomalous Hall signals) that the ferromagnetism in III-V semiconductor is carrier-mediated [4]. These properties of III-V:Mn semiconductors make them highly suitable for spintronic device applications [5].
Despite several outstanding achievements, the origin and control of ferromagnetism in III-V semiconductors is one of the most controversial research topics in condensed-matter physics today. GaMnAs is the most studied and well understood III-V dilute magnetic semiconductor. Currently there are two main competing theories under discussion for explaining the ferromagnetism in III-V semiconductors, particularly in GaMnAs. The first one states that a strong hybridization of Mn 3d-electrons with the GaAs valence band occurs. As a result, the Mn-derived band (states) merges with the GaAs valence band, giving rise to hole-mediated ferromagnetism through the p − d exchange interaction [6,7]. The second type states that the Mn states are split off from the valence band and they lie in an impurity band in the bandgap about 110 meV above the valence band maximum. In this scenario, the ferromagnetism is explained by the double-exhange interaction [8][9][10][11][12].
The Mn concentration and the hole concentration are the key factors in controlling the Curie temperature and the strength of the exchange interaction in III-V semiconductors, e.g. GaMnAs, GaMnP [4,13]. Mn-doped GaAs exhibits an insulator-metal transition at a certain Mn concentration [13]. This is because the isolated acceptor states of the Mn ions in GaAs are only 110 meV above the valence band maximum. With the introduction of the Mn ions, an impurity band forms, and with increasing Mn concentration the band broadens, which results in a valence-band-like conduction [14]. In the case of GaMnP, the isolated Mn acceptor state is around 440 meV above the valence band, four times greater than that of GaMnAs; therefore the band broadening and the thermal energy might not be sufficient to induce metallic behaviour in GaMnP even up to a high Mn concentration [4,15]. Recently, we have shown that ferromagnetic order can be induced in InMnP with a Mn concentration which is comparable to that of GaMnAs and GaMnP [16]. The aim is to study a system that has the Mn acceptor level in between those of the GaMnAs and GaMnP compounds in order to shed light on the impurity versus valence band debate in III-V semiconductors. InMnP is the most favourable choice to study this effect as it has a bandgap of 1.34 eV and an isolated Mn energy level of 220 meV [14].
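A quick back-of-the-envelope comparison, based only on the acceptor energies quoted above, illustrates why thermal activation of holes into the valence band is expected to become increasingly difficult along the GaMnAs → InMnP → GaMnP series:

```python
# Compare the thermal energy with the isolated Mn acceptor levels quoted above.
kB = 8.617e-5  # Boltzmann constant [eV/K]
for T in (4, 77, 300):
    print(f"T = {T:>3} K  ->  kB*T = {1e3 * kB * T:5.1f} meV")
for name, E_A in (("GaMnAs", 110), ("InMnP", 220), ("GaMnP", 440)):
    ratio = E_A / (1e3 * kB * 300)
    print(f"{name}: E_A = {E_A} meV  ->  E_A / (kB * 300 K) ~ {ratio:.0f}")
```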
In this work, we show how the variation in Mn concentration in InMnP modifies its magnetic, transport and structural properties. This is the first time that such a detailed and systematic study of the role of Mn in InMnP is carried out. This work contributes to a comprehensive understanding of impurity versus valence band picture in Mn-doped III-V semiconductors.
The implantation energy of the Mn+ ions was chosen in such a way that the penetration depth remains near 100 nm in InMnP. After implantation the samples were annealed by a XeCl excimer laser using an energy density of 0.40 ± 0.05 J/cm² for a single pulse duration (30 ns). It is well known that after annealing a Mn-rich surface is created [4], which should be removed for further sample characterization. We have used a (1:10) HCl solution to etch the Mn-rich top surface from the InMnP samples. The Mn concentration measured by Auger electron spectroscopy (not shown here) in sample D was approximately 5 at.%. The measured (estimated) Mn concentrations in samples A, B, C, and D are 1, 2, 3 and 5 at.%, respectively. A SQUID-VSM (Superconducting Quantum Interference Device - Vibrating Sample Magnetometer) was used to measure the magnetization of the InMnP samples, while magnetotransport measurements were performed using a Lakeshore system. X-ray Absorption Spectroscopy (XAS)/X-ray Magnetic Circular Dichroism (XMCD) measurements were performed at the beamline UE46/PGM-1 at BESSY II (Helmholtz-Zentrum Berlin). High-resolution x-ray diffraction measurements were performed at the European Synchrotron Radiation Facility (ESRF-BM20) in Grenoble, France. To locally analyze the microstructure of the Mn-implanted InP, transmission electron microscopy (TEM) investigations were performed using an image-corrected Titan 80-300 microscope (FEI). Besides bright-field imaging, selected area electron diffraction (SAED) was used to obtain structural information. Since the smallest available selected area aperture of 10 µm covers a circular area with a diameter of about 190 nm, both the Mn-implanted surface layer as well as the InP substrate contribute to the SAED patterns. Prior to each TEM analysis, the specimen mounted in a double-tilt analytical holder was placed for about 30 s into a Model 1020 Plasma Cleaner (Fischione) to remove organic contamination. Classical cross-sectional TEM specimens were prepared by sawing, grinding, dimpling, and final Ar ion milling. Ultraviolet Raman measurements were performed in the back-scattering geometry using the 325 nm line of a He-Cd laser in the z̄(y′, y′)z configuration.
III. RESULTS AND DISCUSSION
In this section, the magnetic, structural and transport properties of the four InMnP samples will be discussed in detail. An overview of the samples, including their Mn concentration, Curie temperature, magnetization, activation energies, strain, magnetoresistance, value of the constant C (see Eq. 2) and XMCD signals, is given in Table I.
A. Structural properties
The microstructure of the InMnP sample D with a Mn concentration of 5 at.% was investigated by transmission electron microscopy (TEM). Figure 1(a) shows a cross-sectional bright-field TEM image of the as-implanted sample. The gray color of the approximately 90 nm thick surface layer on the single crystalline InP substrate indicates InP amorphization due to Mn implantation. However, the remaining diffraction contrast points to crystalline inclusions within the amorphous layer which were confirmed by selected area electron diffraction (SAED). In particular, Figure 1(b) presents a SAED pattern with diffraction information from both, the InP substrate and the surface layer. Since the diffraction rings with uniformly distributed intensity are crossing the spots, which are caused by the single crystalline InP substrate, the inclusions in the surface layer are randomly oriented InP nanocrystallites.
Laser annealing of the as-implanted sample changes the microstructure of the surface layer, as can be seen in the bright-field TEM micrograph in Figure 1(c). A SAED pattern including diffraction information from the layer and the substrate indicates a quasi-epitaxial InP regrowth during laser annealing. However, stacking faults are introduced during this growth process, which can be inferred from the TEM micrograph in Figure 1(c) and the lines in the SAED pattern of Figure 1(d). Hence, the crystalline quality of the InMnP epilayer is comparable to that of a laser-annealed GaMnP epilayer [18]. Furthermore, the angle between two satellite peaks (2∆θ_P, as shown in Fig. 2) is used to calculate the thickness of the InMnP epilayer, as this method has already been used for other layered crystalline systems [19]. The thickness of the InMnP epilayer is calculated using the relationship L = λ/(2∆θ_P · cos θ_B), where 2∆θ_P and θ_B are the angle between two satellite peaks and the Bragg angle, respectively. The thickness of sample D obtained using this formula is 95 ± 5 nm, in agreement with SRIM simulation (not shown here) and TEM (see Fig. 1(a)) results. Note that the shift of the InMnP peak with the Mn concentration is not as significant as in GaMnAs [20]. Note also that Mn interstitials and In/Ga vacancies can induce an expansion of the lattice constant [20]. This expansion can compensate the shrinking induced by the substituted Mn in InP. A similar behaviour upon increasing Ar irradiation doses was observed before and was attributed to an increased crystal lattice distortion [25]. It should be mentioned that even for a Mn concentration of 5 at.% we do not observe the specific spectrum of amorphous InP in the annealed samples, which is a proof of the good crystallinity. Moreover, we do not observe the occurrence of the coupled plasmon-LO mode which was reported in the case of GaMnAs. This indicates that the Mn ions incorporated in the crystal lattice do not contribute to a significant increase in the free carrier density. This observation is in agreement with the highly insulating behaviour and the hopping transport mechanism discussed in Section C.
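The thickness estimate from the satellite-peak spacing is easily reproduced numerically; the wavelength and angles below are placeholders (the actual synchrotron values are not quoted in the text), chosen so that the result is of the order of the 95 nm reported for sample D.

```python
import numpy as np

def layer_thickness(wavelength, delta_theta_p, theta_b):
    """L = lambda / (2 * delta_theta_P * cos(theta_B)), angles in radians."""
    return wavelength / (2.0 * delta_theta_p * np.cos(theta_b))

lam = 1.54e-10                      # assumed x-ray wavelength [m]
theta_b = np.deg2rad(31.7)          # hypothetical Bragg angle
dtheta_p = np.deg2rad(0.055)        # hypothetical satellite-peak spacing
print(f"L ~ {layer_thickness(lam, dtheta_p, theta_b) * 1e9:.0f} nm")
```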
The incorporation of Mn ions into the GaAs lattice is more effective than in the InP lattice.
As a result, these two compounds have different electronic and magnetic characteristics such as T_c, magnetization, carrier concentration, mobility, etc. Furthermore, the coupled plasmon-LO phonon mode strongly depends on the free carrier mobility and concentration in III-V semiconductors [24,26]. Therefore, we do not expect identical behaviour of the LO phonon mode in these semiconductor compounds. Concave-shaped M-T curves have been observed in other diluted magnetic semiconductor compounds [27,28] and have also been predicted theoretically [29]. Das Sarma et al. have theoretically shown that the concavity of the M-T curve is related to the ratio of the carrier concentration to the impurity concentration [29]. If the carrier/impurity ratio is low, the shape of the M-T curve will be more concave, and vice versa. We expect that Mn ions replace indium ions at their lattice sites and generate holes in the InMnP system.
However, due to compensation centers, interstitial Mn or antisites as in GaMnAs [30,31], which generally contribute electrons to the system, the free hole concentration in InMnP will be reduced. Consequently, the carrier (hole)/impurity ratio is significantly reduced and hence a concave-shaped M-T curve is observed in most of the InMnP samples. The reduced magnetization may also be related to the formation of a magnetically inert layer with a large Mn concentration due to surface segregation during the pulsed laser annealing [34]. Furthermore, a self-compensation process (interstitial Mn, antisites) as in GaMnAs [30,31] can further depress the ferromagnetism in InMnP. A magnetic anisotropy perpendicular to the sample plane, i.e. with the magnetic easy axis out-of-plane, has been observed in our InMnP samples (see inset (a) of Fig. 5 for an example). It can be explained as follows. The substitution of indium by manganese ions results in a smaller lattice constant of InMnP compared to bulk InP. Therefore, the InMnP layer is under a tensile strain. Due to the biaxial tensile strain, the valence band splits and the lowest valence band assumes a heavy-hole character [35]. The hole spins are oriented along the growth direction when only the lowest valence band is occupied, since in this case they can lower their energy by coupling to the Mn spins, and hence a perpendicular magnetic anisotropy is expected [36][37][38][39]. The tensile strain in the InMnP layer is confirmed by the XRD results and it increases with the Mn concentration. On the other hand, the magnetic anisotropy in ferromagnetic semiconductors also depends on temperature [36] and on hole concentration [40][41][42][43] (see Ref. [45] for more details). It is well known that an oxide surface layer is formed in Mn-doped III-V semiconductors [44,46]. Therefore, a HCl solution is used to remove the oxide layer prior to the XMCD measurements. The XMCD signal measured in this study increases with the Mn concentration, see Fig. 6(b). The XMCD sum rules provide information on the spin and orbital moments in the system [48,49]. The spin moment calculated using the XMCD data and the sum rules for sample D is ∼ 1 ± 0.1 µB/Mn, while the orbital moment is negligibly small.
Both the Curie temperature and the XMCD at the Mn L-edge increase with Mn concentration. Indeed, the XMCD signal also shows a similar temperature dependent behavior as the magnetization measured by SQUID-VSM [16]. Both methods probe the same ferromagnetic phase. Therefore, we can prove that the ferromagnetism in InMnP is intrinsic and due to substituted Mn ions.
C. Transport properties
The carrier mediated nature of ferromagnetism and the conduction mechanism in InMnP can also be studied by transport methods (magnetoresistance, anomalous Hall effect and resistivity). The four samples A, B, C, D were used for magneto-transport measurements.
We carried out temperature-dependent resistivity and magnetoresistance measurements using a van der Pauw geometry under a magnetic field perpendicular to the sample plane. Figure 7 presents the sheet resistance as a function of inverse temperature for samples A, B, C and D measured under zero magnetic field. All samples exhibit semiconductor-like sheet resistance up to a Mn concentration as high as 5 at.% and down to the lowest measurable temperature. The temperature-dependent sheet resistance of InMnP is quite similar to that of GaMnP [4], but different from that of GaMnAs with a comparable Mn concentration [50].
It is worth noting that the two different slopes, at low and high temperatures, in the temperature-dependent sheet resistance of all four samples hint at different conduction mechanisms at low and high temperatures. Therefore, to describe the thermally activated conduction processes at low and high temperatures, we have used the model of Ref. [4] given in Eq. 1, with pre-exponential constants ρ 1 , ρ 2 and activation energies corresponding to the two temperature regimes of InMnP.
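A sketch of how such a two-regime activated sheet resistance can be fitted is given below. The functional form is an assumption (two activated conduction channels acting in parallel, fitted on a logarithmic scale); the exact expression of Eq. 1 from Ref. [4] is not reproduced in the text, and the data below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617e-5  # eV/K

def log_sheet_resistance(T, rho1, rho2, E1, E2):
    # Assumed model: two thermally activated channels in parallel,
    # returned as log10(R_sheet) so the fit is better conditioned.
    sigma = np.exp(-E1 / (kB * T)) / rho1 + np.exp(-E2 / (kB * T)) / rho2
    return -np.log10(sigma)

T = np.linspace(20, 300, 60)
rng = np.random.default_rng(0)
logR = log_sheet_resistance(T, 1e4, 5e6, 0.08, 0.005) + rng.normal(0, 0.02, T.size)

popt, _ = curve_fit(log_sheet_resistance, T, logR,
                    p0=(1e4, 1e6, 0.1, 0.01), maxfev=20000)
print("rho1, rho2 =", popt[:2], "  E1, E2 [eV] =", popt[2:])
```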
We have performed magnetoresistance (MR) measurements on samples A, B, C and D in order to investigate the magnetic-field response of the resistivity. In particular, the low-temperature magnetoresistance data also provide information on the transport mechanism in the system. The results are presented in Figure 8 (see also Table I). The negative magnetoresistance in InMnP can be explained as follows [52]: when a magnetic field is applied, it results in an antiferromagnetic coupling between the Mn ion and the hole (and a ferromagnetic one between two Mn ions). Consequently, the wavefunctions of the Mn-hole complexes expand, and the increased overlap of these wavefunctions results in a negative magnetoresistance of InMnP. We have used a model valid in the high-field limit, i.e. λ ≪ a, where λ and a are the magnetic length and the localization radius, respectively, which has been used for insulating GaMnAs previously [52]. The model is given in Eq. 2.
where λ is the magnetic length. This model is used to fit the low-temperature magnetoresistance data for three InMnP samples and the results are shown in Fig. 8(a). The fits show that the model is applicable to the magnetoresistance of InMnP in the high-field range of 1-5 T. The fitting parameter C has a negative sign, due to the expansion of the wave functions in a magnetic field, and its magnitude increases with the Mn concentration, from 12910 for sample B to 20465 for sample D (see Table I). The dependence of the fitting parameter C on the Mn concentration indicates that under a magnetic field the overlap of the wave functions of the Mn-hole complexes increases, which results in the large negative magnetoresistance of sample D. At low temperature the magnetization reaches its saturation and the decrease in resistivity under a magnetic field is very weak. A certain conductivity nevertheless remains in the InMnP samples at low temperature, which can be related to hopping conduction in the impurity band. The highly insulating character of the InMnP samples (low mobility) also indicates that the Fermi level resides in the localized impurity band. In this scenario, hopping conduction dominates within the impurity band because it requires very low energy (a few meV). Therefore, we can conclude that hopping within the impurity band is the main conduction mechanism in InMnP at low temperatures, which is also an indication of a separate Mn-induced impurity band within the bandgap of InP.
IV. CONCLUSION
We have prepared the dilute ferromagnetic semiconductor InMnP by Mn ion implantation and pulsed laser annealing. Transmission electron microscopy in combination with electron diffraction was used to study the microstructural changes produced by the Mn ions in InP. From the micrographs it is seen that the InP samples become almost amorphous after Mn implantation but recrystallize after pulsed laser annealing. The thickness of the InMnP epilayers is found to be 95 ± 5 nm, as estimated by SRIM and confirmed by HRTEM and XRD results. The Curie temperature of the InMnP samples depends on the Mn concentration and reaches 40 ± 2 K when the Mn concentration is as high as 5 at.%. The shape of the M-T curves does not follow the usual mean-field behaviour; instead it reflects a strong compensation contribution of Mn ions in the InMnP epilayers. The saturation magnetization depends on the Mn concentration and reaches 10 emu/cm³ for sample D.
The large XMCD signal in InMnP samples reflects a strong spin polarization at the Fermi level in this system. A comparison of XAS or XMCD results obtained from InMnP and GaMnAs compounds indicates that Mn ion has a hybridized ground state (Mn 2+ ) in InMnP as in GaMnAs.
However, the transport results suggest that in spite of a similar chemical environment in these III-V semiconductors (InMnP, GaMnAs, GaMnP, InMnAs), the degree of Mn incorporation in these compounds is different. These compounds, therefore, have different electrical and magnetic properties. The transport mechanism in InMnP is investigated by varying the Mn concentration. We did not observe an insulator-metal transition in InMnP up to a Mn concentration of 5 at.%; instead, all InMnP samples show insulating characteristics.
Magnetoresistance results obtained at low temperatures support the hopping conduction mechanism in InMnP. We find that the Mn impurity band remains detached from the valence band in InMnP up to 5 at.% Mn doping. Our findings indicate that the local environment of Mn ions in InMnP is similar to those of GaMnAs, GaMnP and InMnAs. It also seems that an unmerged Mn impurity band is formed in the bandgap of InP. This work might be helpful in understanding the family of III-V:Mn semiconductors, i.e., the different Mn binding energies in the different III-V compounds should be considered.
The authors thank Stefan Facsko for AES measurements. The work is financially supported by the Helmholtz-Gemeinschaft Deutscher Forschungszentren (VH-NG-713). | 2015-01-15T08:23:05.000Z | 2015-01-15T00:00:00.000 | {
"year": 2015,
"sha1": "3efacded2137a0721fb68af37275fadf88741fa1",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1501.03597",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3efacded2137a0721fb68af37275fadf88741fa1",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119460981 | pes2o/s2orc | v3-fos-license | Helicity locking in light emitted from a plasmonic nanotaper
Surface plasmon waves carry an intrinsic transverse spin, which is locked to its propagation direction. Apparently, when a singular plasmonic mode is guided on a conic surface this spin-locking may lead to a strong circular polarization of the far-field emission. Specifically, an adiabatically tapered gold nanocone guides an a priori excited plasmonic vortex upwards where the mode accelerates and finally beams out from the tip apex. The helicity of this beam is shown to be single-handed and stems solely from the transverse spin-locking of the helical plasmonic wave-front. We present a simple geometric model that fully predicts the emerging light spin in our system. Finally we experimentally demonstrate the helicity-locking phenomenon by using accurately fabricated nanostructures and confirm the results with the model and numerical data.
the plasmonic vortices (PV), with the singularity coinciding with the cone center, the behavior is radically different. [5][6][7] The sharpening cone leads to a decrease of the effective index and the mode accelerates until it detaches from the surface due to the full momentum matching with the free space. At this specific point an intriguing polarization anomaly can be observed. The radiation emitted from the metal nanotip appears to be fully polarized in one circular state corresponding to the vortex topology. Here we experimentally demonstrate this unique phenomenon and analyze it using a recently discovered plasmonic property - the transverse spin. 8 The ability to control and analyze the polarization state at the nanoscale will play a pivotal role in nanophotonics, optical encryption and quantum optics. Moreover, local excitation of a chiral optical field may be utilized for single-molecule circular dichroism probing. 9 A coupling of the circular polarization handedness to the orbital angular or linear momentum of SPs was previously widely discussed in terms of the "plasmonic spin-orbit interaction". 10,11 This interaction resulted in intriguing spin-based phenomena such as the plasmonic spin-Hall effect, 12,13 spin-dependent plasmonic routing 14 and guiding, 15 spin-based imaging, 16 excitation of spin-dependent PVs 10,15 and spin-dependent far-field beaming. 17,18 These phenomena stemmed from a Doppler-like transfer of a longitudinal optical spin (polarization handedness) to the plasmonic orbital angular momentum manifested by its helical phase-front. Nevertheless, it was recently shown that a plasmonic wave can also carry a transverse spin angular momentum (TSAM) 8,19-21 whose role in light-SP coupling might be crucial. The TSAM of a surface wave propagating in the x direction on a metal-air interface is given by Eq. 1, where k = k_SP x̂ + iκẑ is the complex-valued evanescent wave vector, κ = (k_SP² − k_0²)^(1/2), k_0 = 2π/λ_0 is the vacuum wavenumber and k_SP is the in-plane plasmonic wavenumber. 8 This transverse spin results from the rotation of the resultant of the vectorial plasmonic field, E_SP = E_p(ẑ − iχx̂), in a plane transverse to the propagation. Remarkably, the TSAM is independent of the polarization and solely arises from the amplitude ratio between the longitudinal and the transverse field components that is directly obtained from Maxwell's equations, s_⊥ = χ = κ/k_SP. Accordingly, s_⊥ is locked to the SP propagation direction and can appear with a single handedness. This property has already been utilized for spin-dependent unidirectional plasmonic excitation, 22,23 for nanoparticle tweezing [24][25][26][27] and for the study of quantum plasmonic effects. 20 Although the spin-orbit interaction reported in previous papers referred solely to the longitudinal spin-to-orbital AM transfer, we note that considering the transverse AM is essential in non-paraxial systems. Specifically, when an SP mode is guided along a smooth 3D surface and then perfectly impedance-matched to the free space, its TSAM can be fully coupled to a pure circular polarization (CP) state. Here we experimentally observe a gigantic symmetry breaking in the CP state of light emerging from a tapered nano-cone placed in the center of a plasmonic vortex lens (PVL). One of the circular components experiences almost an order-of-magnitude suppression that results from an adiabatic acceleration of the PV along the nanotip followed by a perfect matching to the far-field.
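As a brief aside (not part of the original derivation), one common way to obtain the magnitude of this transverse spin is directly from the complex wave vector of the evanescent surface wave; with the normalization implied by Eq. 1, this reads, in LaTeX form:

```latex
% Transverse spin of an evanescent wave with k = k_SP \hat{x} + i\kappa \hat{z}:
\[
\mathbf{s}_{\perp} \propto
\frac{\mathrm{Re}\,\mathbf{k} \times \mathrm{Im}\,\mathbf{k}}{|\mathrm{Re}\,\mathbf{k}|^{2}}
= \frac{k_{SP}\,\kappa\,(\hat{\mathbf{x}} \times \hat{\mathbf{z}})}{k_{SP}^{2}}
= -\frac{\kappa}{k_{SP}}\,\hat{\mathbf{y}},
\qquad
|s_{\perp}| = \chi = \frac{\kappa}{k_{SP}}
            = \frac{\sqrt{k_{SP}^{2} - k_{0}^{2}}}{k_{SP}} ,
\]
% consistent with the field picture E_SP \propto \hat{z} - i\chi\hat{x},
% whose tip traces an ellipse of ellipticity \chi in the x-z plane.
```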
This phenomenon is explained using a purely geometric consideration of the TSAM transfer along the tip and is shown to be inherent in any smooth 3D SP-guiding system. Therefore the striking importance of the discovered effect in fundamental physics, as well as in a wide field of nano-photonic and quantum applications, is evident. The SP-launching grating consists of spiral slits engraved in a 300 nm metal layer. The spiral radii are given by R_m(ϕ) = R_0 + m·ϕ/k_SP, where R_0 is the smallest radius of the groove, m is the topological order of the spiral and ϕ is the azimuthal angle. The tip is located at the center of the spiral, as schematically presented in Figure 1a. The structure is illuminated from the bottom with CP light whose spin number is denoted as σ_i = +1 for the right-handed and σ_i = −1 for the left-handed state. The incident beam excites a PV whose E_z field component is characterized by a helical phase front 10,15,17,18,28 exp(ilϕ), with the topological charge l = m + σ_i. First, we consider a spiral with m = 2 that generates PVs with l = 3 or l = 1 depending on σ_i. The propagation of these plasmonic modes along the tip is calculated using COMSOL Multiphysics (see the insets of Figure 1b). We note that the l = 1 mode beams out exactly at the tip end while the mode with l = 3 detaches at some cut-off height. This behavior can be explained by the gradual phase-velocity increase of the mode until the full momentum matching to the free space. 28 We calculate the transverse field distribution slightly above the detachment points (shown by the yellow dashed line in the insets). The complex field values are used to calculate the local polarization ellipse that is graphically presented on top of the intensity distribution (Figure 1b). The emerging polarization handedness σ_o = +1 is shown in red while σ_o = −1 is in magenta.
Note that the emerging modes are both right-handed and very close to the circular state. In other words, our system emits σ_o = 1 independently of the incident handedness. Apparently, most of the previously discussed axially symmetric scattering architectures, such as circular or coaxial apertures, were shown to couple PVs to radially polarized beams, which naturally consist of almost equal amounts of right and left CP. 10,15,17 Here we link the emission of a single-handed polarization to the TSAM of the plasmonic mode. Therefore we look closer at the geometry of the system, which is shown in Figure 2; the inset on the right shows the local tangential frame with the mode's propagation vector β and the azimuthal wavenumber l/ρ.
A spiral phase-front of the PV (represented as the blue line) propagates on a smoothed cone with a local wave vector k SP . The transverse and the longitudinal components of the local plasmonic field are denoted in the Figure as E t and E l , respectively. As can be seen the incidence plane follows the phase-front and gets tilted as the mode propagates upwards. Accordingly, the z projection of the local TSAM (green arrow) grows. To simplify the analysis we treat the conic surface as being comprised of short cylindrical sections of a constant radius ρ(z). 7 In this geometry the plasmonic mode propagation constant β can be determined by separately solving the Helmholtz equation in cylindrical coordinates in the dielectric and metal regions, and imposing the continuity of the tangential components of the fields. 5,29 From the calculated mode we then derive the effective refraction index using N ef f = β/k 0 .
The dependence of N_eff on the cylinder radius is depicted in Figure 3a. The mode with l = 0 experiences a drastic slowing down as the tip radius decreases, which corresponds to the well-known effect of energy localization at the apex. 3,4,30 This is the manifestation of the plasmonic "black hole": the energy does not leave the tip but concentrates around the high-index tip apex. For PVs with l > 0 the index decreases towards the apex and the modes accelerate up to the free-space phase velocity (N_eff = 1), where they finally detach.
In a local tangential reference frame (e_z, e_ϕ) the complex plasmonic wave vector is represented as k_l = βe_z + (l/ρ)e_ϕ + iκ_l e_ρ. By substituting k_l into Equation 1, the TSAM of the plasmonic wave can be calculated. In order to study the far-field helicity we consider only the z component of the TSAM, where κ_l = [β² + (l/ρ)² − (2π/λ_0)²]^(1/2) and k_SP² = β² + (l/ρ)². The mode detaches from the tip where the effective index becomes unity. The emerging spin at that point is then s_z = −[k_0²(ρ/l)² + 1]^(−1). In Figure 3b we consider the mode l = 1 propagating on a cylinder and analyze the transverse fields at some height. We use the calculated fields in the circular basis, E_± = E ± iZ_0H (Z_0 is the vacuum impedance), to derive the local field helicity current, 31 P_± = ±ε_0 k_0 Im[E*_± × E_±]/|E_±|², and depict its integrated value in Figure 3b. The values of s_z obtained for the full tip geometry are also reported in Figure 3b as yellow rhombuses. We note that the values of s_z calculated at the real tip fully correspond to the ones extracted from the modal analysis of the cylinder. Moreover, it is clearly visible that for small radii both the far-field ellipticity P_+ and s_z approach unity, which indicates the emission of a pure circular polarization. On the other hand, for large radii s_z tends to zero. This indicates that if the field were emitted from this point its polarization would contain equal amounts of right-handed and left-handed CP. The latter behavior is expected from the metal due to its non-duality and was widely discussed in Ref. 32. Figure 4a and Figure 4c present the distributions for each polarization combination (σ_i, σ_o), while in Figure 4b and Figure 4d we show the calculated intensities.
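For illustration, the closed-form detachment-spin expression quoted above can be evaluated numerically at the experimental wavelength (λ_0 = 785 nm, given in the Methods); the radii below are arbitrary and only meant to show how |s_z| approaches unity for small ρ and vanishes for large ρ:

```python
import numpy as np

lam0 = 785e-9                      # free-space wavelength used in the experiment
k0 = 2 * np.pi / lam0
for l in (1, 3):                   # topological charges considered in the text
    for rho_nm in (10, 50, 100, 500, 1000):
        rho = rho_nm * 1e-9
        s_z = -1.0 / ((k0 * rho / l) ** 2 + 1.0)   # s_z = -[k0^2 (rho/l)^2 + 1]^-1
        print(f"l = {l}, rho = {rho_nm:>4} nm  ->  s_z = {s_z:+.2f}")
```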
In the case of m = 0, a plasmonic helical mode with l = ±1 is excited, as expected from angular momentum conservation in a circularly symmetric structure. 10,15 The PV then propagates up to the end of the cone where it emits purely circularly polarized light with σ_o = σ_i, as can be elucidated from the strong contrast between the diagonal and anti-diagonal distributions in Figure 4a.
Nevertheless, by using a spiral with m = 2, PVs with l = 1, 3 are generated depending on the incident spin. According to our model the emerging s_z should approach unity and a single CP is expected in the far-field. The experimental and the calculated intensity distributions in this system are shown in Figure 4b. Here the upper-row intensity clearly exceeds the ones in the lower panels. Note that both presented structures generated a PV with l = 1 in the right CP state, although they were initially excited by different circular components.
The result of the two experiments clearly shows that the emerging light helicity is locked to the PV handedness, σ o = sgn(l), due to the non-trivial 3D geometry of the guiding surface.
The overall average ratio of the maximum intensities of the distinct states was found to be 7, as opposed to the ratio of 20 expected from the simulations. We associate this discrepancy with a tiny step at the base of the fabricated nano-tip, resulting in a weak scattering of light.
In summary, we experimentally presented and theoretically analyzed the helicity-locking during a beaming of a plasmonic mode from an adiabatically tapered metallic cone. This behavior was attributed to the coupling of the plasmonic TSAM to the far-field due to the smooth geometry of the tip and the effective mode acceleration. This phenomenon, however, obeys the angular momentum conservation prescribed by the 3D geometry of our system. We note that in contrast with previous works [22][23][24][25][26][27]33
Supporting Information Available
The following files are available free of charge.
Fabrication
The fabrication of the samples is based on a procedure described by De Angelis et al 1 .
The principle relies on FIB-generated secondary-electron lithography in optical resists and allows the preparation of high aspect ratio structure with any 3D profile. The final structure comprises of a 6.2µm high base-smoothed gold tip on a 150 nm gold layer where PVLs are milled. In order to prepare such a complex architecture a multi-step fabrication process have been optimized. First of all a 5 / 23 nm Ti / Au bilayer has been deposited, by means of sputtering, on a 100 nm thick Si3N4 membrane. On this conductive layer, s1813 optical resist has been spun at 1500 rpm and soft-baked at 90 • C for 8 minutes. The resist thickness of 11µm is achieved by tuning the concentration, spinning time and velocity. On the back of the membrane a thin layer of silver (about 10nm) is then deposited by means of sputtering in order to ensure the necessary conductibility of the sample for the successive lithographic step. The membranes are then patterned from the backside using a Focused Ion Beam (Helios Nanolab600, FEI company), operated at 30keV (current aperture: 80pA, dwell time: 500µs). The tip-like shape has been obtained by patterning successive disks with decreasing diameter and correcting the dose applied for every disk, thus resembling the expected tip profile. (To note that the first milled disk present a high thickness (around 80nm) that will be filled in the successive metallic growth). Due to the high dose of low-energy secondary electrons induced by the interaction between the ion beam and the sample, a 30nm thick layer of resist, surrounding the milled disks, becomes highly cross-linked and insoluble to most solvents. After patterning, the sample is developed in acetone, rinsed in isopropanol and dried under gentle N 2 flow. The back side silver layer has been then removed by means of rapid HNO 3 rinse. At this stage we get a high dielectric tip surrounded by a metallic substrate. Since we need a base-smoothed tip on a 150nm thick gold layer, an additional layer of metal has been grown of the substrate by means of galvanic deposition (0.12Amp DC).
The galvanic layer is grown up to the tip base so ensuring a very smooth geometry. After the galvanic deposition, a 40nm thick layer of gold is deposited by sputtering the sample, tilted 60 • with respect to the vertical and rotated, guaranteeing an isotropic coating on both the sidewalls and the base. (In order to avoid any possible direct transmittance from the tips, the back of them has been filled, by means of electron beam induced deposition, with a 200nm thick layer of platinum). Finally, in order to prepare the sample with the desired m-order PVL surrounding the tip, a FIB milling process has been performed on the sample creating the spiral gratings without affecting the quality of the metallic tip. To maximize the structure symmetry we use a set of m spirals, each one is rotated by 2π/m with respect to the next one, in such a way that the radial distance between the two adjacent grooves stays λ SP = 2π/k SP , thus improving the coupling of normally impinging light.
Optical Setup
The transmission far-field measurements are performed using a free-space optical setup comprising an illuminating CW pigtail laser operating at λ_0 = 785 nm. The laser beam is collimated and polarized by a vertically oriented LP followed by a QWP rotated at 45°. The light is pre-focused on the back side of the PVL by means of a 20X microscope objective (NA = 0.25). The imaging is performed using an infinity-corrected 60X objective (NA = 0.85) followed by a 100 mm tube lens and an additional 1.5X magnification telescope. The emerging spin state is tuned by an additional QWP-LP set placed in the imaging path. The resulting image is captured by a PIXELINK CMOS industrial camera (PL-B771U, MONO). | 2017-03-07T08:46:33.000Z | 2017-03-07T00:00:00.000 | {
"year": 2017,
"sha1": "0b2adb8c893aba4eedbbf6aa883c3158bb4e9f4b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0b2adb8c893aba4eedbbf6aa883c3158bb4e9f4b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
267090580 | pes2o/s2orc | v3-fos-license | Immune and molecular landscape behind non-response to Mycophenolate Mofetil and Azathioprine in lupus nephritis therapy
Lupus nephritis (LN) represents one of the most severe complications of systemic lupus erythematosus, leading to end-stage kidney disease in the worst cases. Current first-line therapies for LN, including mycophenolate mofetil (MMF) and azathioprine (AZA), fail to induce long-term remission in 60–70% of the patients, evidencing the urgent need to close the molecular knowledge gap behind the non-response to these therapies. A longitudinal cohort of treated LN patients, including clinical, cellular and transcriptomic data, was analyzed. Gene-expression signatures behind non-response to different drugs were revealed by differential expression analysis. Drug-specific non-response mechanisms and cell-proportion differences were identified. Blood cell subsets mediating non-response were described using single-cell RNASeq data. We show that AZA and MMF non-response implicates different cells and regulatory functions. Mechanistic models were used to suggest add-on therapies to improve their current performance. Our results provide new insights into the molecular mechanisms associated with treatment failures in LN.
Main
Systemic Lupus Erythematosus (SLE) is a heterogeneous autoimmune disease with a wide range of severe clinical manifestations.Lupus nephritis (LN) represents one of the most severe complications affecting up to 50% of patients and can lead to end-stage kidney disease, being an independent risk factor for mortality 1,2 .LN is a clinically silent disease mostly detected when irreversible kidney damage is already installed, so effective treatment on time is crucial to stop further progression of the disease.
Immunosuppressant drugs including mycophenolate mofetil (MMF) and azathioprine (AZA) are widely used as induction and/or maintenance therapies for LN, along with initial high-doses of standard of care drugs (SOC), including glucocorticoids (GC) and hydroxychloroquine (HC).Belimumab and calcineurin inhibitors are also prescribed for LN.However, the e cacy of this therapy varies enormously between patients, and 60-70% of LN patients have not reached a long-term remission and a complete renal response one year after the treatment 3,4 .Additionally, chronic exposure to SOC leads to serious side effects due to drug-induced toxicity 5 , although immunosuppressive drugs potentially enhance renal recovery and facilitate quick tapering of corticosteroids 3,4 .Therefore, there is an urgent need to delve into the molecular knowledge-gap behind the non-response to these drugs with the goal of reducing therapeutic failure and improving long-term prognosis.
Treat-to-target approaches in which personalized molecular patterns guide therapeutic decisions are rapidly growing in the medical eld, primarily in oncology 6,7 , but remain largely unmet in clinical rheumatology 8 .In this context, some gene variants have been proposed to be used to adjust AZA doses in individual patients 9 while inosine monophosphate dehydrogenase activity has been used as biomarker of MMF e cacy following organ transplantation 10 .In this regard, mycophenolic acid (MPA) levels in blood have been correlated with disease state and with the appearance of ares, being associated with persistent remission rates for concentrations higher than 3.5 mg/L.It has also been observed that even if MMF doses are increased, the concentration of MPA does not always increase, with no direct correlation between the two 11 .Therefore, individual differences should always be considered, including race, age, body weight or even individual cellular or molecular patterns for a potentially more personalized therapeutic dosing 12 .
Omics-based personalized approaches offer a major promise towards high-de nition medicine, allowing to dissect the heterogeneity behind the disease, de ning new generation biomarkers to tailored treatment strategies [13][14][15][16] .Molecular dysregulation in SLE uctuates with a non-linear clinical course and unpredictable patterns of ares, hindering the development of effective and robust predictive biomarkers for both diagnosis and drug responsiveness in cross-sectional cohorts 17 .
In the present study, a longitudinal cohort of responder and non-responder patients to LN drugs was retrospectively analyzed in order to fill the knowledge gap behind non-response mechanisms, combining transcriptomic, cellular and clinical frameworks. Our results can provide support to a future personalized medicine that is increasingly close. The possibility to anticipate therapy failures, helping to refine the first-line choice of treatment for LN patients, can be decisive in reducing the progression of nephritis and the consequent chronic kidney damage.
Patients and clinical information
Gene expression, serological, demographic and clinical information were longitudinally collected for responder and non-responder patients to MMF, AZA, HC and SOC (HC and HC + GC).The treatment scheme followed is summarized in Fig. 1a.The number of patients and samples for each group along with patient characteristics are presented in Table 1 and expanded in Supplementary Table 1.No differences were found in age and sex in both groups, but non-responders to MMF showed a signi cantly higher disease activity and an enrichment in African-American ancestry.Higher doses of MMF, prednisone and acetylsalicylic acid (ASA) were observed in non-responders to MMF increased by standard medical decisions in the face of ineffective response to lower doses.Responders to HC and SOC showed an enrichment in non-steroid anti-in ammatory drugs (NSAID) usage.The serological pro les showed differences in C3 and C4 levels, previously associated to renal damage 18 , and anti-dsDNA titers for all drugs (Table 1).Interestingly, anti-dsDNA titers were increased in non-responders, except for MMF nonresponders, who showed increases in anticardiolipin IgA antibodies.Regarding disease activity-related clinical components, a signi cantly higher incidence of SLEDAI proteinuria and other renal manifestations were observed in non-responders considering all visits 19 Initially, lists of differentially expressed genes (DEG) between responder and non-responder samples to each immunosuppressant drug were compared using the Systemic Lupus Erythematosus Responder Index (SRI-4) and the protein/creatinine ratio in urine as response measurements by gene set enrichment analysis (GSEA) 20 .These two response measurements gave highly signi cant signatures between responder/non-responder groups of patients, and both signatures were similar when using either measurement (enrichment score (ES) = 0.93 and p-value = 4.39e-11 for up-expressed genes and ES = -0.94 and p-value = 5.31e-9 for down expressed genes) (Supplementary Fig. 1a).SRI-4 was used henceforth due to greater data availability.A total of 46, 157, 24 and 11 DEGs between responder and non-responder samples to MMF, AZA, HC and SOC, respectively, with a Bonferroni-corrected p-value < 0.05 were obtained (Fig. 1b).DEG for HC and SOC were extensively shared (Fig. 1b), while up and down-regulated DEG for MMF were down and up-regulated for AZA, respectively, suggesting opposite gene-expression patterns between non-responders to these two medications (Fig. 1c).Only 2 genes were found signi cant differentiating response and non-response for both drugs, CLEC4C and C15orf54 (Fig. 1b), but in opposite directions.
CRIP1, CD180 and several tubulin-related genes, and on the other hand, LILRA5, NME8 or S100P were the genes most up and down regulated, respectively, in non-responders to MMF (Fig. 1d and Supplementary Fig. 1b).The ratio between mean expressions of up and down regulated genes signi cantly differentiated responder and non-responder patients to MMF, being these expressed in the opposite direction to the gene expression in patients responder or non-responder to AZA, SOC and HC (Supplementary Fig. 1c), suggesting that the gene-signature is exclusively associated with MMF treatment.For AZA, we found genes BANK1 or TLR10 are most down-regulated, and some interferon type I (IFN-I) regulated genes are up-regulated in non-responders (Fig. 1e and Supplementary Fig. 2a).Most of DEGs for SOC and HC were shared (Fig. 1f-g.and Supplementary Fig. 3a-b), mainly because patients with SOC are treated with GC in combination to HC, highlighting TRIM51 or MUC20 in responders.Expression ratios for AZA DEGs signi cantly and speci cally distinguished responders from non-responders to AZA, not to other drugs (Supplementary Fig. 2b), and similar conclusions were obtained for SOC and HC (Supplementary Fig. 3cd).
The top 10 DEGs based on adjusted p-value were used as features to build machine learning (ML) based models with nested 10-fold cross-validation to predict response to each drug. As described in Supplementary Fig. 4a, we obtained Matthews Correlation Coefficients (MCC) of 0.7, 0.81, 0.63 and 0.56 for MMF, AZA, HC and SOC, respectively (Supplementary Fig. 4b). Thus, these gene signatures accurately predicted the response to each drug, better so for AZA and MMF.
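A minimal sketch of such a nested cross-validation scheme is shown below, assuming a regularized logistic regression as classifier (the specific model used in the study is not stated in this section) and random placeholder data standing in for the top-10 DEG expression matrix.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer, matthews_corrcoef
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 10))         # expression of the top-10 DEGs (placeholder)
y = rng.integers(0, 2, size=120)       # responder (1) vs non-responder (0)

mcc = make_scorer(matthews_corrcoef)
inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)  # tunes C
outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)  # estimates MCC

model = GridSearchCV(
    make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    param_grid={"logisticregression__C": [0.01, 0.1, 1.0, 10.0]},
    scoring=mcc, cv=inner,
)
scores = cross_val_score(model, X, y, scoring=mcc, cv=outer)
print(f"nested-CV MCC = {scores.mean():.2f} +/- {scores.std():.2f}")
```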
The functionality of DEG was investigated by the quantitative set analysis for gene expression modular analysis (QuSAGE).This analysis revealed over-regulation of B cell and dendritic cell (DC)-related processes, and an under-regulation of NK, CD4 + T cells and IFN-I signaling in non-responder patients to MMF.IFN-I and DC-related functions were over-represented in non-responders to AZA, while B cell and T cell activation and differentiation were under-represented for this drug.For SOC and HC, B cell functions were down-regulated in non-responders, and more general biological processes, like cell division and regulation of immune signaling were up-regulated (Supplementary Table 2).So, DEGs for each drug revealed differences in the immune processes occurring in different cell populations.
Cellular profile influence on response rates
In silico deconvolution of bulk transcriptomic data was performed to obtain the proportions of 20 different blood cell types in the samples, showing significantly lower CD8+ T cell and higher memory B cell proportions in non-responder patients to MMF (Fig. 2a), in line with the previous functional analysis of the DEGs. Memory B cells and plasma cells (PC) were increased in AZA (Fig. 2b) and HC non-responder patients, in addition to a decrease in CD4+ T cells and NK cells for non-responders to HC (Fig. 2c). Next, samples were stratified based on their cell proportions (see Methods). Certain cell proportions contributed significantly to the response to each drug. Significantly higher proportions of responders were associated with low numbers of memory B cells, PCs and DCs, while the greater the proportion of T and NK cells, the greater the response ratios (Fig. 2d-g).
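The deconvolution step can be sketched as a reference-based regression; the specific tool used in the study is not named in this section, so a generic non-negative least-squares fit against a synthetic signature matrix is shown purely for illustration.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_genes, n_celltypes = 500, 20
signature = rng.gamma(2.0, 1.0, size=(n_genes, n_celltypes))   # reference profiles
true_props = rng.dirichlet(np.ones(n_celltypes))                # simulated ground truth
bulk = signature @ true_props + rng.normal(0.0, 0.05, n_genes)  # one bulk sample

coef, _ = nnls(signature, bulk)
props = coef / coef.sum()          # normalize to cell-type proportions
print("max absolute error:", np.abs(props - true_props).max().round(3))
```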
To further dissect blood cell types and their influence on the response to each drug, public single-cell RNAseq data from PBMC of 41 SLE patients were analyzed. First, cells were clustered and the major blood cell types were identified (Supplementary Fig. 5a-b). Second, clustering rounds were performed for each major cell type. Using the AddModuleScore function from the Seurat R package 21, maximum gene-expression scores for the up- and down-regulated DEGs were calculated across subclusters within each major cell type for each drug, in order to identify the major cell contributors to the non-response (Fig. 2h). Interestingly, the non-response up-regulated DEGs (up-DEG) for MMF and AZA were expressed in different cell subsets. This suggests that different cell subsets are involved in non-response to each drug. For MMF, non-response up-DEGs were mainly expressed in PCs, B cells, NK cells, plasmacytoid dendritic cells (pDCs) and CD14+ monocytes, either for all cells or for some subclusters of cells within them. For AZA, megakaryocytes, CD14+ and CD16+ monocytes showed the highest scores. On the other hand, non-response up-DEGs for HC and SOC were not primarily expressed by any specific cell type, while only pDCs and CD14+ monocytes expressed the genes up-regulated in responders.
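A hedged Python analogue of this module-scoring step (the study itself used Seurat's AddModuleScore in R) could look as follows, using scanpy's score_genes on a public PBMC object and a placeholder gene list standing in for a drug-specific non-response signature:

```python
import scanpy as sc

adata = sc.datasets.pbmc3k_processed()          # public placeholder dataset
# In the real analysis the drug-specific up-DEG list would be used here;
# fall back to arbitrary genes if the example genes are absent from this object.
deg_up = [g for g in ("CRIP1", "CD180", "LILRA5", "S100P") if g in adata.var_names]
deg_up = deg_up or list(adata.var_names[:10])

sc.tl.score_genes(adata, gene_list=deg_up, score_name="nonresponse_up_score")

# Mean module score per annotated cluster, to flag candidate cell subsets:
print(adata.obs.groupby("louvain")["nonresponse_up_score"].mean().sort_values())
```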
Cell subpopulations behind non-response to LN drugs at single-cell level Now, clusters associated to each major cell type were subdivided to increase granularity.B cells were divided into 6 clusters (Fig. 3a).The non-response signature for MMF and to a lesser extent for AZA, was mainly expressed by the Bcell_cl2 (Fig. 3b-c).Bcell_cl2 was identi ed as a cluster of cells phenotypically similar to age-associated B cells (ABCs, also called DN2 cells) (Fig. 3d), characterized by the expression of CXCR3, ITGAX and TBX21.The top-10 DEGs between clusters are shown in Fig. 3e.Bcell_cl2 together with Bcell_cl5 (with a DN3 phenotype) over-expressed IFN-I stimulated genes (ISG) such as IFIT3, IFI27 and IFITM (Fig. 3e-f).Of the 3 clusters of PCs (Fig. 3g), the non-response signature to MMF was expressed in all, but more in PC_cl1 (Fig. 3h), which in turn showed greater IFITM and ISG expression scores (Fig. 3i).In the case of pDCs, most cells expressed the MMF-non response signature (Fig. 3j).
Additionally, substantial differences between the MMF and AZA signatures were observed in the myeloid compartment. CD14+ cells were divided into 8 clusters (Fig. 4g). A high MMF non-response signature was observed in CD14+_cl2 and CD14+_cl6 (Fig. 4h). CD14+_cl6 showed a high score for adhesion functions and an intermediate monocyte phenotype (Fig. 4i). Since these cells strongly express CD1C, CLEC10A and class I HLA genes, they likely contain type 2 conventional dendritic cells (cDC2) (Fig. 4k). CD14+_cl2 reflected a CD16+ non-classical monocyte phenotype and complement-mediated phagocytosis (Fig. 4i), expressing complement proteins such as C1QA and C1QB (Fig. 4k). Functionally, these cells are poised to adhere and migrate to the kidney tissue, differentiate into macrophages and interact with immune complexes 22. An independent and quite large cluster of CD16+ monocytes was defined (Supplementary Fig. 5a-b), showing exclusive and markedly increased expression of the AZA non-response signature (Fig. 4j). The AZA non-response signature was also expressed in CD14+_cl4, which showed antigen presentation and migration functions (Fig. 4h-i). Differences regarding IFN were also found. AZA non-response-related monocyte clusters showed high IFITM and ISG gene expression, whereas only high IFITM expression was observed for clusters expressing the MMF non-response signature (Fig. 4h). The same occurred for CD8+ T cell clusters, although the MMF non-response score in CD8+ T cells was weaker (Supplementary Fig. 6a-c). The AZA non-response signature was also highly expressed in a non-IFN-related subcluster of megakaryocytes (Supplementary Fig. 6d-g). Thus, we showed that clusters expressing the MMF and AZA non-response signatures co-expressed ISG and IFITM gene signatures (Supplementary Fig. 7).
Finally, the HC and SOC non-response signatures were not particularly expressed in any specific subclusters. Instead, the expression scores were distributed across cells from all subclusters. On the other hand, the non-response up-regulated genes for HC and SOC were highly expressed in cDC2 and in pDCs (Supplementary Fig. 6h-k).
Druggability of regulatory networks of cells influencing non-response
As certain specific cell types express the non-response signatures to MMF and AZA, we aimed to identify regulatory signaling across these cell subsets as potential therapeutic targets. We used the CellChat R package 23 to identify regulatory signaling networks between cell clusters specifically related to non-response to MMF and AZA, followed by an analysis of their potential druggability using the Hipathia R package 24 (see Methods). Here, a theoretical response score was estimated for each patient from our cohort by comparing changes at the transcriptome level before and after in silico inhibition of targets from each identified regulatory network. The CC-chemokine ligand (CCL) signaling network was found to regulate the clusters expressing the non-response signature to AZA, that is, CD14+_cl4 and CD16+ monocytes (Fig. 5a). For clusters related to MMF non-response, the BAFF signaling network was identified as the best candidate signaling route (Fig. 5a). Interestingly, 63 percent of non-responder patients to AZA achieved a favorable estimated response with CCL inhibition, against 40 percent for non-responders to MMF (Fig. 5b). BAFF inhibition yielded a favorable response for 74 and 56 percent of non-responders to MMF and AZA, respectively (Fig. 5b). In both cases, for MMF and AZA non-responders, the response ratio increased by up to 20 percent when the drug-specific non-response mechanisms were inhibited. Thus, refractory patients for each drug could benefit from adding a tailored second therapy.
Discussion
This study revealed the distinct molecular and cellular mechanisms behind non-response to MMF and AZA by retrospectively analyzing a longitudinal cohort of SLE patients who did or did not respond to these drugs.
The course of the disease is complex and unpredictable, alternating periods of inactivity, disease flares and progression to organ damage, with different underlying molecular mechanisms that may differ between patients. This heterogeneity particularly hinders the effective discovery of robust biomarkers both for disease progression and for treatment response 17. Cross-sectional studies of patients with active disease limit the range of scenarios that can be analyzed, reducing reproducibility in other cohorts and/or disease conditions. Therefore, a longitudinal cohort was selected, with samples representing different disease states, with different clinical manifestations, and treated with different routine treatments and doses. Robust non-response gene signatures to MMF and AZA were obtained across all the clinical and molecular heterogeneity of the disease. Maintenance drugs including HC and HC plus GC were analyzed, demonstrating that the MMF and AZA non-response patterns were drug-specific and not influenced by secondary SOC therapies. In addition, the drug signatures were used to build ML-based models to predict drug responses, obtaining high performance (balanced accuracies higher than 0.75 in all cases).
One main limitation of our study is the small number of patients treated with some specific drugs (mainly AZA), which makes the interpretation of the AZA-associated data more difficult. A larger interventional clinical trial would be required to validate the responsiveness and non-responsiveness mechanisms for each drug alone and to test the predictive capacity of the non-response signatures defined here. In lupus, it is particularly difficult to obtain public longitudinal transcriptome data, and more so if a single drug is to be studied. SLE patients take, in most instances, combinations of multiple drugs, and response outcomes are often not shared. Validation could bring us closer to more personalized medicine, supporting more effective first-line therapy choices for LN patients.
Despite this, we obtained revealing and encouraging results. Analyzing cell profiles, we observed a depletion of T cells in non-responder patients, and a worse response ratio was consistently observed for patients poor in various T cell subpopulations. In a previous study, T lymphocyte exhaustion was associated with LN 25, but differences between response and non-response to drugs had never been reported before. Perhaps insufficient or abnormal T cell function could be influencing the lack of response 26. For MMF, the non-response was mainly mediated by PCs, pDCs and ABCs, in line with the fact that the worst response ratios were obtained for patients with rich memory B cell profiles. ABCs are a class-switched, antigen-specific, memory-like B cell population expanded in SLE that contributes to autoimmunity through the production of autoantibodies and cytokines and by regulating inflammatory T cells while acting as APCs 27. Their differentiation is driven by toll-like receptor (TLR) 7 in an interleukin-21-mediated mechanism 28. Recently, expansion of ABCs has been observed in the kidneys of LN patients 29 and in SLE mouse models 30, underscoring the importance of these cells. The question remains as to why ABCs remain high and whether this might be due to resistance of these cells to MMF, mechanisms that would need to be tested experimentally.
The MMF non-response signature was also expressed in NKT cells, which regulate the Th1/Th2 balance 31. In fact, cross-regulation between Tregs and NKT cells has previously been reported. Activated NKT cells modulate Treg function through IL-2-dependent mechanisms, whereas Tregs can suppress proliferation, cytokine release and cytotoxic activity of NKT cells by cell-contact-dependent mechanisms 32.
CD1C + cDC2 and non-classical monocytes also over-expressed the non-response signature to MMF.
HLA class II genes, expressed by APCs and notably by the relevant non-response-related cell subtypes, modulate the interaction of T and B cells in the production of autoantibodies. The genetic association of HLA class II genes with autoantibody production in SLE is well established, and our results suggest that CD1C+ cDC2 may be importantly involved in this context 34. These clusters also seem to play an important role in renal damage control, showing functions related to complement-mediated phagocytosis 22. Complement cascade proteins bind immune-complex deposits in the kidney glomerulus, driving immunopathology that leads to long-lasting scarring 35.
For AZA, the most notable finding is the exacerbated expression of a non-response signature in CD16+ and CD14+ monocytes, with genes involved in migration-related functions. The accumulation of CD16+ monocytes in the blood could reflect either an increase in their differentiation, which would lead to greater numbers of them migrating to the target tissue, or just the opposite, a deficit in the correct migration processes to the tissue 36. Deconvolution of cell types from the bulk transcriptome did not allow identification of CD16+ monocytes in blood, so future analyses would be necessary to validate the increase or lack of migration of these monocytes to the tissue in the AZA therapy context.
Therefore, we revealed different molecular signatures, and the different cellular subtypes associated with them, for non-response to MMF and AZA. In fact, in silico inhibition of targets from the regulatory networks governing the clusters associated with MMF or AZA non-response identified different response ratios for refractory patients for each drug. CCL2 inhibition has previously been proposed to reduce tissue infiltration of monocytes, minimizing inflammatory phenotypes 37, while belimumab, an anti-BAFF drug, is currently approved for SLE and LN. BAFF inhibition leads to a reduction in autoantibody production by depleting the differentiation of PCs from B cells 38. In fact, a growing number of studies show the effectiveness of combining belimumab with other immunosuppressant drugs 3. Here, we presented potential in silico evidence that anti-BAFF therapy could be more beneficial for non-responders to MMF. Detailed analysis is required to test the efficacy of belimumab as an add-on therapy to MMF in real-world settings.
Finally, there is extensive evidence showing the importance of IFN-I in SLE and other autoimmune diseases 39,40. We herein report the co-expression of IFN-related genes and non-response signatures to LN drugs in the same cell subsets. Specifically, at least a handful of genes from the ISG and IFITM families showed high expression scores in subsets expressing the AZA and MMF non-response signatures: both families for AZA, and the IFITM genes particularly for MMF.
The IFITM family of genes encodes 3 anti-viral subfamilies of proteins, one of which is immune-related and includes, in turn, 3 main proteins, IFITM1, IFITM2 and IFITM3 41, all of which evolved through expansion and interaction with viral infections. Despite their protein sequence similarity, IFITM1, 2 and 3 have different cellular localizations and functions, and different anti-viral specificities, through mechanisms that are still poorly understood. While IFITM1 is exposed on the cell surface (formerly the Leu-13 antigen, now CD225), IFITM2 and 3 are localized in endosomes and lysosomes.
Interestingly, IFITM1 and IFITM3 have been found as part of the B cell signaling complex in the plasma membrane together with CD19 and CD21, as well as CD81. Upon B cell activation, IFITM3 protein increases and moves from the endosomes to the lipid rafts containing the B cell signaling complex. Most interestingly, several studies have addressed the role of IFITM3 in B cell activation, with expansion and affinity maturation of germinal center B cells through amplification of the PI3K signaling pathway 41. In B cell malignancies, expression of IFITM3 is associated with poor outcomes 42. In addition, IFITM expression is induced by IFN-I, primarily in monocyte-derived macrophages. Transcription is also induced by various pro-inflammatory cytokines and Toll-like receptor agonists. The IFITM1-3 genes have an IFN response element that confers responsiveness to type I and II IFNs. Thus, IFITM and IFN-I regulate each other. What function these genes, and others identified in non-responders, serve in the context of SLE requires further investigation.
This new knowledge sheds light on the molecular and cellular patterns associated with non-response to LN therapies, opening a new scenario for further investigation of the regulatory mechanisms between the implicated cell subsets, the genes and cells involved, and the development of new therapeutic strategies for LN and drug response prediction.
Study population
Lupus nephritis patients were recruited and followed for over 2 years at the Johns Hopkins University School of Medicine following the SPARE study protocol (Study of biological Pathways, disease Activity and Response markers in patients with systemic lupus Erythematosus) 43. All patients gave written informed consent. Adult patients fulfilling the revised American College of Rheumatology classification criteria 44 and aged 18 to 75 years were considered eligible. Patients were treated according to standard of care (OS and/or HC), and those treated with rituximab or other biologics at any visit were excluded. Doses were adjusted in each case according to the criteria of the treating physician. Starting from a retrospective analysis of 301 patients studied longitudinally with available gene expression data, we selected those who had been treated with MMF, AZA, HC or SOC and who had information for at least two visits since the start of treatment, allowing drug response follow-up. Samples treated with other immunosuppressant drugs in conjunction with MMF or AZA were discarded. These selection criteria led to the definitive identification of 34, 11, 56 and 73 responder patients to MMF, AZA, HC and SOC, comprising 103, 24, 133 and 173 longitudinal samples, respectively, and 10, 9, 14 and 25 non-responder patients to MMF, AZA, HC and SOC, comprising a total of 27, 30, 40 and 64 samples, respectively (Table 1). All selected patients had historical abnormal findings in renal biopsies.
All clinical information was pseudo-anonymized. The medical history of the patients was collected, including demographic information, medications used and autoantibody titers. To assess disease activity, the Safety of Estrogens in Lupus Erythematosus: National Assessment (SELENA) version of the Systemic Lupus Erythematosus Disease Activity Index (SLEDAI) and the Physician Global Assessment (PGA) 45 were completed at each visit. Urinalysis, anti-dsDNA titers and plasma concentrations of complement components 3 (C3) and 4 (C4) were also measured at every visit. Response to drugs was defined using the SRI-4 46, assessed at least 3 months after the first visit with the specific drug, but only patients who maintained the response over time while the drug was being used were considered responders (16.25 months on average). For MMF and AZA, a second response outcome was defined over time according to whether the protein/creatinine ratio in urine was reduced and maintained below 500 mg/g from at least 3 months until the last visit under treatment.
Data preparation
Peripheral blood samples were collected at each visit using the PAXgene blood RNA system, and gene expression profiles were measured using Affymetrix GeneChip HT HG-U133+ arrays. The experimental protocol, from data preparation to gene expression data preprocessing, has been previously reported 43.
Expression values were transformed to logarithmic scale and transcripts were annotated from probes to official gene nomenclature (Gene Symbol). Duplicated genes were merged by assigning their mean expression value, and genes with flat expression profiles were filtered out.
Differential expression and functional analysis
Transcriptome analysis was used to identify the genes and molecular mechanisms behind drug response and non-response to each therapy. First, clinical and demographic confounders were identified using the swamp R package 47. Samples from the same patient, doses of MMF and prednisone, disease activity, race, and sex were the variables that explained the greatest variance in the data, in decreasing order.
DEGs comparing response and non-response were obtained using linear mixed models implemented in the limma R package 48, adjusting expression values for sex, patient, SLEDAI, prednisone dose and MMF or AZA dose. Thus, we obtained genes with significant differential expression between responders and non-responders, independent of the treatments and doses used and of sex, and conserved longitudinally across different visits, disease states and disease activity fluctuations. Genes with a Bonferroni-corrected p-value < 0.05 were considered significant. Data were not adjusted for race because a significant imbalance in the distribution of race between the two groups of patients was observed for some therapies (Table 1).
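A minimal sketch of one common way to fit such covariate-adjusted, repeated-measures models in limma is shown below. It is illustrative only: the object names (expr, pheno, patient_id) and the covariate coding are assumptions rather than the exact code used in this study, and modeling the patient as a blocking factor via duplicateCorrelation is just one reasonable alternative to a fixed patient term.

library(limma)

# expr: log2 expression matrix (genes x samples); pheno: per-sample annotations (assumed names)
design <- model.matrix(~ response + sex + sledai + prednisone_dose + drug_dose, data = pheno)

# Estimate within-patient correlation and use patient as a blocking factor
corfit <- duplicateCorrelation(expr, design, block = pheno$patient_id)
fit <- lmFit(expr, design, block = pheno$patient_id,
             correlation = corfit$consensus.correlation)
fit <- eBayes(fit)

# Responder vs non-responder coefficient, Bonferroni-corrected as described above
deg <- topTable(fit, coef = "responseNonResponder",
                adjust.method = "bonferroni", p.value = 0.05, number = Inf)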
The functional role of the DEGs was investigated using the qusage R package 49 with a set of blood immune-related gene modules previously described 50,51.
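As a hedged illustration of this module-level analysis, a typical qusage call on a genes-by-samples expression matrix might look as follows; the module list (blood_modules), group labels and contrast string are placeholders, not the exact inputs used here.

library(qusage)

labels <- ifelse(pheno$response == "NonResponder", "NR", "R")
# blood_modules: assumed named list of gene symbols, one vector per blood immune module
qs <- qusage(expr, labels, contrast = "NR-R", geneSets = blood_modules)
qsTable(qs, number = 20)  # top modules with activity estimates and p-values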
Machine learning-based predictive models
Differential gene expression signatures from longitudinally sampled SLE patients were used as features to build ML-based models to predict responses to MMF, AZA, HC and SOC independently. In detail, nested k-fold cross-validation was implemented 52 (Supplementary Fig. 4a). First, the entire dataset was divided into 5 class-balanced folds, selecting
Cell profiling
Blood cell subtype proportions were deconvoluted from gene expression data using CIBERSORTx 54. A reference panel with markers for 22 different cell types was downloaded from the CIBERSORT website.
Macrophage and mast cell proportions were discarded, as these are not blood-circulating populations.
Following deconvolution, patients were labeled as rich/poor for each individual cell type based on the median value of the cell type across all patients (rich or poor if the cell proportion is higher or lower than the median proportion, respectively) 55 .
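A minimal sketch of this median-split labeling, assuming a samples-by-cell-type matrix of deconvoluted proportions named cell_props (an illustrative name), is:

label_rich_poor <- function(cell_props) {
  # one rich/poor label per sample and cell type, split at the per-cell-type median
  apply(cell_props, 2, function(p) ifelse(p > median(p, na.rm = TRUE), "rich", "poor"))
}
cell_labels <- label_rich_poor(cell_props)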
Single-cell analysis
Raw single-cell RNA-seq data from peripheral blood mononuclear cells of 41 SLE patients were downloaded from The National Center for Biotechnology Information Gene Expression Omnibus (NCBI GEO) database 56 (ID: GSE135779) 39. All analyses were carried out in R, mainly using the Seurat package 21. Cells with a percentage of mitochondrial counts > 25%, a percentage of ribosomal counts > 25%, a number of unique features or total counts outside the 0.5-99.5% range of all cells, a number of unique features < 200, or a Gini or Simpson diversity index < 0.8 were discarded. In addition, mitochondrial and ribosomal genes and genes expressed in fewer than 5 cells were removed. Doublets were also removed using the scDblFinder R package 57. Total counts per cell were normalized and fixed to 1000, and gene counts were log transformed. Feature values were standardized by mean centering and standard deviation scaling, and then values per cell were adjusted by correcting for cell cycle scores and mitochondrial counts.
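The main QC and normalization steps described above can be sketched with standard Seurat calls as follows; the thresholds follow the text, the object names are illustrative, and the percentile- and diversity-index-based filters are omitted for brevity.

library(Seurat)

seu <- CreateSeuratObject(counts, min.cells = 5, min.features = 200)
seu[["percent.mt"]]   <- PercentageFeatureSet(seu, pattern = "^MT-")
seu[["percent.ribo"]] <- PercentageFeatureSet(seu, pattern = "^RP[SL]")

# Discard cells with high mitochondrial or ribosomal content
seu <- subset(seu, subset = percent.mt < 25 & percent.ribo < 25)

# Library-size normalization fixed to 1000 counts per cell, then log transform
seu <- NormalizeData(seu, normalization.method = "LogNormalize", scale.factor = 1000)

# Standardize features and regress out cell-cycle scores and mitochondrial content
seu <- CellCycleScoring(seu, s.features = cc.genes$s.genes, g2m.features = cc.genes$g2m.genes)
seu <- ScaleData(seu, vars.to.regress = c("S.Score", "G2M.Score", "percent.mt"))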
Finally, data integration across cells was performed using Harmony 58. The Louvain algorithm and Uniform Manifold Approximation and Projection (UMAP) 59 were used to cluster the cells and to visualize the clusters. Cluster stabilities were assessed using the clustree R package 60.
Cells were annotated with major blood cell type labels by correlation with cell markers previously defined by Nehar-Belaid and colleagues 39. To identify specific cell subtypes or subclusters within each major cell type, the entire clustering process was repeated from the start, excluding cells not cataloged as that particular major cell type. In this way, adequate resolution is reached to cluster minor cell types. Gene markers for each subcluster were obtained by comparing each subcluster with the rest of the clusters within the same cell type using the FindMarkers function from Seurat. Cell tagging was performed using published cell-marker annotations [61][62][63]. The average expression levels of gene signatures (expression scores) for each cluster were calculated using the AddModuleScore function from the Seurat R package, with the aggregated expression of randomly selected control gene sets subtracted, to identify the specific cell clusters in which a given gene signature was particularly represented or more highly expressed.
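An illustrative use of AddModuleScore to score a non-response signature across subclusters is shown below; mmf_up_degs is a placeholder for the signature gene list, and the metadata column name receives a "1" suffix appended by Seurat.

seu <- AddModuleScore(seu, features = list(mmf_up_degs), name = "MMF_nonresponse")
# Mean score per subcluster, used to flag clusters in which the signature is enriched
tapply(seu$MMF_nonresponse1, Idents(seu), mean)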
Statistical analysis
The Wilcoxon-Mann-Whitney and Fisher's exact tests were used to identify significant associations between response/non-response and continuous or categorical clinical variables, respectively.
Demographic variables and medical history were analyzed at the patient level, and variables that change over time, such as the SLEDAI or serum complement levels, were analyzed by sample, considering all visits.
Regarding cells, the Wilcoxon-Mann-Whitney test was also used to assess significance when comparing cell proportions between responder and non-responder patients. Significant differences in response rates (percentage of responder samples out of total samples) between two groups of patients were assessed by Fisher's exact test.
GSEA was used to compare the similarity between the DEG lists obtained for each drug 20. A similarity score was obtained for each pair of drugs according to whether the DEGs for one drug were randomly distributed, concentrated at the top (positive score) or concentrated at the bottom (negative score) of the gene list of the second drug, ranked by fold change between responder and non-responder samples.
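This cross-drug scoring can be sketched with fgsea as one possible GSEA implementation (the authors' exact tool may differ); the DEG vectors and the ranked fold-change vector are assumed inputs.

library(fgsea)

# ranks_aza: named vector of log fold changes (responder vs non-responder) for AZA
res <- fgsea(pathways = list(MMF_up = mmf_up_degs, MMF_down = mmf_down_degs),
             stats = ranks_aza)
res[, c("pathway", "ES", "NES", "padj")]  # ES/NES act as the similarity scores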
Inference of druggability through target inhibition
Intercellular communication networks were inferred from the single-cell data using CellChat 23, revealing the major signaling inputs and outputs between the previously defined cell clusters. We then focused on signaling networks that specifically regulated the non-response-related subclusters as potentially druggable networks. Targets for each druggable network were extracted from the CellChat internal database (the list of genes for each signaling network). We used the Hipathia R package 24 to estimate the effect of target inhibition on gene expression in patients from our cohort, following the instructions provided by the authors. A response score for the inhibition of targets from each druggable network was calculated for each patient as the absolute change in gene expression before and after the target inhibition. The expression of the targets was multiplied by 0.1 to simulate inhibition (http://hipathia.babelomics.org), and expression changes across the whole transcriptome were imputed using mechanistic models based on biological knowledge. An anticipated favorable response to inhibition of a specific druggable network in a patient was defined as a response score equal to or greater than the mean response score of all patients. The percentage of patients with a favorable response score was calculated out of the total number of non-responder patients to MMF and AZA independently.
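A conceptual sketch of this scoring, using hipathia-style calls, is given below; the function names follow the package vignette, but the objects (norm_vals, baff_network_genes) and the exact placement of the 0.1 scaling are assumptions rather than the authors' pipeline.

library(hipathia)

pathways <- load_pathways(species = "hsa")

score_inhibition <- function(norm_vals, targets, pathways) {
  # norm_vals: genes x patients matrix, already translated to Entrez IDs and scaled to [0, 1]
  base <- get_paths_data(hipathia(norm_vals, pathways), matrix = TRUE)

  inhibited <- norm_vals
  idx <- rownames(inhibited) %in% targets
  inhibited[idx, ] <- inhibited[idx, ] * 0.1        # simulate inhibition of network targets

  after <- get_paths_data(hipathia(inhibited, pathways), matrix = TRUE)
  colSums(abs(after - base))                        # per-patient response score
}

scores <- score_inhibition(norm_vals, targets = baff_network_genes, pathways)
favorable <- scores >= mean(scores)  # anticipated favorable response, as defined above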
Declarations
The study protocol was approved by the Johns Hopkins University School of Medicine Institutional Review Board. SLE patients were enrolled from the Hopkins Lupus Cohort following informed consent.
Adult patients were eligible if they were aged 18 to 75 years and met the definition of SLE as defined by
Figures
Figure 1. Gene signatures behind response and non-response to LN therapies. a, Therapeutic scheme followed for the patients. b, Barplots show the number of DEGs for each drug (set size) and the number of shared genes between drugs (intersection size). c, GSEA scores obtained comparing up- and down-expressed gene sets for each drug (columns) with the full lists of genes ranked by fold change for the rest of the drugs (rows). ES: enrichment score; NES: normalized enrichment score. d, e, f, g, Volcano plot distribution of p-values and fold changes for genes comparing responder and non-responder samples for MMF (d), AZA (e), HC (f) and SOC (g), respectively.
Figure 2
Figure 3
Figure 4
Table 1
Characteristics of the patients included in the study. Data are presented as the number of patients or samples (and percentage) for categorical variables or as means (± standard deviation) for numerical variables. P-values were calculated using the Wilcoxon-Mann-Whitney test and Fisher's exact test for quantitative and categorical measurements, respectively. P-values <= 0.05 were assessed as significant and marked with asterisks based on significance magnitude (* = <0.05; ** = <0.005; *** = <0.0005; **** means p-value lower than 0.00005). Treatments used, SLEDAI, C3 and C4 levels and antibody titers were analyzed by sample considering all visits, while demographic information and autoantibody positivity (+) 53
and 20% of the samples as training and test sets. Samples from the same patient were forcibly assigned to the same group (train or test). Hyperparameters for models were tuned by inner 10-fold cross-validation for each training set, repeated 5 times with internal random initialization, where 90 and 10% of the samples were assigned to internal train and test sets. A total of 11 different classification algorithms were tested, including gaussian linear model, linear discriminant analysis, extreme gradient boosting, random forest, k-nearest neighbors, linear and radial support vector machine, neural networks, naive Bayes, boosted classification trees and boosted generalized additive model, covering the main ML approaches 53. Model performances were calculated in each separate outer test fold and algorithm prioritization was based on the average of MCC values obtained across outer folds to give an unbiased measurement of model accuracy. R code used to build ML-based models is available at https://github.com/jordimartorell/pathMED. | 2024-01-24T05:07:23.313Z | 2024-01-12T00:00:00.000 | {
"year": 2024,
"sha1": "09adf3a5e82abb32b65c64e2c7c16f32c7a27a09",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-3783877/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "09adf3a5e82abb32b65c64e2c7c16f32c7a27a09",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16183340 | pes2o/s2orc | v3-fos-license | WebCSD: the online portal to the Cambridge Structural Database
The new web-based application WebCSD is introduced, which provides a range of facilities for searching the Cambridge Structural Database within a standard web browser. Search options within WebCSD include two-dimensional substructure, molecular similarity, text/numeric and reduced cell searching.
Introduction
WebCSD is a novel web-based application developed by the Cambridge Crystallographic Data Centre (CCDC). The software provides access to the information stored within the Cambridge Structural Database (CSD; Allen, 2002) using only a standard internet browser. WebCSD offers tools for searching, browsing and viewing crystal structures without the need to install any local software. WebCSD will allow simpler and more efficient dissemination of the CSD's collection of small-molecule crystal structures, which is comprehensive for the published literature and contains many otherwise unpublished structures. It also provides access to the latest information through weekly updates to the public WebCSD servers. WebCSD provides an intuitive interface to fast, straightforward searches of the database, rather than attempting to replicate the extensive functionality of the (locally installed) CSD system software for structural analysis. This new application also contains some additional capabilities not accessible through the installed software, including similarity searching.
Overview
The software for searching crystal structure knowledge provided by the CCDC, such as ConQuest (Bruno et al., 2002) and Mercury (Macrae et al., 2008), has been focused on providing sophisticated and flexible tools for crystallographers, structural chemists and the drug design community. As the use of crystal structure information has broadened, more users are extra- or multi-disciplinary. This means that the demand for a more accessible and collaborative CSD user environment has increased. The Web provides the ideal medium for large companies and academic departments where communication and collaboration as well as software distribution can be challenging (Williams, 2008).
Areas where WebCSD is designed to be useful are in the medicinal chemistry and pharmaceutical arenas. Providing easy-to-use web-based tools for searching both in-house and CSD structures gives the chemist almost instant access to a wealth of valuable structural and conformational information (Taylor, 2002) without the need for locally installed software or lengthy start-up times sometimes associated with more complex tools. WebCSD is available via the CCDC's public server and also as an intranet version that supports the use of in-house databases.
Another particularly important application of web software is in the area of chemical education. Knowledge of the three-dimensional nature of chemical compounds is fundamental to the education of every chemist (Bodner & Guay, 1997). Without this knowledge, concepts such as conformation, stereochemistry, chirality and the shapes of metal coordination environments cannot be properly understood. The CSD is therefore an essential resource for chemistry teachers. Not only does the CSD allow students to visualize and examine molecules in three dimensions, but it also provides an opportunity to work with real measured data, complete with experimental errors and statistical variations. Raw data mined from the CSD challenges students to think critically about the fundamental topics of bonding and molecular structure and also encourages them to consider the limitations and advantages of experimental structures. The ease of access offered via WebCSD should make it ideal for teachers and students to use the CSD in the classroom. In addition a 500-structure teaching subset of the database, containing a diverse set of molecules as well as a range of illustrative teaching exercises, is freely available online (Battle et al., 2010).
Replacing the underlying database
Since the late 1980s, data within the CSD have been stored in the ASER format that was developed for use with the QUEST search program (Allen et al., 1991), the precursor to ConQuest. The ASER format was designed to store bibliographic, connectivity and three-dimensional coordinate data together in a single record that was machine readable and efficient in terms of rapid access and low storage space. However, the structure of the ASER records has limited extensibility and it has become desirable to include additional data to address the needs of new areas of research.
WebCSD therefore uses the embedded relational database management system SQLite (http://www.sqlite.org). The SQLite system allows the addition of extra relational data tables, in which information can be sorted, that are linked to the CSD entries. These indexed data tables mean that searches can be sped up significantly by using a binary search algorithm which makes progressively better guesses to narrow down the search. The new system also allows easy and flexible storage of molecular fingerprints and database bit screens (Allen et al., 1991) which are used in both substructure and similarity searching.
This paper describes WebCSD -the new web interface developed for searching and browsing the CSD -but the re-design of the underlying database system also allows access to the data via other web services. This will allow easy integration of the CSD with other online databases.
Multi-threading
WebCSD's server-based software was designed from the outset to utilize multi-threading -this means that each individual search process and database query operates independently in a parallel computational process. The WebCSD server can therefore obtain maximum benefit from the computational resources available, which allows it to run faster on multi-core or multiple CPU systems. Another major advantage of this system, even using a single CPU, is that it allows the application to remain responsive to input whilst simultaneously executing multiple tasks. This means that users accessing WebCSD servers can run multiple, complex searches whilst concurrently browsing the results of an earlier completed search.
WebCSD search functionality
4.1. Server implementation
The search software that runs on the WebCSD servers is written in C++, using functionality provided by the CCDC's C++ Toolkit (Bruno et al., 2002). This same software is central to the CCDC's Mercury (Macrae et al., 2008), Mogul (Bruno et al., 2004), IsoStar (Bruno et al., 1997) and enCIFer (Allen et al., 2004) applications.
Substructure searching
Substructure searching in WebCSD, and also in ConQuest, is achieved by decoding the search query into chemically meaningful information (e.g. contains Cl, or has a C N bond), screening out structures that cannot possibly match (i.e. structures that do not contain the required component), and then comparing the connectivities (or molecular graphs) of the query and the remaining structures to determine matches.
Substructure searching (i.e. subgraph isomorphism) in the Toolkit was originally performed (Chisholm & Motherwell, 2004) with an in-house implementation of the Ullmann (1976) algorithm for depth-first searching. A re-implementation of the substructure searching code using a breadth-first backtracking approach has improved the performance when searching for particularly complex structures. This new implementation also stores less data at each stage of the search, giving a further improvement in performance. Additional optimizations were required for searches of large macrocyclic compounds, for example detecting whether the query contains a ring assembly larger than any that occur in the structures being searched: if so, there cannot be a match. Many new screens have also been introduced, which improve search speed noticeably by reducing the number of structures needing to be extracted from the database and searched.
The Toolkit's substructure searching implementation was designed to be very flexible, making it easy to add new types of constraint. This has allowed WebCSD to offer some novel search options within the query sketcher, for example in dealing with cyclicity. In WebCSD it is possible to apply constraints with respect to the size of the smallest ring an atom or bond is involved in, such as the maximum, the minimum or a custom-defined range.
The WebCSD user interface currently allows two-dimensional searching (no intermolecular interactions or other three-dimensional constraints) and the software can identify matches very quickly as a result of the new algorithm. Three-dimensional searching will be added in a future version of the software.
Similarity searching
Alongside the substructure search tool is a complementary structure-based search option which determines the similarity of molecular components in the CSD to a defined query molecule. There are two main aspects to the calculation of similarity between molecules in two dimensions: first the method by which the molecules are represented (commonly as 'fingerprints' or binary strings) and secondly the way in which the similarity between these representations is quantified (the similarity coefficient). The effectiveness of any similarity searching tool for a particular problem will be dependent on both of these aspects (Willett, 1987;Johnson & Maggiora, 1990).
The similarity calculation in WebCSD uses molecular fingerprints that are determined using the chemical features of the molecules, including atom types, bond types and bonded paths through the molecule. Similarity fingerprints in the CSD are similar to those used in Relibase (Hendlich et al., 2003).
Figure 1: Flow chart explaining the algorithms that create the molecular fingerprints for similarity searching using bonded paths of up to 10 atoms.
For each unique molecule (connectivity) in a crystal structure a molecular fingerprint of 2040 bits is generated. The molecular fingerprint is created using all atom and bond paths of up to ten atoms in a molecule. The algorithms used are summarized in Fig. 1. These algorithms are applied to a given molecule to set bits in the fingerprint. The approach is similar to others used in common chemical search systems [for example, Daylight fingerprints (James & Weininger, 2008)]. The fingerprints of all the unique connectivities in the CSD are pre-calculated and stored in a relational database so that searching the information is extremely quick.
The quantification of similarity between molecular fingerprints is performed using standard similarity measures. Many articles discuss and compare coefficients for database screening (Whittle et al., 2003;Haranczyk & Holliday, 2008). Currently, the Tanimoto (1957) and Dice (1945) coefficients are presented to users, both of which produce coefficient values between zero and one (a value of zero indicating no similarity and a value of one indicating identical fingerprint representations). These coefficients have been found to be of the most use for the CSD problem domain, but other measures could be added in the future, such as the Ochiai/cosine similarity (Ochiai, 1957) or Hamming (1950) distance.
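For reference, the two coefficients on binary fingerprints of equal length reduce to Tanimoto = c/(a + b - c) and Dice = 2c/(a + b), where a and b are the numbers of bits set in each fingerprint and c is the number of bits set in both. A small, self-contained R illustration (unrelated to the actual CCDC implementation) is:

tanimoto <- function(x, y) sum(x & y) / sum(x | y)
dice     <- function(x, y) 2 * sum(x & y) / (sum(x) + sum(y))

fp1 <- c(TRUE, TRUE, FALSE, TRUE)   # toy 4-bit fingerprints; real ones have 2040 bits
fp2 <- c(TRUE, FALSE, FALSE, TRUE)
tanimoto(fp1, fp2)  # 2/3
dice(fp1, fp2)      # 0.8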
The utility of the similarity search tool can be illustrated by taking the top ten selling small-molecule drugs [based on 2006 sales figures in USD (Humphreys, 2007)] and running a similarity search for each of these. For these ten molecular structures (see supplemental material 1 ) a similarity search was performed and the top ten chemically distinct matches, based on the Tanimoto coefficient, were recorded. To determine how relevant the results of these searches were, we can use the 'bioactivity' field in the CSD records as a marker for activity (albeit an imperfect one). On average, three out of the ten similarity search results (a total of 30) were listed in the CSD as having bioactivity. The 30 'hits' were manually inspected and 21 of these 30 'hits' had activities obviously related to that of the drug (e.g. the similarity search based on Protonix, or pantoprazole, found two other drugs with known anti-ulcerative activity; Fig. 2).
As with all similarity fingerprints, the fingerprints used for CSD similarity searching have certain strengths and weaknesses. The example above shows that for typical feature-rich drug-like molecules, fingerprints can show reasonable retrieval of related compounds. Because of the nature of the fingerprints, the similarity search will tend to find matches that contain closely related scaffolds. There are, however, a number of caveats with the fingerprints and similarity calculations as currently implemented. First, molecules that contain fewer atoms are less well defined, and as such are more prone to low similarity. Secondly, the fingerprints do not account for cyclicity; for example, hexane and cyclohexane are indistinguishable. Thirdly, the fingerprints are element based. Consequently, atoms with related properties, such as halogens, are treated as distinct from one another. This can cause a similarity search to miss what would appear to be chemically reasonable hits. Different transition metal elements are treated as distinct even if they occupy chemically similar environments. For example, consider a search for the molecule shown in Fig. 3(b). One would expect this complex to be highly related to the equivalent Co 2+ and Cu 2+ complexes (Figs. 3a and 3c; Knuuttila, 1982), as they have chemically identical scaffolds, but at the moment these would not be listed with high similarity coefficients. In future versions of WebCSD we will address such issues by providing alternative generalized fingerprints to address such cases.
Text/numeric searching
One of the benefits of using SQLite is that the text fields in the database can be fragmented (or 'tokenized') and stored in a relational database format using the built-in module 'FTS3' for full text search indexing. This tokenization and indexing using 'Google-like' technology (Hipp, 2006) means that text searches are extremely fast, as the search engine does not need to read all the text, but simply look up the matching entries in the relevant table. Standard bibliographic information is easily searchable in WebCSD including author, journal, publication year, journal volume and page number. The real power of the full text search indexing is more apparent though when using the 'all text' search -searches for an exact string such as 'antibacterial' (786 hits) or 'agonist' (215 hits) take less than a second to search through nearly half a million entries in the CSD.
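As a toy illustration of this kind of indexed full-text query (the table and column names are invented and unrelated to the actual CSD schema), an FTS3 virtual table can be created and queried from R via RSQLite, assuming an SQLite build with the FTS3 module enabled:

library(DBI)
library(RSQLite)

con <- dbConnect(SQLite(), ":memory:")
dbExecute(con, "CREATE VIRTUAL TABLE entry_text USING fts3(refcode, all_text)")
dbExecute(con, "INSERT INTO entry_text VALUES ('AAAAAA', 'antibacterial agent, needle habit')")
dbExecute(con, "INSERT INTO entry_text VALUES ('BBBBBB', 'dopamine receptor agonist')")

# The MATCH operator uses the full-text index rather than scanning all the text
dbGetQuery(con, "SELECT refcode FROM entry_text WHERE all_text MATCH 'antibacterial'")
dbDisconnect(con)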
A number of new specific fields are also available for flexible and/ or combined searching, such as bioactivity, habit, phase transitions and polymorphism. The interface is designed such that any of the text/numeric search types can be combined into composite queries, so it is simple to design a search, for example, where the habit field contains the string 'needle' and the phase transitions field has any defined value. Further options in the text/numeric interface include a range of advanced date search fields in addition to the year of publication. Users can therefore search based on when structures were added to the CSD or when they were last modified; this means it is simple to re-run an old search, e.g. for all bioactive compounds with a plate habit, but restricted to only the entries added since the user last accessed WebCSD.
Reduced cell searching
The use of the Niggli reduced cell (generally referred to as 'the reduced cell') can be problematic as a result of mathematical instabilities. Whilst the reduced cell can be uniquely defined for any specific lattice, the angles of the reduced cell can vary a great deal with only small changes in the lattice parameters (Andrews et al., 1980). This problem was avoided in ConQuest by using only the reduced cell lengths for searches. The reduced unit cell search algorithm used in WebCSD is a new implementation which uses a more advanced methodology involving 'nearly Buerger reduced cells' (Andrews & Bernstein, 1988). Essentially the reduced cell and a set of closely related cells are determined and the database is searched for any matches to this set. The result is that the search tool in WebCSD takes full account of the reduced cell angles and thus produces fewer false positive hits when searching the database. This facility is ideal for experimental crystallographers. The unit cell of a sample is usually determined through analysis of a small data set prior to starting a full experiment. This unit cell can then be compared with entries in the CSD using the reduced cell search in WebCSD. This should ensure that organic and metal-organic crystal structures are not redetermined unknowingly.
Embedded three-dimensional viewing
The crystal structure information accessed either through searches or simply by browsing the CSD is inherently focused on three-dimensional information, i.e. the crystal structure coordinates. This means that it is very important to provide the ability within the results browser for users to access basic three-dimensional molecular and crystal-packing visualization functionality. WebCSD allows the use of either of two different three-dimensional viewers as embedded Java applets in the interface: Jmol (2009) or OpenAstexViewer (2009). These molecular viewers both provide a range of atom/bond style options as well as atom labelling and tools to measure distances, angles and torsion angles. Jmol also supports some crystallographic options such as the display of a full unit cell or a packing range of 3 × 3 × 3 unit cells. Fig. 4 shows an example of a 3 × 3 × 3 unit cell packing range for a nanoporous dipeptide crystal structure (CSD refcode XUDVOH; Görbitz, 2002) where channels can be seen running along the crystallographic c axis.
The embedded viewers allow WebCSD to be used without the need for additional client-side applications. Users can, however, still choose to export crystal structures from WebCSD into Mercury (either one structure as a CIF or many as a list of refcodes) for more advanced structure viewing and analysis tools. In this way WebCSD can act as a springboard for more complex studies, allowing very fast searches with links to CCDC applications, or exporting of files for other programs, to allow further investigation of the results.
Figure 4: Image of the WebCSD results browser showing a 3 × 3 × 3 unit cell packing range for CSD refcode XUDVOH (l-alanyl-l-valine) in the Jmol embedded viewer. Channels through the crystal structure formed in the middle of hydrogen-bonded helices can be observed down the crystallographic c axis.
Figure 5: Hyperlinking within the results browser for CSD entry ATDZDX. The red arrow indicates the 'thiadiazine' section of the compound name which could be used as the basis of a further text/numeric search.
Hyperlinking of textual results
The results browser in WebCSD provides, alongside the three-dimensional viewer, a two-dimensional molecular diagram and a display of the textual and numeric information. Within the 'Details' tab of the results browser, bibliographic as well as chemical, crystallographic and experimental information relating to the displayed structure is given. A range of the categories within the 'Details' tab are parsed by WebCSD to identify useful pieces of text from which to hyperlink. For example, all authors listed for an entry are hyperlinked, such that clicking on their name will launch an author name search. Similarly, compound name and synonym sections are also hyperlinked so that when looking at the structure of 5-amino-2H-1,2,6-thiadiazine-1,1-dioxide (CSD refcode ATDZDX; Albrecht et al., 1979), for example (Fig. 5), it is possible to simply click on 'thiadiazine' and, in less than a second, find all 154 structures in the CSD with the string 'thiadiazine' in their compound name. This facility makes browsing between entries with similar chemistries particularly efficient. When present, the DOI (digital object identifier) is hyperlinked, allowing access to the original publication of the crystal structure.
Documentation, availability and environment
WebCSD includes a set of FAQs and pop-up help messages within the interface. Access to a 500-structure subset of the CSD using WebCSD is freely available from the CCDC website (http://www.ccdc.cam.ac.uk/free_services/teaching/) for demonstration or teaching purposes. WebCSD is currently fully supported on the following web browsers: Internet Explorer (version 7.0 and later), Mozilla Firefox (version 2.0 and later) and Safari (version 4.0 and later). The software has also been tested and shown to work on a number of other common browsers including Google Chrome. WebCSD access is provided via a public server hosted at the CCDC to CSD system subscribers holding unlimited seat licences. For more information about access, contact admin@ccdc.cam.ac.uk.
We thank all of the developers at the CCDC who have contributed to the WebCSD project, in particular Greg Shields, Lucy Purkis and Matt Towler for their help and technical advice. We would also like to acknowledge all the members of the technical and scientific support teams at the CCDC who have worked on version 1.0 of WebCSD, especially Gary Battle, as well as the editors and deposition coordinators who helped to test the program. Bob Hanson and the rest of the Jmol team are thanked for their help in modifications and integration of the Jmol viewer into the WebCSD interface. The CCDC is a not-for-profit, charitable institution dedicated to the maintenance and distribution of the CSD. The financial contribution of its subscribers to the work is gratefully acknowledged. | 2014-10-01T00:00:00.000Z | 2010-02-12T00:00:00.000 | {
"year": 2010,
"sha1": "54e9062d4bc6dc0d78ec193ee04fd8c4e0995117",
"oa_license": "CCBY",
"oa_url": "http://journals.iucr.org/j/issues/2010/02/00/kk5057/kk5057.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "54e9062d4bc6dc0d78ec193ee04fd8c4e0995117",
"s2fieldsofstudy": [
"Chemistry",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
216615152 | pes2o/s2orc | v3-fos-license | Smartphone Self-Monitoring by Young Adolescents and Parents to Assess and Improve Family Functioning: Qualitative Feasibility Study
Background The natural integration of mobile phones into the daily routines of families provides novel opportunities to study and support family functioning and the quality of interactions between family members in real time. Objective This study aimed to examine user experiences of feasibility, acceptability, and reactivity (ie, changes in awareness and behaviors) of using a smartphone app for self-monitoring of family functioning with 36 participants across 15 family dyads and triads of young adolescents aged 10 to 14 years and their parents. Methods Participants were recruited from 2 family wellness centers in a middle-to-upper income shopping area and a low-income school site. Participants were instructed and prompted by alarms to complete ecological momentary assessments (EMAs) by using a smartphone app over 2 weeks 4 times daily (upon waking in the morning, afternoon, early evening, and end of day at bedtime). The domains assessed included parental monitoring and positive parenting, parent involvement and discipline, parent-child conflict and resolution, positive interactions and support, positive and negative affect, sleep, stress, family meals, and general child and family functioning. Qualitative interviews assessed user experiences generally and with prompts for positive and negative feedback. Results The participants were primarily white and Latino of mixed-income- and education levels. Children were aged 10 to 14 years, and parents had a mean age of 45 years (range 37-50). EMA response rates were high (95% to over 100%), likely because of cash incentives for EMA completion, engaging content per user feedback, and motivated sample from recruitment sites focused on social-emotional programs for family wellness. Some participants responded for up to 19 days, consistent with some user experience interview feedback of desires to continue participation for up to 3 or 4 weeks. Over 80% (25/31) of participants reported increased awareness of their families’ daily routines and functioning of their families. Most also reported positive behavior changes in the following domains: decision making, parental monitoring, quantity and quality of time together, communication, self-regulation of stress and conflict, discipline, and sleep. Conclusions The results of this study support the feasibility and acceptability of using smartphone EMA by young adolescents and parents for assessing and self-monitoring family daily routines and interactions. The findings also suggest that smartphone self-monitoring may be a useful tool to support improvement in family functioning through functions of reflection on antecedents and consequences of situations, prompting positive and negative alternatives, seeding goals, and reinforcement by self-tracking for self-correction and self-rewards. Future studies should include larger samples with more diverse and higher-risk populations, longer study durations, the inclusion of passive phone sensors and peripheral biometric devices, and integration with counseling and parenting interventions and programs.
Background
Research demonstrates that family processes in daily routines and settings have significant impacts on children's development and well-being [1][2][3][4]. The feelings, actions, and interpersonal interactions of individuals are structured by daily routines that influence the household and family. Thus, family routines provide a bridge between individual and systemic levels of the multilevel family system [3,4]. Key factors in daily family routines include parent-child communication and family interactions. Lack of parent-child communication has been associated with low life satisfaction for adolescents [5,6]. In contrast, parent-child conflict and perceived lack of support have been associated with negative psychological, social, and health risks for children (ie, depression) [7,8]. Conversely, positive family interactions have been linked to decreases in internalizing emotional distress [9,10]. Emotional states such as affect, conflict, and stress can also be transmitted between parents and their children [9,11,12]. Family stress can also negatively impact peer relationships of adolescents and school domains [13]. Fostering positive interactions, communication, support, and conflict resolution within families may better protect families from maladaptive outcomes such as depression, behavioral and school problems, lower self-esteem, and poor social skills [8].
Engaging families in therapeutic activities addressing family processes in real time during daily routines is a persistent challenge in interventions and research [14]. The broad proliferation of mobile phones creates novel opportunities for interventions and research modalities that are integrated into daily routines and are widely scalable. Self-monitoring is one strategy that can be easily implemented via smartphones. Early research on self-monitoring recognized reactivity to self-assessments as a means to support self-regulation and behavior change through feedback and goal-setting processes [15][16][17][18]. One form of self-monitoring is daily diaries and ecological momentary assessment (EMA). EMAs are repeated self-reports conducted multiple times throughout a day to assess behaviors, attitudes, states, and experiences in real time, in the natural environments of subjects [19]. EMA has greater ecological validity, fewer recall biases compared with observational or global questionnaire methods, and the capacity to elucidate within-and between-person processes and temporal dynamics [19]. For example, utilizing EMA in family interventions can allow researchers to examine the relationships between intrapersonal processes (ie, mood), interpersonal processes (ie, supportive or hostile exchanges), and broad family-level contexts (ie, family conflict, cohesion) that may address more complex and nuanced questions about sequential processes that influence behavior and affect in the daily lives of individuals [20].
EMA and diary methods have been used to study family experience in daily routines across multiple domains such as parent-child interactions [8], family relationships [21], family conflict [13], and stress [10,22]. Notably, the intensive nature of daily diaries and EMA may result in reactivity (ie, changes in awareness and behavior, particularly in populations motivated to change [19,23]). This is a methodological nuisance of basic behavioral research but presents a potential opportunity for self-monitoring as an ecological momentary intervention [23]. The little research done previously on reactivity has favored minimizing reactivity and its related effects [19], including in family research [24]. In general, EMA and diary research does not address reactivity routinely or robustly [23]. Most important to family research, EMA allows for real-time collection of data from multiple informants (ie, multiple family members) who often share the same natural environments, while also experiencing similar events (ie, family meals, arguments) [20]. Using EMA as an assessment tool in families allows different perceptions of the same experiences and the ability to identify discrepancies in perception.
Objectives
This paper examines the user experiences of families on feasibility, acceptability, and perceived benefits of self-monitoring and reactivity to smartphone EMA and daily diaries for assessment, self-monitoring, and as a potential tool for intervention to seed and support behavior change.
Sample and Recruitment
This study enrolled 36 participants across 15 families consisting of 15 children in 9 family dyads (all mother and child) and 6 family triads (mother, father, and child). The child participants included 6 boys and 9 girls aged between 10 and 14 years. Participants were recruited through a family wellness center's e-mail newsletter and website. The family wellness centers, funded by the Robert Wood Johnson Foundation, provided social and emotional learning and physical activities in a
metropolitan US community at a shopping area marketplace-based site (average income of US $67,000), and at a middle school located in a low-income neighborhood (average income of US $27,000) serving primarily Central American and Korean immigrant populations. Over one-third (n=6) of participant families came from the low-income site, 5 of which were Latino. Only 1 family from the middle-income site was Latino, whereas the rest were white, Asian, or African American. Prospective participants were informed that this was a pilot study to develop and test a smartphone app designed to enrich our understanding of ways to improve daily family routines and well-being.
Families who called the study contact were screened for the following eligibility criteria: parent and child coresided for some portion of the 2-week study period, at least one parent agreed to participate, the child was aged 10 to 14 years and gave assent to participate, and participants were fluent in English. Families with multiple children had the option to enroll again to participate with another child in the family (2 families exercised this option). Participants signed informed consent forms according to the university's institutional review board-approved protocols.
Procedures
After consent at an in-person meeting, participants were issued a smartphone (Samsung Galaxy S) for the study on which they completed EMA and daily diary surveys 4 times per day for 2 weeks. The study coordinator gave participants a brief training on how to use the smartphone and a step-by-step instructional manual on how to use the smartphone app platform to complete EMA surveys. All participants were given the study coordinator's phone number in case they had questions or experienced any difficulties while using the smartphone. At the end of the EMA period, qualitative interviews lasting approximately 40 min assessed the user experiences of parents and children, reactions to using the smartphone app, the obtrusiveness of the monitoring, any technical problems they encountered, relevance and usefulness of the EMA and diary questions, and perceived effects of study participation on them and their family. The semistructured interview guide first queried for general feedback and experiences, followed by prompts for "what was useful or helpful?" then "what was not helpful, or annoying?" and finally, suggestions for changes or improvements to the protocol and app. Participants also completed web-based questionnaires on demographic characteristics and family functioning at the start and end of their study participation. Participants received gift cards valued up to US $150 for completion of the different components of the study.
EMA and diary data were collected using Ohmage, an open-source mobile survey app supported by a web platform that supports the collection, storage, analysis, and visualization of EMA or self-monitoring data streams. Ohmage is a feature-rich and extensible platform that facilitates the collection of multidimensional, heterogeneous, and complex personal data streams. The software was programmed using time-based reminders to display question sequences and response choices on the smartphone screen. EMA survey responses were automatically timestamped, geotagged, and linked to the participant's assigned study identifier used as their login ID. Web interfaces were available for researchers to access and view participant data. The Ohmage user interface was designed based on feedback from behavioral and technology researchers focusing on group participants and end users of the system [25].
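The response-handling behavior described above (time-based reminders, automatic timestamping, geotagging, and linkage of each submission to the participant's assigned study identifier) can be illustrated with a minimal sketch. The class and field names below are hypothetical illustrations for clarity, not the actual Ohmage data schema.

```python
# Minimal sketch of an EMA response record as described above: each submitted
# survey is timestamped, geotagged, and linked to the participant's study login ID.
# Class and field names are hypothetical, not the Ohmage schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, Optional, Tuple

@dataclass
class EmaResponse:
    participant_id: str          # assigned study identifier used as the login ID
    prompt_window: str           # e.g., "wake-up", "before school/work", "afternoon", "bedtime"
    answers: Dict[str, str]      # question ID -> selected response choice
    submitted_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    location: Optional[Tuple[float, float]] = None   # (latitude, longitude) if available

def record_response(store: list, participant_id: str, prompt_window: str,
                    answers: Dict[str, str],
                    location: Optional[Tuple[float, float]] = None) -> EmaResponse:
    """Timestamp, geotag, and store one EMA submission."""
    response = EmaResponse(participant_id, prompt_window, answers, location=location)
    store.append(response)
    return response

# Example: one bedtime submission for a hypothetical participant.
log: list = []
record_response(log, "P-03-child", "bedtime",
                {"stress_now": "2", "ate_dinner_with": "whole family"},
                location=(34.05, -118.24))
```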
Participants were prompted to respond 4 times daily to EMA/diary surveys on the following domains: parental monitoring and positive parenting, parent involvement and discipline, parent-child conflict and resolution, positive interactions and support, positive and negative affect, sleep, stress, family meals, and general child and family functioning. Although many family assessment tools are widely available to researchers, clinicians, and families, none directly measure daily routines in real time. EMA/diary domains were chosen based on systematic reviews of standardized family functioning measures [26][27][28][29], which consistently assess communication, conflict, problem solving, cohesion or bonding, affect or emotion, organization, or regulation (eg, roles, rules, leadership, monitoring, and stress; see Table 1). EMA/diary questions were adapted from retrospective or global self-reported family measures. Domain and measure selection decisions were also informed by their use in intervention research with high-risk adolescents and the desire to balance with domains linked to resilience and wellness. Table 1 shows the EMA/diary domains and global/retrospective self-report measures that were adapted for EMA format. EMA/diary question contents are available as Multimedia Appendices 1-3.
The timing of the EMA vibration/ring prompts was scheduled by the participants and the study coordinator at times convenient for their individual schedules as follows: (1) morning upon awakening, (2) before school/work, (3) between the end of the school or work day and dinner, and (4) before bedtime. Upon hearing the reminder, participants were instructed to stop their current activity and complete a short (less than 5 min) EMA. Families received 1 phone call on the third day of the EMA period from the study coordinator to inquire about technical problems with the smartphone and app and answer any other study questions.
Table 1. EMA/diary domains, adapted measures, and prompt frequency (frequency | measure adapted for EMA | domain):
All | Single rating of current stress level (1=not, 5=very) | Stress
All | Positive and negative affect schedule [30] and personal affect measure [31] | Affect and mood
3 × (not wake-up) | Stattin and Kerr parental monitoring questionnaire [32,33] | Monitoring/positive parenting
3 × (not wake-up) | Alabama parenting questionnaire [34,35] | Parent involvement and inconsistent discipline
3 × (not wake-up), end of day only | Issues checklist [36,37] and network of relationships inventory (child) [38] | Parent-child conflict
3 × (not wake-up) | Conflict tactics scale, resolution subscale [39] | Conflict resolution
End of day only | Network of relationship inventory, companionship subscale [40] | Positive interactions
End of day only | Who do you eat with and doing other activities? | Family meals
End of day only | Outcome rating scale and child outcome rating scale [41] | Overall functioning
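Because the four daily prompt windows were anchored to each participant's own schedule rather than to fixed clock times, the reminder schedule differs per person. The sketch below shows one way such participant-specific reminders could be generated; the example times and the function are illustrative assumptions, not the scheduling logic actually used in the study.

```python
# Sketch of participant-tailored reminder scheduling for the four daily windows
# described above (wake-up, before school/work, after school/work, bedtime).
# The example times are hypothetical; the study set them individually with each family.
from datetime import date, datetime, time, timedelta
from typing import Dict, List

def build_reminder_schedule(window_times: Dict[str, time],
                            start: date, days: int = 14) -> List[datetime]:
    """Return one reminder datetime per window per study day."""
    reminders = []
    for d in range(days):
        day = start + timedelta(days=d)
        for window, t in window_times.items():
            reminders.append(datetime.combine(day, t))
    return sorted(reminders)

# Example: a child who wakes at 7:00, leaves for school at 7:45,
# returns home around 16:30, and goes to bed at 21:00.
schedule = build_reminder_schedule(
    {"wake-up": time(7, 0), "before school/work": time(7, 45),
     "after school/work": time(16, 30), "bedtime": time(21, 0)},
    start=date(2013, 7, 1))
print(len(schedule))  # 4 prompts/day x 14 days = 56 scheduled reminders
```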
Data Analysis
Descriptive analyses for demographic characteristics were conducted using simple frequency distribution statistics in Stata 15.1 (StataCorp). The qualitative user-experience interviews were audio recorded and transcribed verbatim. Of the 36 participants, 31 had audio recorded interviews available for transcription (5 audio files were inadvertently erased before transcription), and transcripts were redacted to remove personal identifying information and uploaded to the Dedoose web-based mixed methods analysis platform (version 4.5.91, Sociocultural Research Consultants 2013). A grounded theory inductive approach was used to code the data to identify key themes that emerged from the data [42,43]. The coding scheme was developed by the lead anthropologist with 2 research assistants. The research assistants engaged in initial discussion around substantive codes emerging from the data and analytic categories that evolved into tangible themes; the generated codes were organized into broader, more conceptual themes. The lead anthropologist reviewed all themes identified by the research assistants for the discrepant cases. The codes were shared with the research team and revised over several iterations. Codes and excerpts were retained for analysis when there was agreement between the coders and authors.
Results
Children were on average aged 12 years (SD 1.44), mothers were 46.25 years (SD 3.81), and fathers were 40.33 years (SD 3.51). Approximately half were white (n=20), one-third were Latino (n=11), 9% (3/35) were Asian, and 3% (1/35) were black. Tables 2 and 3 present more demographic results for children and parents, respectively. Response rates were high overall, including some participants who completed more EMAs than scheduled (prompted), either by reporting for more than 14 days or reporting more on some days to compensate for missed EMAs (typically for the previous day). Overall, the response rate excluding more than 4 EMAs in a day and more than fourteen days of reporting (ie, the on-time and per protocol response rate) was 96.2% (1941/2016), with children slightly lower at 95.1% (799/840) and parents slightly higher at 97.1% (1142/1176). Overall, 69% (25/36) of the participants had 100% or greater response rates; 60% of the children and 76% of the parents. The lowest response rate among children was 70% (39/56) and 79% (44/56) among parents. A total of 6 parents and 3 children responded for 16 to 19 days. In terms of missed EMAs, children tended to miss the morning and noontime EMAs, whereas parents tended to miss the late afternoon/early evening EMAs followed by the morning EMAs.
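The on-time, per-protocol response rates reported above follow directly from the scheduled number of prompts (4 per day over 14 days per participant). The short sketch below simply recomputes the published counts rather than re-analyzing any raw data.

```python
# Recompute the per-protocol response rates reported above from the scheduled
# prompt counts (4 prompts/day x 14 days) and the completed-survey counts.
PROMPTS_PER_DAY, STUDY_DAYS = 4, 14

def response_rate(completed: int, participants: int) -> float:
    scheduled = participants * PROMPTS_PER_DAY * STUDY_DAYS
    return 100.0 * completed / scheduled

print(round(response_rate(1941, 36), 1))  # overall: 96.3 (reported as 96.2%, 1941/2016)
print(round(response_rate(799, 15), 1))   # children: 95.1 (799/840)
print(round(response_rate(1142, 21), 1))  # parents: 97.1 (1142/1176)
```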
Qualitative results from the analysis of user-experience interviews are presented below based on 2 broad code themes and several subthemes that emerged from the data. The first broad code theme was feasibility, acceptability, and suggestions for the future, with the subcodes desire for feedback, seasonality, technical problems/challenges, survey burden, timing and frequency (weekends and duration), and global/recall web surveys. The second broad code theme was self-reflection, awareness, and seeds of change, with the subcodes decision making, parental monitoring, quality and quantity of time spent together, communication, self-regulation of stress and conflict, discipline, rewards and punishments, and sleep.
Table 2. Demographic characteristics of children at baseline (N=15).
Feasibility, Acceptability, and Suggestions for the Future
Participants found the smartphone self-monitoring feasible and acceptable and provided feedback for changes and improvements. Most participants reported enjoying their participation in the study:
Technical Problems and Challenges
Participants reported a few minor technical problems associated with the mobile phone, such as slow uploading of data or having to power the phone off and on to upload data. Some participants reported problems with the app freezing or force closing when they were trying to complete a survey. A few participants reported that they were not receiving reminders (alarms) to complete the surveys after a period.
Seasonality
Some participants noted that the survey questions needed to be geared toward the time of year. For example, for a number of participants who participated during the summer school break, questions about school and homework were inappropriate: There were parts that didn't seem to apply because it seemed like the survey was sort of designed to assess children when they're in school. So, since it's summer, sometimes, you know, the questions didn't seem to apply, and then particularly with the, the final assessment, there was a lot of stuff about school. So, some things we didn't know how to respond to… [mother, family 6]
Survey Burden: Timing and Frequency
With regard to the smartphone EMA, 10 participants reported finding the end of day bedtime survey burdensome because it was long, and several noted being tired: The only thing that I would say is, I thought the evening one was really long. And Probably about a week longer, but any more than that, I think it would have gotten like really repetitive and the answers would have all been the same.
Desire for Feedback
Participants noted that they would have liked the feedback from their participation in the study to know the findings from the survey and how these could help them and their families improve their relationships. Parents were very interested in getting more feedback from the study to improve their parenting skills and strategies.
Self-Reflection, Awareness, and Seeds of Change
Over 80% (25/31) of the participants reported increased awareness of their relationship dynamics with their child/parent, their own behavior, or their communication styles. Of the 5 participants who did not report changes in awareness of their family routines, 4 were children. For many participants, study participation provided novel opportunities to reflect on their family routines in general: These themes of reflection-seeding behavior change are represented throughout participants' feedback on their experiences in more specific domains (described below).
Decision Making
Several parents reflected on decision making in the family as a result of self-monitoring. For example, 1 noted:
Quality and Quantity of Time Spent Together
Participants reflected that the quantity and quality of time spent together were both important. One parent reported becoming aware of how seldom her family ate together: It makes you think about stuff. Like, are we talking? Are we eating together? And, it's kind of embarrassing like, "Ugh," we eat alone, or so and so ate with so and so. You know? That was cool for us as a family to think, "Oh my gosh. We don't eat together." [mother, family 5] In addition to self-reflective functions, families noted how EMA/diary self-monitoring helped them to be more accountable or consistent with their values or goals for their families. For example, 2 parents indicated an element of accountability from self-monitoring in regard to time with family: The process and the idea behind it, I thought was actually really good, '
Communication
Participants noted reflection, awareness, and some changes in patterns of communication in the family, including themes of limited time, positive or negative tone, praise or critique, and openness. For example, a mother stated: The son in this family also noted reactivity to self-monitoring and becoming aware of limited communication with his mother and that most often it involved giving him instructions: Another parent noted how reactivity to self-monitoring functioned by modeling questions that deepen levels of communication with her son around the domains assessed: So, there is that element of both of us are doing these surveys and then, for me, part of it was, "Okay, I want to start asking you these questions since I don't really ask you these questions." So, I would actually ask him more about the ins and outs of his day more, and then he would talk to me more about it 'cause I was asking him for this information. …So, in a way, I was getting more information from him, and we were discussing more between the two of us than we would, normally, if I wasn't doing the survey. [mother, family 3] Similarly, her son reflected that their improved communication made him feel closer to his parent: I enjoyed getting closer to my mom throughout the study because she'd always ask me questions that she wouldn't really normally ask me, and we got a little closer through that. [male child, 13 years, family 3]
Self-Regulation of Stress and Conflict
EMA self-monitoring was also reported to support the self-regulation of stress and staying calm, including during conflicts and their resolution. For example, 1 parent noted that self-monitoring helped her focus on trying to stay calm when interacting with her child: Well, I tried to become more of being calm (laughter) and not yelling. And, that was really it. …I mean, obviously, if you're tired or you're, somethin' else is going on, it's hard to do that. But, when I was in a relaxed state, it made me mindful of, "Okay, when she does something that annoys you, just be calm about it, and try and work through it." It doesn't always happen. But, it did make me more cognizant to try and just be more patient and talk with her, depending on what it was. [mother, family 10] Similarly, a child noted that self-monitoring helped her and her mother identify what was making them angry and enabled them to resolve conflict: One child became aware that she could not remember why she was angry with her parent: I think before I took the survey, I just wouldn't think about why I was mad at her. I'd just be so mad. But then when I sat down and took the survey, and it was sayin' what was it about - I was like, "Wait. I don't even remember, anymore, what it was about." [female child, 10 years, family 9] One mother noted that the EMA self-monitoring helped her communicate more calmly with her child when they were in conflict by reflecting on and using conflict resolution strategies represented in the EMA response options: Well, there's a certain question, for example, about, "Did you try to speak to your child calmly?"…The first time that I'm reading through them, I was like, "I don't know. Did I even try? (laughter) Did I just start yelling? Did I?" It…literally had me stop and, and take a step back and remember the whole scenario. You know, literally picture by picture, and break it down. And then, I caught myself, it's like, "I didn't even try." …and so after that, it was like, "Okay, let me try to speak to her calmly. Let me try and explain, you know, why the chair is yellow, and why the sky is blue." (laughter) ….And, towards the end of these last two weeks, I wasn't trying, I was speaking to her calmly. [mother, family 2] This mother also noticed her child using more positive reinforcement in her communication with a sibling, based on her initial reactivity to EMA and then her daughter modeling the behavior:
Discipline, Rewards, and Punishments
Although parents were generally not very comfortable talking about disciplining their children, many discussed becoming more aware of patterns of discipline or rules in their families. Some parents were more comfortable talking about rewarding or praising their children and were consciously working on improving their positive reinforcement of their children's behavior. For example, a mother spoke about becoming more aware of limits and ground rules: This mother also noted how reactivity to the EMA question and response content influenced her parenting behaviors. Another parent reported: About the [questions], what happened when she did a good job or behaved well this morning? …About half-way through the week, I realized that, I guess I'm really hugging and kissing her when I'm feeling like she behaved well, but I'm not hugging and kissing her and saying, "Oh, you did such a good job." [mother, family 9] In this example, reactivity to self-monitoring resulted in moving away from awareness to changing her behavior to more actively reinforce her child's good behavior.
Sleep
Finally, both children and parents noted that the study made them realize the importance of sleep in their daily routines and self-monitoring: Another parent gave an example of reflecting on the potential relationship between family conflict and sleep: Like was there more conflict when I was sleep deprived? (laughter) [mother, family 1] In this example, self-monitoring reactivity first resulted in a strong awareness of sleep and functioning. One mother noted how responding to sleep questions seeded motivation: I thought the sleep one was a very good question, 'cause then when I would answer it, I thought, "Ooh, yeah. I better get some more sleep." [mother, family 11] Overall, these results demonstrate how reactivity to EMA self-monitoring typically begins with increased awareness of behavior, followed by associations with antecedents or consequences. Then, some participants experienced motivation to change, with behavior changes supported by reminder functions of EMA prompts (alarms), accountability to subsequent reporting, and tracking of goal progress and outcomes.
Principal Findings
The results of this study support the high feasibility and acceptability of smartphone EMA use by young adolescents and parents for assessing and self-monitoring family daily routines and interactions over 2 weeks, as evidenced by response rates of 95% and higher and by the user-experience interviews. Some participants suggested that a third or fourth week of self-monitoring would further enhance the behavioral changes that they initiated. Some participants also reported preferring fewer surveys each day and fewer questions, particularly when considering a longer duration of self-monitoring beyond 2 to 3 weeks.
Our findings also suggest that smartphone self-monitoring may be a useful tool to support improvement in family functioning through functions of reflection on antecedents and consequences of situations, prompting positive and negative alternatives, seeding goals, and reinforcement by self-tracking for self-correction and self-rewards. These functions are core elements of self-regulation [14][15][16][17], which may now be enhanced by smartphone integration into daily routines. Reactivity in self-monitoring has been documented for a wide range of clinically relevant behaviors and may make an adjunctive contribution to intervention efforts [44]. The portability and convenience of smartphone integration into daily routines is creating novel opportunities to reinvigorate research on self-monitoring. Participants in this study reported increased awareness of their family routines, and many also reported behavioral changes in terms of decision making, parental monitoring, quantity and quality of time together, communication, self-regulation of stress and conflict, discipline, and sleep.
Our findings also suggest potential ethnic and/or income differences among parents in discussing discipline, rewards, and punishments that may warrant further exploration in future research. However, the small sample size does not warrant treating these differences as a primary result. White parents, from the higher-income site, seemed to be more comfortable discussing parenting practices with an emphasis on rewards and expectations and de-emphasizing punishments or negative reinforcements. Latino parents, from the low-income site, were more inclined to focus on parenting as a job to keep their children safe and seemed to be more comfortable discussing consequences or negative reinforcements. Some research has conceptualized that parenting practices and discipline may be moderated by the interaction of parental beliefs and ethnicity [45]. However, the small sample and recruitment sites confound not only ethnicity and income but also neighborhood safety, as the low-income neighborhood is noted for gang violence.
Limitations
This pilot study had several notable limitations. First, the sample size was small in terms of the number of families. Nonetheless, the overall number of participants is typical and acceptable for a qualitative pilot study focused on user experience and feedback, and saturation of themes was achieved. The sample lacked representation from African American and Asian-American families, lower-income white families, and higher-income Latino families. Second, the family wellness center recruitment sites attracted families primed for motivation to improve their family functioning, as did the recruitment material framing the study as seeking support in developing and testing mobile apps to assess and improve family functioning. Third, our measure/domain selection reflects assumptions for successful parenting and interactions that are prevalent in family assessment tools and evidence-based interventions for at-risk adolescents, while also including domains reflecting resiliency. Fourth, it is important to note that the steep rise in mobile phone use among children and adolescents has also raised concerns about possible adverse effects such as addictive tendencies, depression, anxiety, sleep disruption, and cyberbullying [46,47]. Fifth, the smartphone app did not employ passive monitoring of smartphone usage, which was available as a separate module in the Ohmage smartphone app platform but was not used because of both privacy and battery-preservation concerns. Notably, the study protocol and app did have an audio-sensing module, which continuously and passively monitored and classified the audio environment for speech versus nonspeech (eg, including discrimination of background audio from televisions) in small snippets of privacy-preserving nonaudio data, but the module failed to function outside of the laboratory because data were not transferred off the phones quickly enough through a mobile connection (as opposed to Wi-Fi in the lab), causing the phones' operating system to crash. Only the first study family had Audiosens data, and for only several days before the phones crashed. Since the time of the study in 2013, improvements in smartphone technology, memory, and wireless data speeds would now make this component more feasible and could potentially advance classification from speech/nonspeech to emotion detection. Finally, the study was not powered for statistical analyses; the primary aims were feasibility, acceptability, user experience, and preliminary perceived efficacy of smartphone app self-monitoring for assessing and potentially improving family routines.
Future research should evaluate reactivity to EMA and diary self-monitoring as a tool to improve family routines in larger and more diverse samples of families, with statistical power to robustly examine behavioral, symptom, state, and functioning changes. Further research could also consider recruiting families coping with challenges such as chronic illnesses, substance abuse, or conduct problems. Future studies should also include longer follow-up periods to examine the sustainability of self-monitoring and decreasing burden over time and the significance of behavior changes indicated by participants in this study. Furthermore, future studies could also provide empirical and theoretical insight into how sibling relationships can serve as important contexts for individual development and family functioning. Due to participant feedback about frequency and duration of assessments, future studies using frequent assessments for highly transient states or frequent behaviors (eg, every 30 min) should be short in duration (eg, seconds to minutes) to minimize response burden over shorter assessment periods (ie, several days) [20]. Conversely, daily assessments may allow for a longer duration over longer periods of time (eg, weeks to months). Finally, future research assessing family functioning should ensure that assessments are adequately collecting data during times or situations of interest to families [20]. For example, if a study examines parent-child interactions, assessments should only occur when parents and children are together, such as mornings, evenings, and weekends (eg, not when at school/work). Future studies should include larger samples with more diverse and higher-risk populations, longer study durations, the inclusion of passive phone sensors and peripheral biometric devices, and integration with counseling and parenting interventions and programs.
Conclusions
Due to the increasing ease of implementing EMA and diary self-monitoring via smartphones, practitioners may reconsider using smartphones to enhance psychotherapy, parenting programs, and other counseling modalities. Conversely, researchers using EMA and diaries should examine reactivity more consistently and robustly. Real-time data visualization tools (eg, time trends, correlations, and maps) hold the potential to make self-monitoring more salient and actionable through use in counseling sessions for problem solving, feedback, praise, and goal refinement. Machine learning algorithms also hold promise for detecting patterns and anticipating change points to trigger automated in-the-moment or just-in-time interventions. Further research is needed on self-monitoring as a purely self-directed intervention activity and the potential for enhancing therapeutic relationships. | 2020-04-02T09:07:12.859Z | 2020-06-01T00:00:00.000 | {
"year": 2020,
"sha1": "1a411dff4c27c929d0e2b004dbf0442cbbef1ff8",
"oa_license": "CCBY",
"oa_url": "https://formative.jmir.org/2020/6/e15777/PDF",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "21a8853eacd2046f027933e57636351cff5537d6",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
23890149 | pes2o/s2orc | v3-fos-license | Activation Function 2 in the Human Androgen Receptor Ligand Binding Domain Mediates Interdomain Communication with the NH2-terminal Domain*
Activation function 2 in the ligand binding domain of nuclear receptors forms a hydrophobic cleft that binds the LXXLL motif of p160 transcriptional coactivators. Here we provide evidence that activation function 2 in the androgen receptor serves as the contact site for the androgen-dependent NH2- and carboxyl-terminal interaction of the androgen receptor and only weakly interacts with p160 coactivators in an LXXLL-dependent manner. Mutagenesis studies indicate that it is the NH2-/carboxyl-terminal interaction that is required by activation function 2 to stabilize helix 12 and slow androgen dissociation critical for androgen receptor activity in vivo. The androgen receptor recruits p160 coactivators through its NH2-terminal and DNA binding domains in an LXXLL motif-independent manner. The results suggest a novel function for activation function 2 and a unique mechanism of nuclear receptor transactivation.
Steroid receptors interact with coactivators during the recruitment of active transcription initiation complexes required for hormone-regulated gene transcription (1). Transcriptional activation domains in the steroid receptors that may mediate these interactions include activation function 1 in the NH2-terminal domain and activation function 2 (AF2) in the ligand binding domain (LBD). Recent studies have focused on a family of p160 coactivators that interact with the AF2 region that include steroid receptor coactivator 1 (SRC1) (2) and the human transcriptional intermediary factor 2 (TIF2) (3). SRC1 and TIF2 contain distinct nuclear receptor interaction domains in the central and/or carboxyl-terminal regions (3,4). Mutagenesis studies demonstrated a functional link between AF2 activity in the LBD and the binding of p160 coactivators (5,6). The p160 coactivators interact with the AF2 hydrophobic surface of the LBD through conserved LXXLL motifs that form amphipathic α-helices (7,8). Recent co-crystal structures of nuclear receptor LBDs and LXXLL motif fragments confirm that AF2 recruits TIF2 and SRC1 through their LXXLL motifs (6, 9-11). A multistep mechanism for transcriptional activation by nuclear receptors involves hormone-dependent recruitment and association through these LXXLL binding motifs of histone acetyltransferase activity associated with the p160 coactivator family, CREB-binding protein/p300, and p300/CREB-binding protein-associated factor, resulting in chromatin remodeling (12,13) and the formation of a transcriptionally competent Srb/mediator (thyroid hormone receptor-associated protein/vitamin D receptor-interacting protein) coactivator complex (14).
However, androgen receptor (AR) AF2 activity is not detected in a variety of mammalian cell lines (15)(16)(17)(18) despite homology of the region with other nuclear receptors. We therefore investigated the mechanism whereby AR recruits p160 coactivators and the role of AF2 in AR function. It is demonstrated that weak interactions between the AR LBD and SRC1 and TIF2 correspond with weak AR AF2 activity. The AF2 surface in the AR LBD instead functions as a strong interaction site for the AR NH 2 -terminal domain that is required for AR activity in vivo. SRC1 and TIF2 interact with the AR NH 2terminal and DNA binding domain (DBD) regions in an LXXLL motif-independent manner mediated by the carboxyl-terminal region of SRC1 and the carboxyl-terminal and central regions of TIF2.
Mammalian Two Hybrid Assay-The NH2-terminal and carboxyl-terminal (N/C) interaction assay between the AR NH2- and carboxyl-terminal regions was determined using GALAR624-919, a fusion protein with Saccharomyces cerevisiae GAL4 DBD residues 1-147 and AR LBD residues 624-919 in pGALO (16,20) with VPAR1-660 (AR NH2-terminal and DBD residues 1-660) containing the herpes simplex virus VP16 transactivation domain residues 411-456 (16,20). CHO cells were transfected using DEAE-dextran (16,20) with 1 μg of GAL and VP16 fusion vectors and 5 μg of G5E1b-luciferase reporter. Activity was determined as indicated or in the presence or absence of 1 μM dihydrotestosterone (DHT). Fold induction relative to the no hormone control is indicated above the bars. For interactions between TIF2 and SRC1, GALAR624-919 was cotransfected with VPTIF2 or VPSRC1 fusion constructs in the CHO two hybrid assay. VPAR and VPAR1-660 were expressed with GALTIF2 or GALSRC1 mutants containing the GAL4 DBD. Control interactions were with pNLVP16 (VP16).
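Fold induction in this reporter-based two-hybrid assay is simply the reporter activity measured with hormone divided by that of the matched no-hormone control. The sketch below shows that calculation in generic form; the function name and the illustrative luciferase readings are assumptions for demonstration, not data from this study.

```python
# Generic fold-induction calculation for a luciferase-based two-hybrid assay:
# mean reporter activity with ligand divided by the matched no-hormone control.
from statistics import mean
from typing import Sequence

def fold_induction(with_hormone: Sequence[float], no_hormone: Sequence[float]) -> float:
    """Mean reporter activity with ligand relative to the vehicle-only control."""
    return mean(with_hormone) / mean(no_hormone)

# Illustrative (hypothetical) relative light units for triplicate wells.
print(round(fold_induction([5200, 4800, 5100], [510, 480, 495]), 1))  # ~10.2-fold
```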
In Vitro Binding Assays-GST fusion proteins were expressed in XL1-Blue Escherichia coli cells treated with 0.5 mM isopropyl-1-thio-β-D-galactopyranoside for 3 h after log phase growth. Bacteria were sonicated and centrifuged, and the supernatant was incubated with glutathione-agarose beads (Amersham Pharmacia Biotech) for 1 h at 4°C. Beads were washed five times with 0.5% Nonidet P-40, 1 mM EDTA, 0.1 M NaCl, 0.02 M Tris-HCl, pH 8.0, and incubated for 2 h at 4°C with and without 0.2 μM DHT, and in vitro translated proteins were labeled with 25 μCi of [35S]methionine (NEN Life Science Products) using the TNT T7 quick coupled transcription/translation system (Promega) in the presence and absence of 0.2 μM DHT. Beads were centrifuged, washed five times, and boiled in SDS. Input lanes contain approximately 20% that used for the binding reactions. GSTAR1-660 was prepared by excising AR1-660 coding for AR NH2-terminal and DBD residues 1-660 from GALAR using TthIII(blunt)/BamHI and cloned into pGEX-5X-1 (Amersham Pharmacia Biotech) at SmaI/BamHI. GSTTIF2M (TIF2 624-1141) and GSTTIF2C (TIF2 1144-1464) were PCR amplified, and fragments were cloned in pGEX-2T (EcoRI/BamHI). TIF2 carboxyl-terminal residues 1143-1464 were amplified from pSG5TIF2 by PCR and cloned into pcDNA3HA (provided by Yue Xiong) at the BamHI/XbaI sites to prepare 35S-labeled TIF2-C. pGEMhAR (provided by Jiann-an Tan and Frank S. French) coded for full-length human AR residues 1-919 and was used to prepare 35S-AR. GSTAR1-565 was prepared by digesting GALAR1-919 with HindIII(blunt)/BamHI and cloned into pGEX-3X at EcoRI(blunt)/BamHI. pcDNA3HA-AR-LBD expressing the human AR LBD residues 624-919 was digested from GALAR624-919 with BamHI/XbaI and cloned in the same sites in pcDNA3HA for in vitro translation.
RESULTS AND DISCUSSION
Expression of the AR DBD and LBD fragment AR507-919 ( Fig. 1A) or AR LBD residues 624 -919 fused with the GAL4 DBD (GALAR624 -919, Fig. 1B) shows little or no induction of transcriptional activity indicating the absence of AF2 activity. In contrast, agonist-dependent AF2 activity of the GAL4-glucocorticoid or estrogen receptors LBD fusion proteins were 16 Ϯ 6-fold and 3.6 Ϯ 0.3-fold (Fig. 1B). Lack of AF2 activity by the AR LBD might result from failure to recruit p160 coactivators. Moreover, in transient cotransfection assays, expression of SRC1 or TIF2 increased full-length AR transcriptional activity about 3-6-fold, which surprisingly was only partially diminished by mutation of the three LXXLL motifs in TIF2 (TIF2 m123, Fig. 1A) and SRC1 (21), suggesting that p160 coactivators can increase AR transactivation in an LXXLL motif-independent manner. We therefore investigated the interaction of AR with SRC1 and TIF2.
FIG. 1. Transcriptional activation by AR. A, effect of overexpression of p160 coactivators. The expression vector pCMVhAR507-919 coding for the AR DBD and LBD (AR507-919, 50 ng) was cotransfected without or with 2 μg of pSG5TIF2 or pSG5TIF2 m123 and 5 μg of mouse mammary tumor virus (MMTV) luciferase reporter. Full-length human AR expression vector pCMVhAR (AR, 20 ng) was cotransfected without or with 6 μg of pSG5SRC1, pSG5TIF2, or pSG5TIF2 m123 together with 5 μg of the MMTV luciferase reporter. The parent vector pCMV5 (p5, 50 ng) was cotransfected with 5 μg of the luciferase reporter. Monkey kidney CV-1 cells were transfected using calcium phosphate (34). The last two leucine residues in each of three LXXLL motifs were mutated to alanine in pSG5TIF2 m123 (3). B, transcriptional activity of AR, glucocorticoid receptor (GR), and estrogen receptor (ER) LBDs expressed as fusion proteins with the GAL4 DBD. CHO cells were cotransfected with 1 μg of pNLVP16 parent vector (VP16) together with 1 μg of GALAR624-919, GALGR486-778, or GALER250-595, and 5 μg of G5E1b-luciferase reporter (16,20). Cells were incubated 24 h with or without 1 μM DHT, dexamethasone, or 17β-estradiol with the cognate receptor fragment.
Of several fragments tested in a mammalian two hybrid assay, only TIF624-1287 and SRC568-1441, each with three (3) and four LXXLL motifs, respectively, interacted 2-3-fold with the AR LBD (GAL-AR624-919, Fig. 2), which was less than 10% the activity observed in the N/C interaction (see below and Fig. 4A), suggesting weak coactivator binding affinity compared with the interaction between the NH2- and carboxyl-terminal AR domains. Although results are shown at
saturating DHT concentrations (1 μM, Fig. 2, Table I), interactions between the p160 coactivators and the AR LBD in the two hybrid assay were detected at 0.01 nM DHT. The LBD regions of the glucocorticoid (486-778) and estrogen (250-595) receptors interacted with these fragments 69 ± 4-fold and 5.9 ± 1.2-fold, and 7.5 ± 1.7 and 8.2 ± 1.7, respectively (data not shown). However, overexpressed TIF2, but not a mutant with three mutated LXXLL motifs, increased activation by the AR LBD (AR507-919, Fig. 1A), indicating that exogenously expressed coactivators can rescue LXXLL motif-dependent AF2 activity in the AR LBD, which as shown below was blocked by site-directed mutations in AF2 (see Fig. 4B). The results suggest that the apparent lack of AR AF2 activity results from inefficient LXXLL motif-dependent recruitment of endogenous coactivators. Recovery of AF2 by overexpression of p160 coactivators suggests overall retention of nuclear receptor AF2 structure (6,22,23).
TABLE I. Summary of AR LBD mutants. Apparent equilibrium binding affinity and dissociation half-times were determined in COS cells at 37°C using wild-type or mutant pCMVhAR full-length AR or AR507-919 coding for the DBD and LBD (20,33). DHT concentration for at least 10-fold transcriptional activity (MMTV-Luc) was determined in CV-1 cells using pCMVhAR full-length wild-type and mutant. TIF2 two hybrid interaction was determined using VPTIF624-1287 and GALAR624-919 with wild-type or mutant sequence in CHO cells at 1 μM DHT, shown as fold induction relative to activity determined in the absence of hormone. The AR-TIF2 interaction was also determined by cotransfecting pCMVhAR507-919 and pSG5TIF2 with the MMTV-luciferase reporter in CV-1 cells assayed at 10 nM DHT. The N/C interaction (16,20) shows the DHT concentration for at least 3-fold induction using VPAR1-660 and GALAR624-919 with wild-type or mutant sequence determined in CHO cells. The N/C interaction was also determined by cotransfecting pCMVhAR1-660 and pCMVhAR1-503 and MMTV-Luc in CV-1 cells at 10 nM DHT. Androgen insensitivity syndrome (AIS) stage is on a scale where 1 is normal and 7 is complete (44). AF2, helix, and loop regions were based on crystal structures of estrogen and progesterone receptor LBDs (29,30). Signature sequence is amino acid residues 718-741 in human AR (31). PC indicates somatic prostate cancer mutation; ++, activity equivalent to wild-type; +, greatly reduced but detectable activity; −, not detectable; nd, not determined.
The role of the AR NH 2 -terminal and DBD regions in p160 coactivator recruitment was also investigated using the two hybrid assay. A 2-5-fold interaction between TIF624 -1179 or TIF1288 -1464 with full-length AR (VPAR, Fig. 2A) or the constitutively active NH 2 -terminal and DBD fragment AR1-660 (VPAR1-660, Fig. 2A) indicates interaction of AR with two regions of TIF2. This interaction increases to 7-14-fold by including the TIF2 glutamine-rich region in TIF624 -1287 and TIF1143-1464 ( Fig. 2A). The results of GST adsorption assays confirm that both the central and carboxyl-terminal domains of TIF2 interact with the AR NH 2 -terminal and DBD fragment (Fig. 3A). Deletion mapping of SRC1 indicates that mainly its carboxyl-terminal region interacts with AR or the AR NH 2terminal fragment, and deletion of the SRC1 carboxyl-terminal LXXLL motif did not diminish this interaction (Fig. 2B). Deletions of AR NH 2 -terminal residues 339 -499, but not ⌬14 -150 or ⌬142-337, decreased the SRC1 interaction by 50% suggesting this region of the NH 2 terminus contributes to the LXXLLindependent interaction with TIF2 and SRC1 (data not shown). We concluded that AR can recruit p160 coactivators through its NH 2 -terminal and DBD regions independent of the LXXLL motifs by interacting with the carboxyl-terminal region of SRC1 or the carboxyl and central regions of TIF2. Whereas the role of nuclear receptor NH 2 -terminal domains in recruiting 160 coactivators has been controversial (4, 24 -28), this interaction clearly contributes to the LXXLL motif-independent activation of AR.
The function of the AF2 region in AR-mediated gene activation was further investigated by site-directed mutagenesis. Sequence alignments based on steroid receptor LBD crystal structure predictions (29 -31) place several androgen insensitive and site-directed mutations within AF2 helices 3, 4, and 12 and a highly conserved nuclear receptor signature sequence (31). Sites for mutagenesis were based on an association with the androgen insensitivity syndrome and with retention of high affinity androgen binding ( Table I). All of the AR LBD mutants expressed at similar levels based on binding capacity and retained high affinity binding of the synthetic androgen [ 3 H]R1881 (K d 0.3-0.7 nM) ( Table I) indicating conservation of the ligand binding pocket. However, mutations at V889M, Y739A, W741A, E897K, I898T, and V716R increased the dissociation rate of androgen bound to full-length AR by 2-5-fold (Table I) suggesting an increased association rate and perturbation of the hormone binding region. V889M lies between helices 11 and 12 and causes nearly complete androgen insensitivity (32), increases the androgen dissociation rate (33), and interferes with the androgen-dependent interaction between the AR NH 2 -and carboxyl-terminal regions (16,20). The N/C interaction facilitates AR transcriptional activity at physiological androgen concentrations (34) but, unlike peroxisome proliferator-activated receptor ␥ (35), is not required for high affinity androgen binding (16).
When expressed in full-length AR, all AF2/signature sequence mutants, with the exception of K720A (see below), required 100 -1000-fold higher DHT concentrations to activate an androgen responsive reporter (Table I) indicating greatly reduced function by the mutant ARs. Almost all of the AF2/ signature mutants had reduced to undetectable interaction with TIF2 (Table I), SRC1 (data not shown), and the AR NH 2terminal domain (Table I), whereas most mutants outside this region had wild-type activity. Transcriptional activity at 0.1-1 nM DHT in the absence of an N/C interaction for V716R and E897K (Table I) shows that AR function can be compensated in vitro by elevated androgen levels (34), whereas in vivo, decreased N/C interaction is associated with partial (I737T, F725L) or complete (I898T, V889M) androgen insensitivity (Fig. 4A). Transcriptional activity of the AR DBD/LBD fragment AR507-919 coexpressed with TIF2 or with the AR NH 2terminal fragment AR1-503 lacking the AR DBD was also decreased by several of the mutations (Fig. 4B, Table I). Thus many of the same residues in the AF2/signature sequence serve as both a weak binding site for p160 coactivators and for the AR NH 2 -terminal domain. However, the binding sites are not identical, because AR mutant I898T greatly decreased the N/C interaction but retained strong p160 coactivator binding, and K720A retained the N/C interaction but essentially lost p160 coactivator binding (Fig. 4, A and B).
The functional significance of the AR AF2 region was therefore distinguished by these mutations, K720A and I898T. Lys-720 lies within helix 3 of the AF2 hydrophobic surface in a region highly conserved among nuclear receptors. Lys-720 corresponds to Lys-366 in mouse estrogen receptor, whose mutation eliminates estrogen receptor transcriptional activity (22), and to Lys-301 in peroxisome proliferator-activated receptor ␥, where it forms part of an LXXLL motif charge clamp (9). K720A retains the transcriptional activity of wild-type AR (Table I) (36), even though the p160 coactivator binding by the LBD is low to undetectable (Fig. 4, A and B, Table I). Retention of wild-type AR transcriptional activity by K720A correlates with the 21-fold N/C interaction (Fig. 4A), but not with the LXXLL motif-dependent p160 coactivator recruitment by the AR LBD (Fig. 4, A and B). An AR somatic mutation at this same site (K720E) in a bone metastases of hormone refractory prostate cancer also retained a normal transcriptional response (37,38) typical of most prostate cancer AR mutations (39). A mutation at the corresponding Lys-366 in the estrogen receptor distinguished the binding of SRC1 and RIP140, coactivators that interact through LXXLL motifs at the same hydrophobic cleft (22), suggesting this residue contributes to multiple overlapping interaction sites. I898T, on the other hand, retains strong coactivator binding to AF2 but has a greatly reduced N/C interaction (Fig. 4, A and B) and is associated with complete androgen insensitivity (Table I). Thus a decline in the N/C interaction at AF2, but to a much less extent coactivator interaction at AF2, is associated with androgen insensitivity and thus loss of AR function in vivo.
Although p160 coactivators may contribute to the N/C interaction (4,40,41), several lines of evidence, including recent studies with the progesterone receptor (42), support a direct N/C interaction. 1) In our studies, overexpression of TIF2 or SRC1 has no effect on the AR N/C interaction in the mammalian two hybrid assay (data not shown). 2) The AR N/C interaction is detected in both mammalian and yeast two hybrid assays. 3) AR GST adsorption experiments where the GST-AR LBD fusion protein interacts in an androgen-dependent manner with the AR NH 2 -terminal domain (Fig. 3B) are consistent with a direct N/C interaction. 4) The N/C interaction site in the AR LBD overlaps, but is not identical to, the p160 coactivator LXXLL motif binding site. 5) The AR LBD appears to bind the NH 2 -terminal domain with higher affinity than it does the LXXLL motif. The data predict that AF2 mutations that disrupt p160 coactivator binding alter male phenotypic expression only if they interfere with the overlapping N/C interaction site.
Most AF2 and signature sequence mutations that increase the androgen dissociation rate and cause severe androgen insensitivity (Table I) (43) are associated with helix 12 (29). Androgen dissociation rates from the DBD/LBD AR507-919 fragment increased 7-fold from t1 ⁄2 44 min to t1 ⁄2 3-8 min at 37°C by W741A, I898T, Y739A, and V889M (Table I). Trp-741 in helix 5 is predicted to contact Ile-898 in helix 12, Tyr-739 in helix 4 contacts Val-911 in helix 12, and Val-889 lies between helices 11 and 12 (Fig. 5). Trp-741 corresponds to Trp-755 in the progesterone receptor, which directly interacts with bound agonist (29), so a mutation at this site could directly increase FIG. 4. AR mutations that distinguish coactivator binding and the N/C interaction. A, two hybrid interaction assay between the AR LBD mutants and the AR NH 2 -terminal domain, TIF2 and SRC1. GALAR624 -919 coding for the AR LBD residues 624 -919 with wild-type sequence (WT) or the indicated mutations were tested in the CHO cell two hybrid assay as described under "Experimental Procedures" using pNLVP16 (VP16), VPAR1-660 (AD) coding for the NH 2 -terminal region and DBD, or the VP16 fusion proteins with full-length SRC1 (SRC) and TIF2 (TIF). The experiment shown is representative of at least three independent experiments where fold induction is shown above the bars. B, transcriptional activation by the AR LBD in the presence of TIF2 and the AR NH 2 -terminal region. Transient cotransfection experiments were performed in CV-1 cells using the MMTV-luciferase reporter vector as described under "Experimental Procedures" in the absence (Ϫ) or presence (ϩ) of 10 nM DHT. AR507-919 with wild-type (WT) or mutant sequence as indicated were coexpressed with 0.5, 2, and 6 g of pSG5-TIF2 or 0.1, 0.5, or 1 g of pCMVhAR1-503 coding for the AR NH 2 -terminal region but lacking the AR DBD. The experiment shown is representative of at least three experiments, and the fold induction is shown above the bars.
the ligand dissociation rate. On the other hand, I737T in helix 4 and F725L between helices 3 and 4 cause only partial androgen insensitivity (44) and are not predicted to contact helix 12. Nor do they influence the androgen dissociation rate or completely disrupt the N/C interaction (Table I). These and other mutations not associated with helix 12 (V716R, K720A, Q867H/ P868D) retained the wild-type androgen dissociation rate. Thus helix 12 appears to stabilize androgen in the binding pocket.
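Assuming simple first-order ligand dissociation (a standard kinetic assumption, not stated explicitly in the text), the dissociation rate constant relates to the half-time as k_off = ln 2 / t1/2, so the fold increase in dissociation rate is just the ratio of half-times. A brief check of the numbers quoted above:

```python
# Relation between dissociation half-time and first-order dissociation rate
# constant (k_off = ln 2 / t1/2), assuming simple first-order dissociation.
import math

def k_off_per_min(t_half_min: float) -> float:
    return math.log(2) / t_half_min

# Wild-type AR507-919 (t1/2 = 44 min) vs. a mutant half-time of ~6 min,
# i.e., within the 3-8 min range reported for W741A, I898T, Y739A, and V889M.
print(round(k_off_per_min(6.0) / k_off_per_min(44.0), 1))  # ~7.3-fold faster dissociation
```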
The N/C interaction appears to further stabilize helix 12 and bound androgen. As we showed previously, deletion of the NH 2 -terminal domain increases the androgen dissociation rate by 4 -5-fold (20,33). Furthermore, E897K in helix 12 eliminates the N/C interaction and increases the androgen dissociation rate 2-fold (Table I). Glu-897 is equivalent to Glu-471 in helix 12 in peroxisome proliferator-activated receptor ␥, which forms part of the LXXLL charge clamp of AF2 (9), supporting overlapping coactivator and AR NH 2 -terminal binding sites. V716R, though not positioned near helix 12, eliminates the N/C interaction, and the androgen dissociation rate increased 5-fold (Table I). Mutations at Gln-867 and Pro-868 in the loop between helices 10 -11 to the conserved residues HD of the progesterone and estrogen receptors (Q867H/P868D, Table I), where Gln-867 juxtaposes Tyr-915 in helix 12, increased the N/C interaction 2-fold (data not shown) and slowed the androgen dissociation rate to a similar extent (Table I).
Thus the N/C interaction and the AF2/signature sequence residues appear to contribute to the positioning of helix 12, which results in slowing the dissociation rate of bound androgen. SRC1 slowed estrogen receptor ligand dissociation (45); however, overexpression of TIF2 had no effect on androgen dissociation from full-length AR (data not shown). The data are consistent with overlapping LBD AF2 binding sites for TIF2 and the AR NH 2 -terminal domain, which in the presence of androgen agonist participates in the N/C interaction. For the AR, p160 coactivator recruitment appears to be mediated primarily by the AR NH 2 -terminal and DBD regions. As illustrated in Fig. 5, the data suggest that AF2 in the AR LBD serves predominantly as an N/C interaction site, which upon agonist binding contributes to stabilization of helix 12 to slow androgen dissociation necessary for AR functional activity at physiological androgen concentrations. FIG. 5. Model of the AR LBD. The predicted model of the AR LBD is shown based on the LBD structure of the progesterone and other nuclear receptors (29,30,46). All indicated residues are conserved with the progesterone receptor except Ile-898 in AR is substituted for Val-912. Part of helices 3, 4, 12, and a short 3 residue helix between helices 3-4 that comprise AF2 are shown in red. AR AF2 and the signature sequence include charged residues Glu-897 and Lys-720 in blue and hydrophobic residues in yellow. | 2018-04-03T01:38:41.621Z | 1999-12-24T00:00:00.000 | {
"year": 1999,
"sha1": "09e506f552a306564acc9ba1653c234206d8e029",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/274/52/37219.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "7f7ef86ed0662fe3c594ec5d3a72a0430c1ab9c7",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
253759786 | pes2o/s2orc | v3-fos-license | Perspectives on Collaboration between Physicians and Nurse Practitioners in Japan: A Cross-Sectional Study
Background: Nurse practitioners (NPs) are known as effective healthcare providers worldwide. In Japan, nurse practitioner adoption is considered to be in a shaky period. Although nurse practitioners were introduced approximately 10 years ago at the initiative of educational institutions in Japan, the full extent of this trend is not known. Therefore, we have clarified the whole picture of nurse practitioners from two directions: the perception of nurse practitioners in Japan and the perception of physicians who work with nurse practitioners. This will inform discussions regarding the recruitment of nurse practitioners at the national level in Japan. Methods: From 18 June to 24 July 2021, we administered a nationwide cross-sectional survey of NPs and physicians working in the same clinical settings as NPs in Japan. The domains of the survey included “scope and content of work”, “perceptions of NPs’ clinical practice”, and “individual clinical practice characteristics”. The survey was distributed and collected digitally. Results: The total number of respondents to the survey was 281, including 169 NPs and 112 physicians; the percentage of NPs who responded was 50.5%. The number of valid responses was 164 NPs and 111 physicians, for a total of 275 respondents. Approximately 60% of NPs are concentrated in Tokyo, the capital of Japan, and the three prefectures adjacent to Tokyo. They also worked fewer hours per week, cared for fewer patients per day, and earned less money than physicians. More physicians than NPs indicated that “more NPs would improve the quality of care”. A total of 90.1% of physicians and 82.3% of NPs agreed that “Nurse practitioners should practice to the full extent of their education and training,” and 73.9% of physicians and 81.7% of NPs agreed that “Nurse practitioners’ scope of practice should be uniformly defined at a national level”. Conclusions: This study clarified the present working conditions of NPs from NPs’ and physicians’ perspectives in Japanese contexts. Japanese NPs may be able to work effectively in collaboration with physicians. Therefore, the implementation of NPs in Japanese medical conditions should be discussed further for better healthcare.
Introduction
Nurse practitioners (NPs) are known as healthcare providers who contribute to improving access to healthcare and patient satisfaction [1][2][3]. The recruitment of NPs for health care innovation in many countries has become a global trend [4]. International standards for NPs were set forth by the International Council of Nurses in 2020, but specific authority and job descriptions vary depending on the employing country [5]. The United States, where most NPs practice independently, has been active for over 50 years since the birth of NPs, making them indispensable, especially as primary care providers [6]. Perhaps because of this, many NPs in the U.S. recognize that their practice improves the safety, efficiency, etc., of medical care, but this does not necessarily mean that U.S. physicians have the same perception as NPs [7]. The evaluation of NP practices by physicians, who are the primary users of medicine, influences healthcare policy decisions.
In Japan, on the other hand, NPs were created approximately 10 years ago, following the model of NPs in the United States. Japanese NPs are not certified by the national government but by an organization composed of graduate schools that train NPs [8]. Japan's medical background is a country that has adopted the universal health insurance system recommended by the World Health Organization, a system in which anyone can receive medical care anywhere at a low cost, and a long-term care insurance system that covers care for the elderly throughout the country. However, the country continues to have the largest proportion of elderly people in the world, and the declining birthrate is not slowing down, so the question is whether this system can be maintained [9][10][11]. As part of its efforts to maintain the healthcare delivery system, the Japanese government is steering the transfer of physicians' duties to non-physician healthcare professionals. In particular, a system was created in 2015 for nurses to be able to perform 38 specific types of medical procedures under comprehensive instructions from physicians if they receive training at institutions designated by the government [12]. Originally, the law stipulated that the duties of Japanese nurses were to "care for the medical treatment of patients" and "assistance in the treatment of physicians" [13]. The Specific Medical Practice training system has positioned nurses' medical practice as "assisting physicians in the practice of medicine", and the organization that oversees graduate schools that educate NPs has mandated training in specific medical practices as part of their educational curriculum.
On the other hand, the process of this legalization led to the interpretation that the scope of duties of nurses would be limited to certain medical procedures and that nurses could perform even relatively invasive medical procedures, such as intubation and extubation, as "assisting physicians" if directly instructed by the physician.
Therefore, at present, NPs in Japan practice the medical acts specified by the government under the comprehensive supervision of physicians, and practice other medical acts under the direct supervision of physicians within the scope of their discretion.
Given these factors, it is extremely important to know how physicians evaluate NPs in their clinical practice in order for NPs to operate in Japan. Therefore, this study sought to clarify the current status of NPs' job descriptions in Japan and to determine how NPs and physicians who work with NPs perceive the current status of NPs. This is the first report of its kind in East Asian countries. Therefore, the clarification of these findings may provide significant data for discussions on the official use of NPs in Japan in the future, and may influence decision-making on the introduction of NPs in countries in the midst of the NP adoption wave, especially in East Asian countries with similar cultural backgrounds.
Study Design
This study is a national cross-sectional survey of NPs and physicians collaborating with NPs in Japan; collaborating with an NP was defined as working in the same department. We conducted this survey online from 18 June to 24 July 2021.
Samples
The NP sample consisted of 338 of the 583 NPs whose credentials had been certified by the Japanese Organization of Nurse Practitioner Faculties (JONPF) by 31 March 2021 and who had given permission to be contacted for research purposes. The physician sample comprised physicians working in the same departments as these NPs. The questionnaire was distributed to the eligible NPs via email from JONPF, and the NPs were asked to pass questionnaires on to the physicians who collaborate with them. Because physicians were recruited through the NPs by this snowball method, we did not count the number of questionnaires distributed to physicians and therefore could not determine the response rate for physicians.
Measurements
The questionnaire was based on the instrument developed by Donelan [14] in 2020 and was adapted into a Japanese version after obtaining permission from the authors. Five experts with knowledge of the medical backgrounds of both the U.S. and Japan, and familiar with the activities of NPs in Japan, reviewed the original questionnaire and modified it to fit the actual situation in Japan. The domains of the questionnaire included "scope and content of work", "perceptions of NPs' clinical practice" and "individual clinical practice characteristics". The questionnaires were distributed and collected via the internet using Google Forms.
Analysis
Suspected outliers in the response data were corrected by confirming the correct values with the respondents via e-mail. Statistical comparisons were made between the two groups, NPs and physicians working with NPs, at a significance level of 0.05 (95% confidence interval), and logistic regression analysis was also performed. SPSS Statistics version 27 (IBM, New York, NY, USA) [15] and EZR (Saitama Medical Center, Jichi Medical University, Saitama, Japan) were used for the statistical analysis. The chi-square test was used for variables on the nominal scale, with Fisher's exact test used when an expected frequency was less than 5. For continuous variables, t-tests or Mann-Whitney U-tests were used, depending on the characteristics of the data.
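As an illustration of the test-selection logic described above, the following is a minimal sketch in Python of how such group comparisons are typically run; it is not part of the original study, and the function names, example data and use of SciPy are assumptions (the authors used SPSS and EZR).

import numpy as np
from scipy import stats

def compare_categorical(table):
    """Compare a nominal variable between NPs and physicians.
    `table` is a contingency table of counts; Fisher's exact test
    is used for 2x2 tables when any expected count is < 5."""
    chi2, p, dof, expected = stats.chi2_contingency(table)
    if (expected < 5).any() and np.asarray(table).shape == (2, 2):
        _, p = stats.fisher_exact(table)
        return "Fisher's exact", p
    return "chi-square", p

def compare_continuous(np_values, md_values, alpha=0.05):
    """Compare a continuous variable (e.g., weekly working hours):
    t-test when both groups look normally distributed, otherwise
    the Mann-Whitney U test."""
    normal = all(stats.shapiro(g).pvalue > alpha for g in (np_values, md_values))
    if normal:
        _, p = stats.ttest_ind(np_values, md_values, equal_var=False)
        return "t-test", p
    _, p = stats.mannwhitneyu(np_values, md_values, alternative="two-sided")
    return "Mann-Whitney U", p

# Hypothetical example: weekly working hours for NPs vs physicians
np_hours = np.random.default_rng(0).normal(50.3, 8, 164)
md_hours = np.random.default_rng(1).normal(58.6, 9, 111)
print(compare_continuous(np_hours, md_hours))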
Ethical Considerations
The questionnaire clearly stated that respondents were deemed to have given their consent by answering it, so that the free will of each individual was respected. In addition, participant information obtained from the questionnaire responses was used only for research purposes and was kept strictly confidential. Contact information was provided for respondents who wished to withdraw their responses, and it was guaranteed that responses could be withdrawn before publication of the research results. Although respondents were not required to write their names on the questionnaire, we asked them to provide their e-mail addresses so that we could confirm their answers if these were clearly erroneous. The study was conducted with the approval of the International University of Health and Welfare Ethics Committee (approval number 20-Im-017). Our study is reported according to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines.
Results
The total number of respondents to the survey was 281, including 169 NPs and 112 physicians; the percentage of NPs responding was 50.5%. The numbers of valid responses were 164 NPs and 111 physicians, for a total of 275 respondents.
Characteristics of Respondents
Regarding the characteristics of the respondents (Table 1), in terms of gender, the NPs were predominantly female (59%) and the physicians were predominantly male (94%) (p < 0.001). The mean age was younger for NPs: 37.4 years (SD 22.1) for NPs and 45.2 years (SD 21.9) for physicians. In terms of final education, all NPs had a master's degree or higher, and 0.6% had a doctoral degree. Among the physicians, 10.2% had a master's degree and 27.0% had a doctorate. Since the annual income question was optional, there were 207 respondents for this item; 80.3% of NPs had annual incomes between JPY 5 and 10 million, while 88.2% of physicians had annual incomes of JPY 10 million or more, a significant difference (p < 0.001). There was a significant difference in the mean number of years of clinical experience, with 18.9 years (SD 10.2) for physicians and 3.59 years (SD 3.2) for NPs (p < 0.001). Regarding NPs' practice locations, the Tokyo metropolitan area (Tokyo and the three prefectures adjacent to Tokyo: Kanagawa, Saitama, and Chiba) accounted for about 60%. The largest number of both physicians and NPs belonged to hospitals with 20-500 beds (60%). The most common affiliation for both physicians and NPs was the emergency department (MD 15.3%, NP 16.5%, p = 0.779), followed by general practice medicine (MD 12.6%, NP 9.8%, p = 0.456) and cardiovascular surgery (MD 7.2%, NP 7.9%, p = 0.826). The number of actual hours worked per week differed significantly, averaging 50.3 h for NPs versus 58.6 h for physicians (p < 0.001); the number of patients cared for per day also differed significantly, with 10.5 for NPs versus 19.4 for physicians (p < 0.001).

Table 2 shows the actual job description of NPs: the most common response by the NPs themselves was "blood sampling by arterial puncture" (86.6%), followed by "history taking and physical examination" (76.8%) and "interpretation of ECGs" (75.0%). The most common response on the physicians' side was "history taking and physical examination" (67.8%), followed by "peripherally inserted central catheter (PICC) insertion" (62.2%) and "interpretation of ECG" (61.3%). Items endorsed by more than 60% of both physicians and NPs without a significant difference were "history taking and physical examination" (p = 0.089), "PICC insertion" (p = 0.676) and "performing a simple ultrasound examination" (p = 0.144). Regarding whether the spread of the novel coronavirus disease 2019 (COVID-19) changed NPs' job descriptions, 38.4% of the NPs and 31.5% of the physicians reported that they had changed, a non-significant difference. In the open-ended responses on how their work changed, negative responses indicated that regular medical care could not be provided, while positive responses indicated that the importance of NPs became known within the hospital through their special duties on the front lines of infectious disease care and their full-time intensive care of patients with severe COVID-19.
Perception of the Team
In terms of the perceptions about the team (Table 3), for the question "Who are the team members you work with every day?", the most common responses from physicians were, in descending order, registered nurses (94.6%), nonresident physicians (90.1%), NPs (82.0%), residents and pharmacists (63.1% each), physical therapists (60.4%) and medical social workers (54.1%). Among advanced practice nursing roles other than NPs, professional nurses were named by 14.4%, certified nurses by 25.2% and nurses who had completed specific practice training by 1.8%.
Approximately 36% of both physicians and NPs agreed with the statement "When physicians and NPs perform the same types of procedures and laboratory tests, physicians provide higher quality care than NPs". In addition, 76.6% of physicians and 59.1% of NPs (p = 0.003) agreed with the statement "The physicians I work with trust the skills and clinical judgment (decision making) of NPs". For the statement "NPs are effective leaders of the care team, which includes physicians, nurses, and other health professionals", 55.0% of physicians and 39.0% of NPs were in agreement.
Respondents' Views on the Effect of an Increased Supply of Nurse Practitioners on the Quality of Healthcare
Regarding the question about improving the quality of care by increasing the number of NPs (Table 4), 81.1% of physicians and 71.3% of NPs agreed that "safety will improve", 88.3% of physicians and 84.1% of NPs agreed that "timeliness will improve", 78.4% of physicians and 72.0% of NPs agreed with "better effectiveness", and 73.0% of physicians and 71.3% of NPs agreed with "better patient-centeredness". All of these items concerned improving the quality of care with more NPs. On every item, the percentage of physicians responding "better" was higher than that of NPs. Items with significant differences were cost-effectiveness (84.7% of physicians vs. 65.9% of NPs, p < 0.001) and patient clinical outcomes (68.5% of physicians vs. 50.0% of NPs, p = 0.002).
Perceptions of NP Policy and Practice in Japan
Regarding the perceptions of NP policy and practice (Table 5), 90.1% of physicians and 82.3% of NPs agreed with the statement "NPs should practice the full range of their education and training". A total of 73.9% of physicians and 81.7% of NPs agreed with the statement "The scope of practice of NPs should be uniformly defined at the national level". For the statement "The physicians I work with do not understand NPs", 28.9% of physicians and 28.7% of NPs agreed, while approximately 70% of both groups disagreed. There was a significant difference for the statement "Physicians and NPs need to be paid the same fees to provide or perform the same services and procedures", with 36.9% of physicians and 54.3% of NPs in agreement (p = 0.005).
Discussion
A cross-sectional survey administered at the same time to two target groups, NPs and physicians working with NPs, revealed the current status of NPs in Japan and differences in perception between the two groups. In addition, because the questionnaire was a Japanese adaptation of the one administered in the U.S., many of the responses could be compared with the U.S. results. Although the two surveys cannot be compared in exactly the same way because of differences in time period, social background and sampling of the study subjects, we compared the perceptions of Japanese physicians and NPs from various perspectives, noting both similarities and differences.
One characteristic of Japanese NPs was that many of them were engaged in critical care in the Tokyo metropolitan area and other urban centers. This indicates that, unlike in the U.S. and other countries, Japanese NPs have not yet served to meet the demand for medical care in medically underpopulated areas without physicians.
In terms of the gender ratio of respondents, as in the NP group in the 2020 survey in the U.S. [14], there were more women than men in the NP group and more men in the physician group; the difference was that the proportion of men in the Japanese NP group was twice that in the U.S. Since such a relatively high proportion of male NPs is not seen internationally, Japan may offer a new model for discussing gender differences in the profession.
There is clearly a difference in annual income between physicians and NPs. However, this may reflect the nature of their practice due to differences in the number of hours worked per week and the number of patients cared for per day, as well as differences in the number of years of clinical experience with NPs, gender differences, and age differences. Japan has an inherent seniority system in which salaries increase with age. Many physicians did not require NPs to be on-call, etc., and perceived that since the responsibility is solely on the physician, the income would not be the same.
As for the specific job description of NPs, taking the patient's history and performing physical examinations were recognized as core tasks. In addition, non-invasive examinations, such as electrocardiograms and simple ultrasound examinations, and the minimally invasive procedure of arterial blood sampling for blood gas analysis were frequently performed, which indicates that many NPs in Japan obtain physical information on patients in critical situations by using medical knowledge and technology. Furthermore, among device-related procedures, the nationally specified acts were performed more often than non-specified acts, and PICC insertion in particular was recognized as an act that symbolizes NPs. Approximately 36% of both physicians and NPs believed that "physicians provide higher quality care than NPs when they perform the same type of procedure or perform a clinical examination." Paradoxically, this can be interpreted to mean that approximately 60% of physicians and NPs rated NP examinations and procedures as comparable to physician practice. In addition, 76.6% of physicians indicated that they trust the skills and clinical judgment (decision-making) of NPs, indicating that NPs receive a certain amount of positive feedback on their medical thinking and skills from the physicians they work with. In terms of perceptions within the team, 82% of physicians reported always working with NPs, whereas only 1.8% reported working with non-NP nurses who have completed specific practice training. The fact that 32.9% of NPs but only 1.8% of physicians named nurses who had completed specific practice training as everyday team members indicates that physicians do not recognize them as part of the team.
In fact, as of June 2021, when this survey was conducted, the actual number of nurses who had completed the specific act was 3307; subtracting the 583 NPs, the number was 2724, which is 4.67 times the number of NPs. This suggests that the government wants nurses who have completed specific practice training to function as key players in team medicine, but in order to do so, they will first need to be recognized as part of the team. Regarding the NP's leadership within the team, 55% of physicians agreed with the statement "The NP is an effective leader of the care team, which includes physicians, nurses, and other health professionals", indicating that more than half of the physicians in the field working with NPs rated NPs as functioning as team leaders.
In this study, NPs were sampled from the entire population. Therefore, one might argue that this is why it deserves to be a recommendation for national policy. On the other hand, the sampling of physicians was purposive and does not reflect the opinions of the population of physicians in Japan as a whole. However, at the very least, physicians who have seen NPs up close in actual clinical practice will better understand their capabilities than physicians who do not know them. They would be in a position to evaluate safety concerns even more severely since the physician who issued the order would be held accountable. Thus, this sampling provided a deep, multifaceted, and quantitative picture of the current status of NPs in real-world clinical practice. The results that many of these physicians recognize that increasing the number of NPs will improve the quality of medical care, that they practice the full range of education and training, and that the government should define the scope of their practice should serve as a reference for policy makers as they work to reform Japan's healthcare delivery system.
On the other hand, however, it is puzzling that NPs are less likely than physicians to believe that they themselves are contributing to improving the quality of medical care. Japan has the virtue of "modesty" and Japanese NPs believe in modesty [16]. This is a phenomenon that is difficult for Westerners to understand, often likened to a "bamboo ceiling", and understood as a negative in career development in the international community, especially in the West. However, in Japan, it is considered wisdom for career development without causing friction in society [17]. If this is the cause, there is no need to see it as a problem when practiced in the Japanese context. On the other hand, if it is due to a lack of clinical experience, then it will change over time, and no special measures will be necessary. If, however, the cause is that NPs feel incompetent and lack confidence, then additional education to build competence or an improved educational system may be necessary. In any case, the cause of this problem needs to be clarified in the future. This is because if NPs are to acquire prescriptive rights and assume independent practice in the future, it is a prerequisite that they demonstrate their own competence to those around them.
Limitation
The sampling of physicians in this survey is purposive sampling, and physicians who are not favorable to NPs may not have responded. Therefore, the opinions cannot be said to be representative of physicians throughout Japan, nor can a simple comparison of Japan and the U.S. be made. In the future, a nationwide survey with a randomized sampling of physicians is needed as the number of NPs expands.
In addition, this survey was conducted one year after the novel coronavirus disease 2019 began to spread in Japan in 2020, a period of change in which normal medical care was often not provided, and this may have affected the results in some way. It should also be noted that healthcare systems differ between countries and cultures, and these factors should be considered in future studies [18]. In the future, it will be necessary to continue to investigate NPs' duties and their role in medical care delivered alongside COVID-19.
Conclusions
This study clarified the present working conditions of NPs from NPs' and physicians' perspectives in the Japanese context. Japanese NPs may be able to work effectively in collaboration with physicians. Therefore, the implementation of NPs in the Japanese healthcare system should be discussed further for better healthcare. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: All relevant datasets in this study are described in the manuscript. | 2022-11-23T06:17:33.269Z | 2022-11-21T00:00:00.000 | {
"year": 2022,
"sha1": "e8f915cae9d009c047ae6efd1bf6b1d1a995d63f",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "2a47c15e588266aac8000e9752c5c00f3ad780b9",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235625961 | pes2o/s2orc | v3-fos-license | A Case of Post-traumatic Corneal Endothelial Dysfunction Treated with Descemet Membrane Endothelial Keratoplasty Combined with Cataract Surgery
Dear Editor, Management of endothelial dysfunction with coexisting cataract has been reported with several surgical methods abroad. According to these reports, a 3-step procedure with concurrent cataract extraction, intraocular lens implantation and Descemet membrane endothelial keratoplasty (DMEK), in other words, triple DMEK, does not carry a higher risk of complications than DMEK alone [1][2][3][4]. In patients 50 years or older, combined cataract surgery is recommended because of the likely need for cataract surgery after DMEK [5]. We know that several cases of triple DMEK have been performed in Korea. Nevertheless, we could not find reports of triple DMEK in Korea, so we would like to introduce a successful case from our hospital.
We report a case of post-traumatic corneal endothelial dysfunction successfully treated with triple DMEK surgery. A 51-year-old man was referred for impaired vision of 18 months' duration. He had a history of trauma 35 years earlier, when his right eye was hit by a baseball and the cornea was lacerated by his glasses; it was primarily repaired immediately at a local clinic. He had been examined at another hospital for traumatic cataract, endothelial dysfunction and peripheral anterior synechiae (PAS) since 2019, and was referred to our clinic. At the initial visit, his best-corrected visual acuity (BCVA) was 20 / 32 in his right eye. On specular microscopy, the endothelial cell density of his right eye was 683 cells/mm² and the coefficient of variation was 34. Slit lamp examination and CASIA (Tomey Corp., Nagoya, Japan) swept-source anterior segment optical coherence tomography showed 360˚ PAS, corneal opacity, corneal edema, and cataract in the right eye (Fig. 1A-1C).
We planned triple DMEK for his right eye. He requested simultaneous cataract surgery on his left eye because of the expected discomfort from postoperative anisometropia. As a result, he underwent simultaneous DMEK on his right eye and cataract surgery on both eyes. The donor graft was prepared precut, 8.00 mm in size. A side puncture was made and an ophthalmic viscoelastic device (OVD) was injected into the anterior chamber. A 2.8-mm corneal incision was made on the temporal side and anterior synechiolysis was attempted over an area of more than 8.0 mm, especially in the nasal half. We made a smaller capsulorrhexis than in usual cataract surgery. Routine cataract surgery was performed with careful polishing, and a hydrophobic acrylic 1-piece intraocular lens was implanted. After additional anterior synechiolysis, Descemet membrane stripping was done within the OVD-filled anterior chamber with the pupil dilated. The remaining OVD was removed meticulously. Manual miosis with peripheral iridotomy was performed, and the remaining steps of DMEK were carried out as usual.
On day 1 after surgery, BCVA was 20 / 500 and intraocular pressure (IOP) was 13.8 mmHg in his right eye. The anterior synechiae were almost completely released and the Descemet membrane was well attached (Fig. 1D-1F). One month after surgery, BCVA was 20 / 25 and IOP was 15.2 mmHg in his right eye. On specular microscopy of his right eye, the endothelial cell density was 2,203 cells/mm² and the coefficient of variation was 28. Corneal opacity was significantly reduced (Fig. 1G). The released anterior synechiae remained open on anterior segment optical coherence tomography (Fig. 1H). Two months after surgery, BCVA was 20 / 22 and IOP was 14.9 mmHg in his right eye.
In the surgical planning of cases like this, we would like to highlight some critical points. First, the capsulorrhexis should be smaller than in usual cataract cases. To prevent the intraocular lens from touching the DMEK graft, we think the capsulorrhexis should be 4.5 mm or smaller during triple DMEK. Second is minimal use of OVD and complete removal of OVD before the DMEK step. If OVD remains in the anterior chamber after cataract surgery, the unfolded DMEK graft may not attach well to the stromal bed. Lastly, when PAS is found before DMEK surgery, the graft size should be carefully determined, considering that dense PAS may be removed incompletely; the graft should therefore be smaller than the usual DMEK graft. In conclusion, the triple DMEK procedure provides rapid visual rehabilitation and a stable postoperative condition. It is also cost-effective because of the shorter admission and the reduced additional effort required for a second operation, such as nursing or anaesthesia staff. If skilful surgeons carefully consider the critical points above, triple DMEK seems to cause less endothelial damage and, owing to the nature of DMEK, to have no significant effect on refraction.
Conflict of Interest
No potential conflict of interest relevant to this article was reported. | 2021-06-25T06:17:13.160Z | 2021-06-21T00:00:00.000 | {
"year": 2021,
"sha1": "f9382c1d8abb09b6d56f576ed4beb59926c8c7fd",
"oa_license": "CCBYNC",
"oa_url": "https://www.ekjo.org/upload/pdf/kjo-2021-0062.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "db5f369f9154e843b077c18865b688c1fb645d40",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119432755 | pes2o/s2orc | v3-fos-license | Deformed relativistic and nonrelativistic symmetries on canonical noncommutative spaces
We study the general deformed conformal-Poincare (Galilean) symmetries consistent with relativistic (nonrelativistic) canonical noncommutative spaces. In either case we obtain deformed generators, containing arbitrary free parameters, which close to yield new algebraic structures. We show that a particular choice of these parameters reproduces the undeformed algebra. The modified coproduct rules and the associated Hopf algebra are also obtained. Finally, we show that for the choice of parameters leading to the undeformed algebra, the deformations are represented by twist functions.
Introduction
In a series of papers Wess [1] and collaborators [2,3,4] have discussed the deformation of various symmetries on noncommutative spaces. The modified coproduct rule obtained for the Poincaré generators is found to agree with an alternative quantum-group-theoretic derivation [5,6,7] based on the application of twist functions [8]. The extension of these ideas to field theory and possible implications for Noether symmetry are discussed in [9,10,11]. An attempt to extend such notions to supersymmetry has been made in [12,13,14,15]. Recently, the deformed Poincaré generators for Lie-algebraic θ (rather than a constant θ) [16] and Snyder [17] noncommutativity [18] have also been analysed.
In this paper we develop an algebraic method for analysing the deformed relativistic and nonrelativistic symmetries in noncommutative spaces with a constant noncommutativity parameter. By requiring the twin conditions of consistency with the noncommutative space and closure of the Lie algebra, we obtain deformed generators with arbitrary free parameters. For conformal-Poincaré symmetries we show that a specific choice of these parameters yields the undeformed algebra, although the generators are still deformed. For the nonrelativistic (Schrödinger [19,20,21]) case two possibilities are discussed for introducing the free parameters. In one of these there is no choice of the parameters that yields the undeformed algebra.
A differential-operator realisation of the deformed generators is given in the coordinate and momentum representations. The various expressions naturally contain the free parameters. For the particular choice of these parameters that yields the undeformed algebra, the deformations in the generators drop out completely in the momentum representation.
The modified comultiplication rules (in the coordinate representation) and the associated Hopf algebra are calculated. For the choice of parameters that leads to the undeformed algebra these rules agree with those obtained by an application of the abelian twist function on the primitive comultiplication rule. (For the conformal-Poincaré case this computation of modified coproduct rules using the twist function already exists in the literature [5,6,7,8], but a similar analysis for the nonrelativistic symmetries is new and presented here.) For other choices of the free parameters the deformations cannot be represented by twist functions. The possibility that there can be such deformations also arises in the context of κ-deformed symmetries [22].
Deformed conformal-Poincaré algebra
Here we analyse the deformations in the full conformal-Poincaré generators compatible with a canonical (constant) noncommutative spacetime. We find that there exists a one-parameter class of deformed special conformal generators that yields a closed algebra whose structure is completely new. A particular value of the parameter leads to the undeformed algebra.
We begin by presenting an algebraic approach whereby compatibility is achieved with noncommutative spacetime by the various Poincaré generators. This spacetime is characterised by the algebra
\[ [\hat{x}^{\mu}, \hat{x}^{\nu}] = \mathrm{i}\,\theta^{\mu\nu} . \qquad (1) \]
It follows that, for any spacetime transformation,
\[ [\delta\hat{x}^{\mu}, \hat{x}^{\nu}] + [\hat{x}^{\mu}, \delta\hat{x}^{\nu}] = 0 \qquad (2) \]
for constant θ. Translations, $\delta\hat{x}^{\mu} = a^{\mu}$, with constant $a^{\mu}$, are obviously compatible with (2). The generator of the transformation consistent with $\delta\hat{x}^{\mu} = \mathrm{i}\, a^{\sigma}[\hat{P}_{\sigma}, \hat{x}^{\mu}]$ is $\hat{P}_{\mu} = \hat{p}_{\mu}$. For a Lorentz transformation, $\delta\hat{x}^{\mu} = \omega^{\mu}{}_{\nu}\,\hat{x}^{\nu}$, $\omega^{\mu\nu} = -\omega^{\nu\mu}$, the requirement (2) implies $\omega^{\mu}{}_{\lambda}\theta^{\lambda\nu} - \omega^{\nu}{}_{\lambda}\theta^{\lambda\mu} = 0$, which is not satisfied except in two dimensions. Therefore, in general, the usual Lorentz transformation is not consistent with (2). A deformation of the Lorentz transformation is therefore mandatory. We consider the minimal O(θ) deformation, where $n_1$, $n_2$ and $n_3$ are coefficients to be determined by consistency arguments. The corresponding generator $\hat{J}_{\rho\sigma}$ (in which $\mu \leftrightarrow \nu$ denotes the preceding terms with µ and ν interchanged) is consistent with $\delta\hat{x}^{\mu} = -(\mathrm{i}/2)\,\omega^{\rho\sigma}[\hat{J}_{\rho\sigma}, \hat{x}^{\mu}]$ for $n_1 = n_2 + 1 = \lambda_1$, $n_3 = -\lambda_2$, a result which follows on using (1). It is therefore clear that $n_1 = n_2 = 0$ is not possible, which necessitates the modification of the transformation as well as the generator. The closure of the normal Lorentz algebra is obtained only for $\lambda_1 = \tfrac{1}{2}$ and $\lambda_2 = 0$ [4].

Similarly, the usual scale transformation, $\delta\hat{x}^{\mu} = \alpha\hat{x}^{\mu}$, is not consistent with (2). A minimally deformed form of the transformation is $\delta\hat{x}^{\mu} = \alpha\hat{x}^{\mu} + \alpha n\,\theta^{\mu\nu}\hat{p}_{\nu}$. The consistency condition $\delta\hat{x}^{\mu} = \mathrm{i}\,\alpha[\hat{D}, \hat{x}^{\mu}]$ is achieved only for $n = 1$ by $\hat{D} = \hat{x}^{\mu}\hat{p}_{\mu}$. Likewise, starting with the minimally deformed form of the special conformal transformation we find the corresponding generator. This completes our demonstration of the compatibility of the various transformation laws with the basic noncommutative algebra.

However, achieving consistency with the transformation and closure of the algebra are two different things. It turns out that the minimal O(θ) deformation, while preserving consistency, does not yield a closed algebra. Indeed we find that the $[\hat{K}_{\rho}, \hat{D}]$ algebra does not close, necessitating the inclusion of O(θ²) terms in the deformed transformation and the deformed generator. An appropriately deformed form involves six free parameters. However, the closure of the $[\hat{K}_{\rho}, \hat{D}]$ algebra fixes five parameters, $\eta_2 = -\eta_3 = -4\eta_4 = 1$, $\eta_5 = \eta_6 = 0$, leaving only one, $\eta_1$, as free. The final form of the deformed generators involves one free parameter. The algebra satisfied by the generators is such that the Poincaré sector remains unaffected, changing only the conformal sector: a one-parameter class of closed algebras is found. Fixing $\eta_1 = -\mathrm{i}$ yields the usual (undeformed) Lie algebra. In that case the deformed special conformal generator also agrees with the result given in [15].

The relations in (1) are easily reproduced by representing the noncommutative coordinates in terms of commutative variables. In this coordinate representation the generators are realised as differential operators; one may also choose the momentum representation, where $N = \delta^{\mu}{}_{\mu}$ is the number of spacetime dimensions. For $\eta_1 = -\mathrm{i}$, when the generators satisfy the undeformed algebra, the deformation in $\hat{K}_{\rho}$ drops out and all the generators in the momentum representation have exactly the same structure as in the commutative description.
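The paper's explicit deformed expressions are not reproduced in the extracted text above; for reference, a standard Bopp-type realisation that reproduces the canonical algebra (1) is sketched below. The sign and normalisation are a common convention and an assumption on our part, not necessarily the paper's exact parametrisation.
\[
\hat{x}^{\mu} = x^{\mu} - \tfrac{1}{2}\,\theta^{\mu\alpha}\,p_{\alpha}, \qquad \hat{p}_{\mu} = p_{\mu}, \qquad [x^{\mu}, p_{\nu}] = \mathrm{i}\,\delta^{\mu}_{\ \nu},
\]
so that
\[
[\hat{x}^{\mu}, \hat{x}^{\nu}] = -\tfrac{1}{2}\theta^{\nu\beta}[x^{\mu}, p_{\beta}] - \tfrac{1}{2}\theta^{\mu\alpha}[p_{\alpha}, x^{\nu}]
= \tfrac{\mathrm{i}}{2}\theta^{\mu\nu} + \tfrac{\mathrm{i}}{2}\theta^{\mu\nu} = \mathrm{i}\,\theta^{\mu\nu},
\]
while the translation generator retains its undeformed form $\hat{P}_{\mu} = p_{\mu}$.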
The deformed generators lead to new comultiplication rules. The coproduct rules for the Poincaré sector were derived earlier in [1,4,5,6] and for the conformal sector in [7,15]. The free parameter appearing in $\hat{K}_{\rho}$ does not appear explicitly in the coproduct $\Delta(\hat{K}_{\rho})$. Computing the basic Hopf algebra, it turns out that the Hopf algebra can be read off from (5) by just replacing the generators by the coproducts.
Deformed Schrödinger and conformal-Galilean algebras
Now we consider separately the Schrödinger symmetry and the conformal-Galilean symmetry, both of which are extensions of the Galilean symmetry. The standard Schrödinger algebra is obtained by extending the Galilean algebra, which involves the Hamiltonian (H), translations (P_i), rotations (J_{ij}) and boosts (G_i), with the dilatation (D) and the expansion or special conformal transformation (K), realised in the standard free-particle representation. Now we introduce noncommutativity in space,
\[ [\hat{x}_i, \hat{x}_j] = \mathrm{i}\,\theta_{ij} . \]
Like the deformed conformal-Poincaré case, we follow a two-step algebraic process. First, by requiring the compatibility of transformations with the noncommutative space, a general deformation of the generators is obtained. A definite structure emerges after demanding the closure of the algebra. The linear momentum and the Hamiltonian retain their original forms because the algebra of $\hat{p}_i$ is identical to that of $p_i$. For the other generators we consider the minimal deformation. The final form of the generators leads to a nonstandard closure of the algebra, the other brackets remaining unaltered. Some comments are in order. We have obtained a deformed Schrödinger algebra involving two parameters, $\lambda_2$ and $\lambda_3$. For θ → 0, the deformed algebra reduces to the undeformed one. A distinctive feature is that there is no choice of the free parameters for which the standard (undeformed) algebra can be reproduced. This is an obvious and important difference from the Poincaré treatment.
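For orientation, one commonly used free-particle realisation of the Schrödinger generators is sketched below; the normalisations of D and K vary between papers, so the factors here are an assumption rather than the paper's own convention.
\[
H = \frac{p^2}{2m}, \qquad P_i = p_i, \qquad J_{ij} = x_i p_j - x_j p_i, \qquad G_i = t\,p_i - m\,x_i,
\]
\[
D = 2tH - \tfrac{1}{2}\left(x_i p_i + p_i x_i\right), \qquad K = t^2 H - \tfrac{t}{2}\left(x_i p_i + p_i x_i\right) + \tfrac{m}{2}\,x^2 ,
\]
which close on the Schrödinger algebra; for instance, $[H, K] = \mathrm{i}\,D$ with these conventions.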
It is, however, possible to obtain an alternative deformation which, for a particular choice of parameters, yields the undeformed algebra. We notice that the brackets involving all generators other than $\hat{K}$ reduce to the standard ones by fixing $\lambda_2 = 0$ and $\lambda_3 = \tfrac{1}{2}$, although the generators are deformed. So we allow O(θ²) terms in $\hat{K}$. Demanding the closure of the $[\hat{H}, \hat{K}]$ and $[\hat{D}, \hat{K}]$ brackets yields the O(θ²)-deformed $\hat{K}$. The brackets involving this $\hat{K}$ give us another deformed Schrödinger algebra involving three parameters, $\lambda_2$, $\lambda_3$ and $\lambda_6$. It is easily seen from (10) that the particular choice of parameters, $\lambda_2 = 0$ and $\lambda_3 = \lambda_6 = \tfrac{1}{2}$, reproduces the standard algebra. This agrees with [23]. From now on we shall restrict to the $\hat{K}$ given by (9) whenever expansions are considered.
In the coordinate representation, $N' = \delta_{ii}$ is the number of space dimensions. The momentum representation of $\hat{D}$ is $\hat{D} = \mathrm{i}\,p_i\,\partial/\partial p_i - t p^2/m$. The representation for $\hat{K}$ involves a deformation which, expectedly, drops out for $\lambda_6 = \tfrac{1}{2}$, the value that corresponds to the standard algebra. The comultiplication rules in the coordinate representation show that, among the free parameters $\lambda_2$, $\lambda_3$ and $\lambda_6$ appearing in the definition of the deformed generators, only the first two occur in the expressions for the deformed coproducts. The parameter $\lambda_6$, which is present in $\hat{K}$, does not occur in $\Delta(\hat{K})$. Expectedly, it turns out that the Hopf algebra can be directly read off from the algebra (Eqs. (8), etc.) by just replacing the generators by the coproducts.
There is an alternative method, based on quantum-group-theoretic arguments, of computing the coproducts [5,6,7]. This is obtained for the particular case when the deformed generators satisfy the undeformed algebra. In our analysis it corresponds to the choice $\lambda_2 = 0$, $\lambda_3 = \lambda_6 = \tfrac{1}{2}$. The essential ingredient is the application of the abelian twist function, acting as a similarity transformation on the primitive coproduct rule, to abstract the deformed rule. After some calculations it can be shown that the deformed coproduct rule (14), for example, for the specific values of the free parameters already stated, is obtained by an appropriate identification of the twist element. The coproducts for other generators can similarly be obtained from the same twist element.
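The explicit identification is not reproduced in the extracted text; for reference, the abelian (Moyal-type) twist usually employed in this context, and the way it deforms a primitive coproduct, are sketched below. The sign and normalisation conventions here are an assumption and may differ from the paper's.
\[
\mathcal{F} = \exp\!\Big(\tfrac{\mathrm{i}}{2}\,\theta^{ij}\, p_i \otimes p_j\Big), \qquad
\Delta_0(Y) = Y \otimes 1 + 1 \otimes Y, \qquad
\Delta_\theta(Y) = \mathcal{F}\,\Delta_0(Y)\,\mathcal{F}^{-1},
\]
so that translations keep their primitive coproduct (since $[\mathcal{F}, \Delta_0(p_k)] = 0$), while generators that do not commute with the momenta acquire θ-dependent terms.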
Strictly speaking, the algebra obtained by enlarging the Galilean algebra by including dilatations and expansions is not a conformal algebra, since it does not inherit some basic characteristics such as the vanishing of the mass and the equality of the number of translations and special conformal transformations. However, since it is a symmetry of the Schrödinger equation, this enlargement of the Galilean algebra is appropriately referred to as the Schrödinger algebra. It is possible to discuss the conformal extension of the Galilean algebra by means of a nonrelativistic contraction of the relativistic conformal-Poincaré algebra. Recently this was discussed for the particular case of three dimensions [24]. This algebra is different from the Schrödinger algebra discussed earlier. We scale the generators and the noncommutativity parameter as $\hat{D} = D$, $\hat{K}_{\rho} = (\hat{K}_0, \hat{K}_i) = (cK, c^2 K_i)$, $\hat{P}_{\mu} = (\hat{P}_0, \hat{P}_i) = (H/c, P_i)$, where c is the velocity of light. We use this scaling in (5) and take the limit c → ∞.
Finally, we redefine the symbols so that the nonrelativistic generators carry the same notation ($D \to \hat{D}$, etc.). Then we get the deformed algebra, which also contains a free parameter. Restricting to three dimensions and making the specific choice $\eta_1 = -\mathrm{i}$ reproduces the results obtained recently in [24].
Conclusions
We have analysed the deformed conformal-Poincaré, Schrödinger and conformal-Galilean symmetries compatible with the canonical (constant) noncommutative spacetime and found new algebraic structures. For the conformal-Poincaré case we found a one-parameter class of deformed special conformal generators that yielded a closed algebra whose structure was completely new. Fixing the arbitrary parameter reproduced the usual (undeformed) Lie algebra.
Next we considered the Schrödinger symmetry. Here we obtained the deformed Schrödinger algebra involving two parameters. The closure of this algebra yielded new structures. The generators involved O(θ) deformations. For θ → 0, the deformed algebra reduced to the undeformed one. However, a distinctive feature was that there was no choice of the free parameters for which the standard (undeformed) algebra could be reproduced. Exploring other possibilities, then we obtained an alternative deformation which, for a particular choice of parameters, indeed reproduced the undeformed algebra. In this case the modified special conformal generator involved O(θ 2 ) terms. The deformed Schrödinger algebra now involved three parameters, a particular choice of which reproduced the standard algebra.
Finally we discussed the conformal extension of the Galilean algebra by means of a nonrelativistic contraction of the relativistic conformal-Poincaré algebra. This algebra is different from the Schrödinger algebra, both in the commutative and noncommutative descriptions. The present analysis can be extended to other (nonconstant) types of noncommutativity. Some results in this direction have already been provided for the Snyder space [18]. | 2019-04-14T02:54:35.123Z | 2006-04-23T00:00:00.000 | {
"year": 2006,
"sha1": "dd0a816636fe04135b6fff976daab9c1c3b12c3f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/0604162",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "dd0a816636fe04135b6fff976daab9c1c3b12c3f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
249200227 | pes2o/s2orc | v3-fos-license | Anti-Müllerian hormone as a diagnostic tool to identify queens with ovarian remnant syndrome
Objectives Ovarian remnant syndrome (ORS) is suspected when heat signs occur in spayed individuals, but further diagnostic procedures are necessary to exclude other possible oestrogen sources, such as the adrenal gland or exogenous supplementation. Anti-Müllerian hormone (AMH), secreted by granulosa cells or Sertoli cells, serves to differentiate sexually intact from gonadectomised animals and has been described in dogs as a tool for diagnosing ORS. The aim of this study was to evaluate if AMH determination can be used to diagnose ORS in cats. Methods AMH was measured with a chemiluminescence immunoassay in serum samples of 15 sexually intact, 9 spayed and 16 cats with a history of heat signs after spaying. Abdominal ultrasound (n = 13), vaginal smears (n = 7), progesterone measurement (n = 5) and laparotomy (n = 14) were used to determine the presence of ovarian tissue. After surgery, a histological examination of the obtained tissue was performed in the cats with suspected ORS. Results In 15 cats with ORS the AMH serum concentrations were significantly higher than in spayed cats (n = 10; P = 0.025) and significantly lower than in sexually intact cats (n = 15; P = 0.001). Among the cats with ORS, the highest AMH serum concentrations were measured in the queens with cystic ovarian alterations and in one cat from which a whole ovary was obtained. The cat with the lowest AMH serum concentration had a simultaneous high progesterone serum concentration. Cats with ORS did not show any heat signs after surgical removal of the ovarian tissue. Conclusions and relevance A single determination of AMH in blood serum is a useful diagnostic tool for the diagnosis of ORS in cats, regardless of the hormonal activity of the remnant ovarian tissue.
Introduction
Ovarian remnant syndrome (ORS) in cats is a consequence of the incomplete removal of the ovaries during elective spaying. [1][2][3][4][5] Although frequently discussed, there is little evidence that congenital ectopic ovarian tissue occurs in any domestic animal species. 6 Affected animals usually develop heat signs up to several months or years after spaying. Complications such as uterine stump pyometra, ovarian tumours or even hyperandrogenism seem to be rare in cats and have only been documented in isolated case reports. 2,[7][8][9][10] Several diagnostic methods have been described to verify the presence of ovarian tissue. In addition to heat signs, an exfoliative vaginal cytology containing a high number of superficial cells or an increased serum oestradiol concentration may indicate oestrogen-secreting ovarian tissue. However, administration of exogenous oestrogen has to be excluded. 1 After spontaneous or induced ovulation an elevated serum progesterone concentration confirms luteal tissue. 11,12 When no heat signs are present during the time of examination, an oestrogen stimulation test or luteinising hormone (LH) test has been described. 13,14 However, most of these diagnostic approaches have been solely tested in sexually intact queens; therefore, it can only be assumed that cats with ORS show comparable hormonal changes. Abdominal ultrasonography can detect remnant ovarian tissue, especially when follicles or corpora lutea are present. 5 The preferred treatment is the surgical removal of the remnant ovarian tissue via laparotomy, although laparoscopic approaches have been described. 4,5,15 In most cases, the remnant ovarian tissue is located at one or both of the ovarian pedicles, but ovarian tissue displaced to other locations inside the abdominal cavity during spaying has the potential to revascularise and resume its hormonal activity. 9,16 Serum anti-Müllerian hormone (AMH), secreted from Sertoli cells in males and granulosa cells in females, helps to distinguish sexually intact dogs and cats from gonadectomised individuals. [17][18][19] Furthermore, the usefulness of serum AMH concentration to diagnose ORS in female dogs has been described. 20,21 AMH determination as a diagnostic tool to identify cats with remnant ovarian tissue has only been used in a few cases so far. 16,18 The aim of this study was to determine the use of serum AMH to identify ORS in cats alone or in combination with other diagnostic approaches at different stages of hormonal activity of the remnant ovarian tissue.
Animals
This study included 15 cats, shown in Table 1, that were examined from January 2017 to October 2021, and were presented because of recurring oestrous behaviour after spaying, which took place between 1 and 69 months before presentation. In total, 13 of these cats were presented to our clinic and two to a private practitioner.
All cats underwent a general clinical examination. An abdominal ultrasound was performed in 13 cats to examine whether remnant ovarian tissue was present behind the kidneys at the position of the ovaries (excluding cats 4 and 13). A vaginal swab was obtained and stained (DiffQuik, RAL Diagnostics) in seven cats (cats 2, 3, 8, 9, 11, 14 and 15) and evaluated. We determined the amount and type of epithelial cells, the quality of the background, the presence of other cells such as neutrophils or erythrocytes and the presence of bacteria. In three cats (cats 2, 5 and 14) serum progesterone levels were determined at first clinical presentation and in a further two cats (cats 3 and 8) serum progesterone levels were determined 6 days after injection of 0.5 ml of human chorionic gonadotropin (hCG) (Ovogest 300 IE/ml; MSD Tiergesundheitsdienst) intramuscularly. With the exception of cat 7, all cats underwent a laparotomy under general anaesthesia and a histopathological examination of the removed tissue was performed afterwards. For general anaesthesia at our clinic, the cats received premedication including diazepam and ketamine for sedation and methadone for analgesia, and were induced with either propofol or alfaxalone. After intubation, anaesthesia was maintained with isoflurane. For laparotomy, a midline incision from behind the umbilicus to the height of the last pair of mammary glands was performed, and the areas behind both kidneys, as well as the remnant uterus, were investigated, and all ovarian-like structures were removed for pathohistological examination. For postoperative analgesia, the cats were treated with meloxicam for 3 days.
AMH concentrations were also determined in 15 sexually intact cats presented for elective spaying or gynaecological examination and in 10 previously spayed cats presented because of orthopaedic issues in nine cases and in one case because of heat signs and cystic endometrial hyperplasia of the uterus after exogenous oestrogen administration. The group of sexually intact cats consisted of 13 European Shorthair, one Norwegian Forest Cat and one Birman cat. These cats were aged 6-48 months and had a body weight in the range of 2.3-6.1 kg. The group of previously spayed cats included seven European Shorthair, two Maine Coon and one Norwegian Forest Cat. These cats were aged 24-164 months and their body weight was in the range of 2.5-5.0 kg. Measurement of the AMH concentration was performed in all cats in serum samples collected for preanaesthesia blood testing or determination of the hormonal status.
Ethical approval and informed consent AMH determination in the cats that were not presented because of a suspected ORS or breeding soundness examination was conducted under the stipulations of the German Protection of Animals Act (reference number 55.2-1-54-2532-111-2016 from the Bavarian Government). All other examinations were carried out during routine diagnostics. All owners provided signed consent for the collection of data for the purpose of treatment and care of animals, as well as for research.
Hormonal analysis
AMH measurements were performed at a commercial laboratory (Laboklin). AMH serum concentrations were determined using a chemiluminescence immunoassay on a Cobas E602 analyser (Roche) using murine anti-AMH antibodies. The AMH test was validated for cats (intra-assay 1.8%; inter-assay 7.4%). Recovery of human AMH standard added to feline plasma showed changes in optical density parallel to the AMH standard curve. The minimum detection limit of the AMH test was 0.01 ng/ml and the maximum detection limit was 23 ng/ml. Progesterone was measured with an automated enzyme-linked fluorescent assay (MiniVidas; Biomerieux). Concentrations below 2 ng/ml were interpreted as baseline; concentrations above 2 ng/ml confirmed active luteal tissue.
Histopathological examination
The removed tissue was measured and inspected grossly in detail with a focus on size, cut surface and colour. It was cut in slices and representative sites were embedded in paraffin according to standard procedures, sectioned at 3-4 μm and stained with haematoxylin and eosin (H&E).
Statistical analysis
The statistical analysis was carried out with IBM SPSS 26.0 software. The data were checked for normal distribution using the Kolmogorov-Smirnov test. As the data were not normally distributed, the non-parametric Kruskal-Wallis test with post hoc Bonferroni adjustment was used for group comparison. The data were visualised using a dot plot with an overlying box plot. To describe the distributions, the mean, standard deviation, median and range of the metric parameters were determined. The level of significance was P <0.05.
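The following is a minimal sketch, not taken from the study, of how such a three-group comparison with Bonferroni-adjusted pairwise testing is commonly implemented; the variable names, example values and use of SciPy are assumptions, and the original analysis was performed in SPSS.

from itertools import combinations
from scipy import stats

# Hypothetical AMH values (ng/ml) for the three groups
groups = {
    "intact": [4.2, 6.1, 3.8, 5.5, 7.0],
    "ORS": [0.3, 1.2, 0.8, 2.0, 0.5],
    "spayed": [0.01, 0.01, 0.01, 0.02, 0.01],
}

# Overall comparison: Kruskal-Wallis H test across all three groups
h_stat, p_overall = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_overall:.4f}")

# Post hoc pairwise Mann-Whitney U tests with Bonferroni adjustment
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    p_adj = min(p * len(pairs), 1.0)  # Bonferroni correction
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")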
Results
Ultrasound examination confirmed the suspicion of remnant ovarian tissue in 11/13 cats (cats 1-3, 5-12, 14 and 15) ( Table 1). In all of the vaginal swabs (cats 2, 3, 8, 9, 11, 14 and 15) the presence of superficial cells demonstrated a clinically relevant secretion of oestrogen. In one queen, the progesterone concentration was above 2 ng/ml at the time of first presentation (cat 14). The induction of ovulation in two cats resulted in a progesterone concentration above 2 ng/ml 6 days after treatment (cats 3 and 8).
The histopathological examination of the excised tissue (n = 14) confirmed that there was, in fact, remnant ovarian tissue. In 10 cases the ovarian tissue was located on the left side, in one queen on the right and in another one on both sides. In the two cases where cats underwent laparotomy outside the clinic, the exact location of ovarian tissue was not registered.
In 8/14 cats, the remnant ovarian tissue contained corpora lutea and follicles in different stages (cats 8-15). In cat 2 (Table 1) a whole ovary with corpora lutea in regression and small follicles was obtained. The ovarian tissue of the remaining five cats had diverse cystic alterations (cats 1 and 3-6). The results of the AMH determination in ORS cats (n = 15), ovariectomised cats (n = 10) and intact cats (n = 15) are shown in Table 2. All of the completely ovariectomised cats had an AMH concentration below the lower limit of the test (⩽0.01 ng/ml) (Figure 1). The mean AMH concentrations of the cats with ORS was significantly higher compared with the spayed cats (P = 0.025) and significantly lower than sexually intact cats (P = 0.001). Consequently, the mean AMH concentration of the sexually intact cats was significantly higher than that of the spayed cats (P <0.001). There was no overlapping of the AMH concentrations of the ORS cats with the sexually intact cats and with the ovariectomised cats.
Discussion
ORS is well known in cats as a result of incomplete removal of the ovaries during elective spaying, but the clinical diagnosis can be challenging. 1,2,4 The suspicion for ORS arises when a previously spayed cat is presented with heat signs and the vaginal swab and/or serum oestrogen measurement confirms a clinically relevant oestrogen secretion. While most animals with ORS show signs of heat shortly after the initial spaying, the interval between spaying and heat signs can be as long as 10 years. 5 In our study, 9/15 queens were presented within the first 6 months after spaying, and in one cat, the first heat signs occurred more than 5 years after spaying. It must be emphasised that heat signs in spayed cats can also be induced by exogenous oestrogens. ORS should not be based on oestrogen-induced alterations alone. One of the cats in the spayed group of this study was presented because of heat signs after spaying. Ultrasonography revealed signs of a cystic endometrial hyperplasia, but remnant ovarian tissue was not detected. The histopathological examination confirmed the ultrasonographic findings in the cat. In this case, the owner had used an oestrogen spray to reduce the side effects of her menopause. Exogenous oestrogens have the potential to induce behavioural and clinical signs of oestrogen up to and including alopecia in dogs. 15,22 This case indicates that this is also possible in cats, and owners should use oestrogen-containing sprays or creams carefully and only on body parts that are not exposed to the animals. Further, oestrous-like behaviour has been associated with a hormonal active adrenocortical carcinoma in a spayed cat. 23 Thus, the value of diagnostic approaches such as heat signs, vaginal smears or serum oestrogen determination depends on the definite exclusion of exogenous oestrogen application and endogenous extraovarian oestrogen sources.
An additional approach to diagnose ORS in queens with heat signs is the measurement of progesterone several days after induction of ovulation with hCG 11 or gonadotropin-releasing hormone (GnRH). The efficacy of GnRH for this purpose has not been formally established in cats with ORS; however, clinical use has demonstrated its practicality. 1,2 When the queen is not under the influence of oestrogen at the time of presentation, oestradiol measurement after GnRH stimulation or a semi-quantitative quick test for LH have been described to verify the presence of ovaries, but these studies were conducted with intact and ovariectomised cats only. 13,14 The ultrasonographic visualisation of the remnant ovarian tissue was successful in 13/15 cats in this study. Ultrasonography seems to be a valuable clinical method for the diagnosis of ORS, especially when combined with behavioural signs of heat or vaginal smears containing high amounts of superficial cells, as described before. 15 However, ORS should not be excluded when remnant ovarian tissue cannot be visualised during an ultrasound examination, because the reliability of an ORS diagnosis with ultrasound depends on several factors: the equipment and the experience of the veterinarian; the hormonal activity of the ovarian tissue (eg, the formation of follicles, corpora lutea, cysts or tumours); and the size of the remnant ovarian tissue. 5 It has been shown in dogs that AMH can be a useful tool to differentiate sexually intact bitches from spayed animals. 20,21 This study shows that AMH is also useful in identifying ORS in queens. To our knowledge, only four cases of AMH determination in cats with ORS have been described so far. 16,18 In three of these cases, the AMH serum levels in ORS cats were intermediate between the levels of spayed and intact individuals, and in one case the AMH concentration was not reported. In the present study, the AMH concentration differed significantly between completely spayed cats and cats with ORS, as well as between sexually intact individuals and ORS cats. The highest AMH concentrations were measured in the cats with cystic ovarian alterations (cats 1 and 3-6) and in cat 2, which had a whole ovary left. In women, AMH has been described as a marker for polycystic ovarian syndrome, with the highest sensitivity for anovulatory polycystic ovaries. 24 In contrast, in dogs 25 and cows, 26 cystic ovarian alteration seems to have no impact on the AMH concentration. In cat 3, a luteal cyst combined with a high AMH concentration was found; however, it is unlikely that this type of cyst is the source of the elevated AMH concentration, because AMH originates from the granulosa cells of follicles. Further research is needed to examine a possible correlation between ovarian cystic alteration and serum AMH concentration in the cat.
The two cats with the lowest AMH concentrations (cats 14 and 15) had active luteal tissue. In bitches with ORS, it has been described that AMH may be low when the remaining ovarian tissue contains mostly corpora lutea. 21 This suggests that the AMH concentration may also be low in cats when the remnant ovarian tissue mainly consists of luteal tissue, as described in dogs. Therefore, an additional progesterone measurement may be helpful in the diagnostic work-up of ORS, but further research is needed.
Every diagnostic approach should be as low stress as possible, and the number of examinations, including blood sampling and drug administration, should be kept to a minimum. A possible scheme to diagnose ORS in cats, including AMH measurement, is shown in Figure 2. Initial presentation during heat appears advisable. At this time, oestrogen-induced changes can be seen in the gynaecological examination and follicle-like structures may be identified more easily during ultrasonography. In addition, induction of ovulation can be performed during heat, and determination of progesterone several days later leads to a definite diagnosis. Finally, this study shows that a single blood sample and AMH determination helps to identify ORS in cats during heat as well as at any other time.
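To make the stepwise logic of such a scheme explicit, the sketch below encodes one possible reading of the decision flow in Python. The numerical thresholds (the AMH assay lower limit of 0.01 ng/ml and a progesterone concentration above 2 ng/ml after induced ovulation) are taken from the values reported above; the ordering of the steps and the function itself are illustrative assumptions, not a reproduction of Figure 2.

```python
def ors_workup(amh_ng_ml, superficial_cells_on_smear=False,
               progesterone_ng_ml_after_induction=None,
               amh_assay_limit=0.01, progesterone_cutoff=2.0):
    """Illustrative decision flow for a previously spayed queen with heat signs.

    Thresholds follow the values reported in the text; the exact ordering of
    the steps in the published scheme (Figure 2) may differ.
    """
    # A single AMH determination works during heat or at any other time:
    # measurable AMH supports the presence of remnant ovarian tissue.
    if amh_ng_ml > amh_assay_limit:
        return "ORS likely (measurable AMH); plan ultrasound and surgical removal"
    # During heat, a smear dominated by superficial cells indicates clinically
    # relevant oestrogen secretion, which may also be of exogenous origin.
    if superficial_cells_on_smear:
        if (progesterone_ng_ml_after_induction is not None
                and progesterone_ng_ml_after_induction > progesterone_cutoff):
            return "ORS confirmed (luteal response after induced ovulation)"
        return "Oestrogen effect present: exclude exogenous sources, induce ovulation"
    return "ORS not supported by these findings"


print(ors_workup(amh_ng_ml=0.35))
print(ors_workup(amh_ng_ml=0.01, superficial_cells_on_smear=True,
                 progesterone_ng_ml_after_induction=4.8))
```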
The treatment of choice for ORS is the surgical removal of the remnant ovarian tissue. Instead of conventional laparotomy, which requires a considerably longer surgical incision than elective ovariectomy, laparoscopic treatment of ORS has also been described. 15 However, abdominal adhesions and pathological enlargement of the ovarian tissue may complicate a laparoscopic procedure. In addition, hysterectomy cannot be performed via laparoscopy without enlarging the incision.
In contrast to dogs, in which remnant ovarian tissue is most often found on the right side, 5 there appears to be no preferred location in cats, which supports the findings of another report. 4 In this study, the remnant ovarian tissue was found on the left side in 10 cases, on the right side in one case and on both sides in one case. It has been described that remnant ovarian tissue can revascularise in other abdominal locations such as the omentum or the peritoneum. 9 In rare cases, ORS can be the result of anatomical anomalies. In cat 2, a uterine horn aplasia combined with renal agenesis was found on the left side. This is a previously described congenital abnormality with a likely predisposition in the Ragdoll, 27 but it has also been described in a domestic shorthair. 28 A genetic influence seems possible because the occurrence has been described in littermates. 29 Uterus unicornis may occur with or without renal agenesis, but all of the reported animals had two ovaries, 30 and affected animals can become pregnant. 29 The left ovary of the cat in this study was without an ovarian bursa or an oviduct and had a subjectively longer, thinner and flatter appearance than normal ovaries. Furthermore, the ovary seemed to be in a more cranial position and was more strongly attached to the dorsal peritoneum than the ovaries of a normally developed genital tract.
Female dogs with ORS seem to be predisposed to ovarian alterations, primarily granulosa cell tumours. 31,32 This appears to be a rare finding in cats. Case reports have described a luteoma 9 and a granulosa cell tumour. 2 A thecoma combined with behavioural signs of hyperandrogenism has also been described. 8,10 None of the cats in this study had ovarian pathologies other than cystic alterations, and all of them showed typical oestrogen-induced heat signs at the time of presentation or as reported by the owner.
Conclusions
A single serum AMH determination is a useful diagnostic tool to identify cats with ORS, independent of the hormonal activity of the remnant ovarian tissue. Furthermore, the serum AMH concentration enables the differentiation between intact queens and cats with ORS, which can be helpful in individuals with an unknown history. | 2022-06-01T06:26:10.261Z | 2022-05-30T00:00:00.000 | {
"year": 2022,
"sha1": "5b01ccfcba5f5a818d38cbfb2b3d96b9787b86da",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1098612X221099195",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "956f08646d8fb318dbbbf8114ae2a13a51fc3f0b",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
208158378 | pes2o/s2orc | v3-fos-license | Magneto-structural correlations in a systematically disordered B2 lattice
Ferromagnetism in certain B2 ordered alloys such as Fe$_{60}$Al$_{40}$ can be switched on, and tuned, via antisite disordering of the atomic arrangement. The disordering is accompanied by a $\sim$1 % increase in the lattice parameter. Here we performed a systematic disordering of B2 Fe$_{60}$Al$_{40}$ thin films, and obtained correlations between the order parameter ($S$), lattice parameter ($a_0$), and the induced saturation magnetization ($M_{s}$). As the lattice is gradually disordered, a critical point occurs at 1-$S$=0.6 and $a_0$=291 pm, where a sharp increase of the $M_{s}$ is observed. DFT calculations suggest that below the critical point the system behaves magnetically as if it were still fully ordered, whereas above it, it is largely the increase of $a_0$ in the disordered state that determines the $M_{s}$. The insights obtained here can be useful for achieving tailored magnetic properties in alloys through disordering.
Controlled disordering of the crystal lattice can unlock potential to tune the properties of magnetic materials. Intrinsic material properties such as the saturation magnetization (M s ) can be highly sensitive to the ordering of atoms in alloy lattices [1][2][3][4][5][6][7], manifesting a wide range for M s -tuning. However, a precise understanding of the mechanism of increasing M s with decreasing structural order has been elusive, since varying the ordering also causes changes of other structural properties. This makes it difficult to experimentally associate the changes of the magnetic properties to various changes in structural properties.
The understanding of disorder-induced effects can be approached in prototype alloys that respond sensitively to disordering. Examples of this behavior are certain binary alloys which order in the B2 structure and transform from para- or antiferromagnets to ferromagnets via small atomic rearrangements [1][2][3][4][5][6][7].
To investigate the role of strain, recent studies have applied mechanical deformation to induce disorder [3,9]. Observations of the M s vs. lattice expansion and M s vs. disorder relationships under mechanical stress-induced disordering have been reported [3,9,29,30], however without consensus; the induced M s has been attributed purely to a disordering effect [18][19][20], a view contradicted by claims of an M s contribution from the lattice expansion [21,25].
Disorder caused by mechanical deformation tends to be concentrated at the strained regions, and can be spatially inhomogeneous and difficult to characterize. A more direct way to induce atomic rearrangements is via ion-irradiation of thin films. Knock-on collisions with energetic ions can displace atoms from their ordered lattice sites, followed by a thermally-driven stochastic vacancy recombination leading to the formation of antisite defects. The mass of the penetrating ions, energy of the ions as well as temperature determine the chemical disorder manifested by the irradiation process -all of which can be exploited to subtly vary the induced disorder. This direct-disordering approach can be used to tailor the order-disorder transition in fine steps while keeping the composition fixed.
Here we show that the magnetic behavior of systematically disordered B2 Fe 60 Al 40 falls into three distinct regimes; despite the monotonic increase of a 0 with chemical disordering, the film remains largely paramagnetic below a critical value of disorder, whereas above the critical regime it becomes ferromagnetic and M s is largely constant. The two regimes are separated by a third one, showing a critical M s increase.
Polycrystalline Fe 60 Al 40 films with a thickness of 250 nm were deposited by single-target magnetron sputtering under a 3·10 −3 mbar Ar atmosphere on Si(001) buffered with 250 nm thick SiO 2 . The use of thin films allows the whole film volume to be chemically disordered by ions and subsequently probed by X-ray and magnetic measurements. Post-annealing at 773 K for 1 hr was performed in vacuum to obtain B2 Fe 60 Al 40 . To achieve a systematic characterization of structure-property relationships, ion-irradiation of the above B2 ordered films was performed under a wide variety of conditions. The variable parameters were the ion species (H + , He + , and Ne + ), ion energy (17 -170 keV), ion fluence (up to 4·10 17 ions/cm 2 ) and sample temperature during irradiation (100 -523 K). These parameters were selected based on Monte Carlo-type simulations implementing the binary collision approximation [31], to achieve a peak average displacement per atom between 0.07 and 5.77 (for details see Supplement [32]).
Figures 1b-d show the structural and magnetic analysis of selected samples after three different treatments: the B2 film and films Ne + irradiated with 9·10 13 and 6·10 14 ions/cm 2 , leading to the fully ordered, intermediate, and disordered states, respectively. The lattice parameter (a 0 ) and order parameter S were estimated using X-ray diffraction, where the shift of the (110) fundamental peak (FP) is a measure of a 0 , and the integral intensity of the (100) superstructure peak (SSP) is directly dependent on the ordering (see supplement). In Figure 1, the XRD peaks of the fully ordered, intermediate and disordered thin films correspond to order parameters of S=0.8, 0.4, and <0.2 (Figure 1b), and lattice parameters of a 0 =2.89Å, 2.91Å, and 2.93Å (Figure 1c), respectively. Correspondingly, the M s increases from 10 kA/m for the fully ordered structure, to 180 kA/m for intermediate order and ∼500 kA/m for the fully disordered state (Figure 1d). All measurements have been performed at room temperature.
The order parameter S can be estimated from the square root of the ratio of the integrated measured intensities of the SSP, I SSP , and of the FP, I FP , with respect to the theoretically calculated values of the ordered B2 Fe 60 Al 40 structure [33]: S = [(I SSP /I FP ) measured /(I SSP /I FP ) calculated ] 1/2 , where S is dimensionless and 0 for a fully disordered A2 structure, and, due to the off-equiatomic composition, can reach only a maximum of ∼0.8 for the best ordered case [34]. Since ion-irradiation gradually disorders the film, a more convenient term is the disorder parameter, defined as 1-S, which will be used in the discussion. The order parameter was estimated from the low order Bragg reflections. Considering the background of the XRD measurement and the peak broadening due to variations of microstrain and crystallite size, an estimated disorder of up to 1-S ≈ 0.8 is detectable (Supplement).
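As a worked illustration of this relation, the short snippet below evaluates S from integrated superstructure- and fundamental-peak intensities, normalising the measured ratio to the ratio calculated for the perfectly ordered B2 structure. The numerical intensities are placeholders chosen only to give a value near the maximum S ≈ 0.8; they are not data from this study.

```python
import numpy as np

def order_parameter(i_ssp_meas, i_fp_meas, i_ssp_calc, i_fp_calc):
    """S = sqrt[(I_SSP/I_FP)_measured / (I_SSP/I_FP)_calculated] for B2 order."""
    return np.sqrt((i_ssp_meas / i_fp_meas) / (i_ssp_calc / i_fp_calc))

# Placeholder intensities (arbitrary units); the calculated ratio refers to
# the theoretical, fully ordered B2 Fe60Al40 structure.
S = order_parameter(i_ssp_meas=120.0, i_fp_meas=4000.0,
                    i_ssp_calc=190.0, i_fp_calc=4000.0)
print(f"S = {S:.2f}, disorder parameter 1-S = {1 - S:.2f}")
```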
We evaluate the inter-dependencies of the structural and magnetic properties, namely, 1-S, a 0 and M s . The a 0 and M s are plotted as functions of 1-S in Figure 2a and b respectively, whereas the M s (a 0 ) is shown in Figure 2c. Despite the vast variety of conditions applied in the experiments, the relationship between M s , a 0 and 1-S collapses into a single curve, as shown in Figure 2d.
As seen in Figure 2a, a 0 increases monotonically with disorder, from a 0 = 2.89Å for the fully ordered films to 2.91Å for 1-S = 0.6. Further ion-irradiation results in a vanishing SSP, implying a 1-S > 0.8. Intermediate values between 1-S = 0.6 and 0.8 were not observed under any of the attempted experimental conditions. Thin films with 1-S > 0.8 are considered nominally fully disordered.
Here, a 0 varies over a range from 2.91Å to 2.935Å.
The M s for fully ordered and thus largely paramagnetic films is initially constant (10 kA/m) with increasing disorder (Figure 2b), until a sharp onset close to 1-S = 0.6 appears, after which M s increases sharply to 300 kA/m. The two regimes of different, but nearly stable, M s are demarcated by a dotted line passing through the critical values of 1-S = 0.6 and a 0 = 2.91Å (Figure 2). Even as M s and 1-S reach their limits above the critical point, a 0 can be further increased until reaching its maximum; this can be explained by the further disordering of residual short-range ordered B2 regions, since the irradiation-induced lattice expansion due to vacancies can be neglected [15].
We compare our results with previous approaches using different methods of mechanical stress-induced disordering of bulk-like B2 Fe 60 Al 40 , i.e. ball-milled powder [3,4] and almost uniaxially compressed bulk samples [9]. As seen in Figure 2b, below and above 1-S = 0.6 there is agreement between the mechanical stress-induced and direct-disordering approaches. However, at 1-S = 0.6 no critical behavior is observed for the mechanical stress-induced approaches. Possible inhomogeneities within the sample volume of the mechanically disordered material, especially for ball-milling, may result in a smoothed 1-S behavior.
In general, intermediate states of disorder ranging from 1-S = 0.6 to fully disordered (1-S = 0.8 to 1) have not been observed. This is true for mechanically disordered bulk samples in the literature as well as for the present irradiation-disordered films. The present investigation of the region around the critical point reveals that, whereas a 0 varies monotonically with 1-S, the M s vs. 1-S curve shows an unambiguous critical increase (Figure 2b).
Kulikov et al. [36] applied the tight-binding linear muffin-tin orbital (TB LMTO) approach to B2 Fe 50 Al 50 and obtained a moment (µ F e ) of 0.76 µ B on the Fe atom, and an equilibrium a 0 ≈ 2.86Å. The calculation also yielded a linear increase of µ F e with increasing a 0 . The calculation, however, does not reproduce the increasing a 0 with disorder, seen clearly in Figure 2a, as well as in other works [3,4,9,14,26,38,39]. An increase in the number of Fe-Fe nearest neighbors at the antisite causes an increase in the occupancy of the d band. The increase of the spin-polarization at the disorder site due to electron filling is known from Kulikov et al.'s rigid band picture [36]. The perturbation at the antisite is associated with an increased a 0 as well as Friedel oscillations that cause a further increase of µ F e of Fe atoms that are a few atomic spacings away from the antisite. The rigid band picture is consistent with the monotonic variation of a 0 and 1-S, while µ F e remains at its minimum (Figure 2a). Below the critical point, the system behaves paramagnetically, as if it were still B2 ordered, as seen experimentally in Figure 2c.
The regime observed above the critical point cannot be explained by the rigid band picture. Here the effect of the lattice expansion on the DOS must be considered. Apiñaniz et al. [23][24][25] applied the TB LMTO method to both B2 and A2 structures and showed an increased a 0 with disordering; the calculated equilibrium a 0 for the B2 and A2 structures are 2.84 and 2.89Å respectively, with µ F e of 0.64 and 1.7 µ B respectively. Furthermore, a critical behavior of µ F e with increasing a 0 is predicted, whereby µ F e in B2 Fe 50 Al 50 rises sharply from zero to 0.5 µ B as a 0 expands above 2.78Å, even in the absence of disorder. Whereas the calculated critical dependence of µ F e on a 0 is inconsistent with the results of mechanical stress-induced processes in the literature, it does bear resemblance to the observations on ion-irradiated films shown here.
The prediction that the B2 structure can undergo a transition to a ferromagnetic state above a critical a 0 , even without disorder, can prove useful in explaining the current experimental observations. We explore this aspect by first performing DFT calculations on the relevant composition, i.e. B2 Fe 60 Al 40 . First-principles density functional theory (DFT) calculations were performed using the fully relativistic Korringa-Kohn-Rostoker (KKR) formalism with the SPRKKR package [40]. The Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional was used within the generalized gradient approximation. Configurational disorder was treated within the coherent potential approximation (CPA). The effect of increasing a 0 on the Fe moment for the B2 as well as the A2 structure is shown in Figure 3a. Whereas the A2 structure is FM throughout the investigated a 0 range, lattice expansion causes a ferromagnetic onset in the B2 structure at a 0 ≈ 2.87Å. According to the calculations, the equilibrium a 0 for the B2 structure lies close to 2.87Å, with E F located within a narrow pseudogap of ≈ 1 eV width, thereby rendering the spin-splitting highly sensitive to changes in the DOS that can be brought about by the increasing disorder and a 0 . Figures 3b and c show the effect of disorder and lattice expansion, respectively, on the DOS.
Disorder causes a smearing of the DOS which is expected due to scattering from antisites. As seen in Figure 3b, the consequent changes to the DOS at the E F are sufficient to cause a spin-splitting, where the partially disordered state of 1-S = 0.6 shows an Fe-moment of 1.25 µ B . Comparing the DOS for the partially disordered state to that of the fully disordered A2 structure, it is seen that the spin-splitting due to antisite-scattering saturates at 1-S = 0.6. This matches well with the experimentally observed critical point above which the M s becomes independent of the disordering (Figure 2b).
Similarly, Figure 3c considers the effect of lattice expansion on disorder-free B2 Fe 60 Al 40 . Below the critical a 0 , the close distances between the atoms can cause a smearing of the d bands. As the lattice expands, orbital hybridization is reduced and the peaks in the d band start to narrow. The location of E F in the vicinity of the narrowing d band peaks can, at a critical point and above, make spin-splitting energetically favorable. The onset of ferromagnetism in disorder-free B2 Fe 60 Al 40 is therefore due to the particular position of E F in the presence of narrowing d band peaks. Since µ F e increases with lattice expansion in both the B2 as well as the A2 structures (Figure 3a), the increasing µ F e caused by band narrowing appears to be valid for any given state of disorder.
From the above DOS considerations, it is seen that both antisite-scattering and band-narrowing favour an increased spin-splitting. The contribution of antisite-scattering to the spin-splitting saturates at 1-S = 0.6, whereas increasing a 0 tends to continuously increase the spin-splitting, both in the fully disordered as well as in the residual B2 ordered regions [41]. The initial sparse disordering of the B2 lattice leads to localized µ F e at antisites, manifesting an interplay between the disorder and a 0 . The strain induced by the lattice expansion of the disordered regions increases the average a 0 , thus modifying the DOS and causing spin-splitting throughout the lattice. The M s will follow a path bounded by the B2 and A2 lines, indicated by the arrow in Figure 3a.
The latter part of the transition, where M s is solely dependent on a 0 , has been addressed in previous studies, which arrived at the conclusion that the lattice expansion contributes about 35 % of the induced µ F e [21]. However, as we have seen in the above discussion, separating the respective contributions of disorder and lattice expansion is valid only in the regime above the critical point.
Unraveling the interplay between the disorder-induced moment and the lattice expansion, as well as the critical behavior, sheds light on the magnetism of disordered systems, and can be applicable to a broad range of binary alloys. Our results show that controlled disordering of alloys can be a promising approach to sensitively engineer the DOS of alloys and achieve tailored functional properties.
We acknowledge the assistance of Andrea Scholz for structural analysis. We thank Johannes von Borany for useful discussions. Irradiation experiments were performed at the Ion Beam Center of the Helmholtz-Zentrum Dresden -Rossendorf. Funding from DFG Grants BA 5656/1-1 & WE 2623/14-1 is acknowledged. B.S. acknowledges financial support from Swedish Research Council and Swedish National Infrastructure for Computing for allocation of computing time under the project SNIC2017-12-53. | 2019-11-19T17:19:57.000Z | 2019-11-19T00:00:00.000 | {
"year": 2020,
"sha1": "a5e86703b1a54b91c27111cd92ef910d98cdd151",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1367-2630/ab944a",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "a5e86703b1a54b91c27111cd92ef910d98cdd151",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
257637088 | pes2o/s2orc | v3-fos-license | Reaction dynamics for the Cl($^2$P) + XCl $\to$ XCl + Cl($^2$P) (X = H, D, Mu) reaction on a high-fidelity ground state potential energy surface
Globally accurate full-dimensional ground state potential energy surfaces (PESs) for the Cl($^2$P) + HCl $\to$ HCl + Cl($^2$P) reaction, a prototypical heavy-light-heavy abstraction reaction, are developed using the permutation invariant polynomial neural network (PIP-NN) method and the embedded atom neural network (EANN) method, with the corresponding total root mean square errors (RMSEs) being only 0.043 and 0.056 kcal/mol, respectively. The saddle point of this reaction system is found to be nonlinear. A full-dimensional approximate quantum mechanical method, ring-polymer molecular dynamics (RPMD) with the Cayley propagator, is employed to calculate the thermal rate coefficients and kinetic isotope effects of the title reactions Cl($^2$P) + XCl $\to$ XCl + Cl($^2$P) (X = H, D, Mu) on both new PESs. The results reproduce the experimental results at high temperatures perfectly, but with moderate accuracy at lower temperatures. Similar kinetic behavior is supported by quantum dynamics wave packet calculations as well.
as 8.3 kcal/mol. Studies using quasi-classical trajectories (QCT) and quantum scattering dynamics found that the BCMR PES gave highly rotationally excited products. 7 Later, a pair of PESs based on ab initio calculations were released. The first one, from D. Truhlar's group, 1 was fitted by rotated-Morse-oscillator-spline (RMOS) functions based on ~5500 ab initio points calculated using the polarization configuration-interaction (POL-CI) method. 9 Although the TS from the POL-CI calculation was not collinear, it was set to be collinear on the PES, with a potential barrier of 10 kcal/mol. The second PES was based on restricted open-shell coupled cluster singles and doubles with perturbative triples (RCCSD-T) 10 and multireference configuration interaction (MRCI) 11,12 calculations for three electronic states, then fitted by a rotated-Morse cubic-spline function. The DCBKS PES contains three electronic states, and its TS is nonlinear. This nonlinear feature was attributed to the repulsion between the p-polarized orbitals on the two Cl atoms. However, the potential barrier of the TS was also scaled by a factor of 0.815 to match the rate coefficients calculated from QCT to the experimental values. 13 Recently, our group 14 calculated the thermal rate coefficients and KIE using an approximate full-dimensional quantum mechanical method, ring polymer molecular dynamics (RPMD), on the LEPS PES 15,16 . The RPMD results are also consistent with those of other theoretical approaches such as ICVT and quantum dynamics (QD). For the Cl( 2 P) + DCl reaction, the RPMD rate coefficients at higher temperatures are very accurate compared with the experimental results. However, at 312.5 K the RPMD results are slightly lower than the experimental values, although they are still close to the values from other theoretical methods. This may stem from the inaccuracy of the LEPS PES since, in our previous experience, the quality of the PES used is essential for RPMD calculations. Because neither the geometry nor the energetics of the TS are well established from the above discussion, an accurate PES based on a high-level quantum chemical method is needed.
To unveil the dynamics of the title reaction, it is usually essential to build an accurate global PES, which can be achieved by fitting a large number of high-level ab initio energy points. According to the previous work of G. Schatz, 8 MRCI is necessary to calculate the energy of the sampled configurations, owing to the multiple-electronic-state character of the reaction system. For fitting the PES, a novel method named the embedded atom neural network (EANN) was recently proposed by Bin Jiang's group. 17 It is extendable to high-dimensional bimolecular reactions when combined with an active learning technique, so we chose the EANN method in this work. To test the performance of the EANN method in constructing potential energy surfaces of bimolecular reactions, we have also adopted the standard permutation invariant polynomial neural network (PIP-NN) method, [18][19][20] which has already been demonstrated to be suitable for fitting polyatomic bimolecular reactions and is widely used, for example for OH + H2O. 21 The kinetic behavior for the isotope H is also validated by quantum dynamics using wave packets. This work is organized as follows. The PESs employed in the current work and the related theories and calculation details are introduced in Section II. The results are presented and discussed in Section III. The final summary is contained in Section IV.
II.A PIP-NN PES
All electronic structure calculations in this work were performed using MOLPRO2015. 27 The geometries, energies and harmonic frequencies of all stationary points of the reactants, products and TSs were obtained at the MRCI-F12+Q level. [28][29][30] This method has been shown to give reliable potential energy surfaces owing to its high energy accuracy.
The initial data set is sampled over the three bond lengths R HCl 1 , R HCl 2 , and R Cl 1 Cl 2 in the range of 0.8-20 Å. The potential energy surface is then further improved by adding points in the region around the stationary points and along the reaction path. In this way, we obtain a primitive potential energy surface.
Based on this primitive potential energy surface, RPMD calculations are performed at different temperatures from 300 to 1500 K. The exploration of the dynamically relevant regions tests the performance of the potential energy surface, which behaves unreliably in regions lacking data points. Data points in these regions are therefore sampled to repair the potential energy surface. This process is repeated iteratively until all relevant dynamical results converge. To improve the sampling efficiency, only those points that are not close to the existing data set are added, as judged by a generalized Euclidean distance.
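A minimal sketch of this selection step is shown below: a candidate geometry, represented by its three internuclear distances, is appended to the training set only if its distance to every existing point exceeds a chosen threshold. The plain (unweighted) Euclidean metric and the 0.05 Å threshold are illustrative assumptions, not the settings used in this work.

```python
import numpy as np

def maybe_add_point(dataset, candidate, threshold=0.05):
    """Append `candidate` (three internuclear distances, in Angstrom) to
    `dataset` only if it is not close to any existing geometry.

    The plain Euclidean distance and the 0.05 Angstrom threshold are
    illustrative choices, not the values used in the paper.
    """
    if len(dataset) == 0:
        return np.asarray([candidate])
    d = np.linalg.norm(np.asarray(dataset) - np.asarray(candidate), axis=1)
    if np.min(d) > threshold:
        return np.vstack([dataset, candidate])
    return np.asarray(dataset)  # too close to an existing point: rejected

geoms = np.array([[1.30, 1.30, 2.60]])                  # R_HCl1, R_HCl2, R_Cl1Cl2
geoms = maybe_add_point(geoms, [1.32, 1.45, 2.70])      # accepted
geoms = maybe_add_point(geoms, [1.301, 1.302, 2.601])   # rejected as redundant
print(geoms)
```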
II.B Embedded Atom Neural Network Potentials
Although the PIP-NN method works well for constructing potential energy surfaces, it is difficult to extend to large systems containing many atoms, because the number of PIPs becomes too large. Thus, using the embedded atom neural network (EANN) method, 17 one can construct a high-dimensional potential energy surface in which the total energy is calculated as a sum of atomic energies. Specifically, atoms are embedded in an environment of other atoms, and their atomic energy is derived from an atomic neural network based on nonlinear transformations of electron densities. In Eq. (2), ρ i is a density-like structural descriptor that may be constructed simply from Gaussian-type atomic orbitals centered on the neighboring atoms, where r ij denotes the Cartesian coordinate vector of the embedded atom i with respect to the neighbouring atom j, and r ij its norm.
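To make the idea of an embedded-density descriptor concrete, the simplified sketch below sums s-type Gaussian contributions from all neighbours within a cutoff and squares the result. The full EANN descriptor uses contracted Gaussian-type orbitals of several angular momenta with trainable parameters, so this function is a schematic illustration of the concept, not the production implementation.

```python
import numpy as np

def embedded_density(r_i, neighbors, alpha=1.0, r_s=0.0, r_c=6.0):
    """Schematic EANN-style descriptor for the atom at r_i: the squared sum of
    Gaussian-type contributions from neighbours inside a cutoff r_c.

    Only an s-type orbital with fixed width alpha and centre r_s is used here;
    the actual EANN descriptor sums contracted GTOs of several angular momenta
    with trainable coefficients.
    """
    rho = 0.0
    for r_j in neighbors:
        r_ij = np.linalg.norm(np.asarray(r_i) - np.asarray(r_j))
        if r_ij < r_c:
            fc = 0.5 * (np.cos(np.pi * r_ij / r_c) + 1.0)   # smooth cutoff
            rho += fc * np.exp(-alpha * (r_ij - r_s) ** 2)
    return rho ** 2

# H atom embedded between the two Cl atoms (coordinates in Angstrom)
print(embedded_density([0.0, 0.0, 0.0],
                       [[0.0, 0.0, 1.5], [0.0, 0.0, -1.6]]))
```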
II.C RPMD reaction rate theory
All calculations are performed using the RPMD rate theory implemented in the RPMDrate code 38 . Since there have been many review articles about it, 39,40 here we only give a brief summary closely related to the current work. For the title reaction, the Hamiltonian can be written in the usual form, where p̂ i , q̂ i and m i are the momentum operator, the position operator and the mass of the ith atom, respectively. Taking advantage of the classical isomorphism between a quantum system and a ring polymer, each quantum particle is represented by a necklace formed by n classical beads connected by harmonic potentials. 15,41 The RPMD rate coefficient is expressed as the product of two factors. The first, k QTST (T; ξ ‡ ), is the centroid-density QTST rate coefficient, 16,44 evaluated at the maximum ξ ‡ of the free energy barrier along the reaction coordinate ξ(q). In practice, it is calculated from the centroid potential of mean force (PMF), 38-40 where µ R is the reduced mass of the two reactants and the free-energy difference is obtained via umbrella integration along the reaction coordinate. 38,45 The second factor, κ(t → ∞; ξ ‡ ), is named the transmission coefficient; it provides the dynamical correction and is calculated as the ratio between the long-time limit and the zero-time limit of the flux-side correlation function. It captures the recrossing of the TS region and ensures that the obtained RPMD rate coefficients do not depend on the choice of the dividing surface. 16 It should be noted that the final RPMD rate coefficients are corrected by an electronic partition function ratio to account for the spin-orbit splitting of the Cl( 2 P) atom. In addition, when only one bead is used, the results from RPMD reduce to the classical limit. In this limit, the static and dynamic components become the same as the classical transition state theory (TST) rate coefficient and the classical transmission coefficient, respectively. These classical limits therefore provide the reference against which quantum effects, such as ZPE and tunneling, can be evaluated by using more beads. The minimum number of beads needed to capture these quantum effects is given by n min = βℏω max (11), 47 where ω max is the largest vibrational frequency in the system. In this work, the convergence is tested with an increasing number of beads, and the numbers that yield converged PMF results are chosen at the different temperatures. In the Supporting Information, Figure S2 shows the convergence of the PMF curves at 312.5 K obtained with different numbers of beads.
Additionally, there is a critical temperature, the cross-over temperature 48 T c = ℏω b /(2πk B ), where ω b is the magnitude of the imaginary frequency of the reaction system at the TS. When the temperature is lower than T c , the system is in the deep-tunneling region and the error of the RPMD results can become large, so enough beads are needed to obtain accurate results. The cross-over temperature for the title reaction is T c = 348 K.
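Both of these bookkeeping relations are easy to evaluate numerically. The snippet below inverts the cross-over expression to show which imaginary barrier frequency is consistent with the quoted T c = 348 K, and applies Eq. (11) with an assumed highest frequency near the HCl stretch (about 3000 cm-1, an illustrative value rather than one taken from the present PES).

```python
from scipy.constants import hbar, k, c, pi

cm1_to_rad_s = 2 * pi * c * 100.0     # angular frequency per wavenumber

# Cross-over temperature T_c = hbar * omega_b / (2 * pi * k_B); invert it to
# see which imaginary barrier frequency corresponds to the quoted 348 K.
T_c = 348.0
omega_b = 2 * pi * k * T_c / hbar
print(f"barrier |frequency| ~ {omega_b / cm1_to_rad_s:.0f} cm^-1")

# Minimum bead number n_min = beta * hbar * omega_max (Eq. 11), assuming the
# highest frequency in the system is near the HCl stretch (~3000 cm^-1, an
# illustrative value).
T = 300.0
omega_max = 3000.0 * cm1_to_rad_s
n_min = hbar * omega_max / (k * T)
print(f"n_min at {T:.0f} K ~ {n_min:.1f} -> use at least {int(n_min) + 1} beads")
```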
II.D Quantum dynamics
The time-dependent quantum wave packet method is used to calculate the thermal rate coefficients as well. The Hamiltonian of the system in the reactant Jacobi coordinates can be written in the standard form, 53 where µ R is the reduced mass between Cl and HCl; µ r is the reduced mass of the reactant HCl; R is the distance from the attacking Cl atom to the center of mass of the reactant HCl; r is the bond distance of the reactant HCl; θ is the bending angle between the vectors R and r; Ĵ tot is the total angular momentum operator of the system; ĵ is the rotational angular momentum operator of the reactant HCl; and V is the potential energy operator.
The time-dependent wavefunction is expanded in a set of basis functions. The initial state-specific rate constant is obtained by thermally averaging the corresponding integral cross section (ICS) over the collision energy, where E c is the collision energy and k B is the Boltzmann constant. The electronic partition function Q e is included to account for the spin-orbit splitting of the Cl( 2 P) atom.
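One standard form of this Boltzmann average is k(T) = (8k B T/πµ) 1/2 (k B T) -2 ∫ σ(E c ) E c exp(-E c /k B T) dE c , and the sketch below evaluates it by simple numerical quadrature. The step-like cross section and the approximate Cl/HCl reduced mass are placeholders for illustration, not results from the present calculations.

```python
import numpy as np

KB = 1.380649e-23          # Boltzmann constant, J/K

def thermal_rate(sigma, mu, T, e_max=2.0e-19, n=4000):
    """k(T) = sqrt(8 kB T / (pi mu)) * (kB T)^-2 * Int sigma(E) E exp(-E/kB T) dE,
    evaluated with a simple rectangle rule (SI units throughout)."""
    E = np.linspace(1.0e-23, e_max, n)
    dE = E[1] - E[0]
    integral = np.sum(sigma(E) * E * np.exp(-E / (KB * T))) * dE
    return np.sqrt(8 * KB * T / (np.pi * mu)) * integral / (KB * T) ** 2

def toy_sigma(E):
    """Hypothetical step-like ICS: 0.1 A^2 above a 0.3 eV threshold."""
    return np.where(E > 0.3 * 1.602e-19, 0.1e-20, 0.0)

mu_Cl_HCl = 18.0 * 1.66054e-27      # approximate Cl/HCl reduced mass, kg
for T in (300.0, 1000.0):
    print(f"k({T:.0f} K) ~ {thermal_rate(toy_sigma, mu_Cl_HCl, T):.2e} m^3/s")
```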
An L-shaped grid is used in this work. 55 The numerical parameters are listed in Table I. Figure 2 shows contour plots as functions of the breaking (RHCl 1 ) and forming (RHCl 2 ) bonds with the bond angle ∠ClHCl relaxed.
III.A Properties of the NN PES
It is clear that this is a typical symmetric reaction. However, while the real-frequency differences of the DCBKS PES are small, its imaginary-frequency difference is up to 10%. [3] (Table footnotes: f, the DIM-3C PES, see details in Ref. [6]; g, the PK3 PES, see details in Ref. [5]; h, the POL-CI PES, see details in Ref. [1]; i, the DCBKS PES, see details in Ref. [8].)
III.B RPMD rate coefficients
In this work, the thermal rate coefficients for the Cl( 2 P) + XCl (X = H, D, Mu) reactions were calculated over the temperature range of 200-1000 K. The calculation is first performed with one bead, which provides the classical limit, and then the number of beads is increased until the results converge.
In the Cayley-RPMD calculations, the number of beads needed depends on the temperature and the isotope. The number of beads should not be less than the minimum suggested by Eq. (11). The rate coefficients calculated by the RPMD method and other theoretical methods, as well as the experimentally measured rate coefficients, are listed in Table II.
As described in Section II C, the reaction is in the deep-tunneling region at 312.5 K, since T c is 348 K for the title reaction. From previous discussions, results below T c would be expected to underestimate the rate coefficients since the title reaction has a symmetric barrier. 56 In this work, however, the Cayley-RPMD results are still in good agreement with the experimental values.
The left panel of Figure 3 shows the PMF of the Cl( 2 P) + XCl (X = H, D, Mu) reactions at 312.5 K. The converged RPMD barriers (with the optimal number of beads) for all three reactions are lower than the classical (single-bead) results. This is due to tunneling effects that make it easier for the three isotopes to penetrate the potential energy barrier.
The free energy barriers of the different isotopic reactions decrease as ∆G D > ∆G H > ∆G Mu , following the decrease in the mass of these isotopes. This order comes from the fact that the smaller the mass, the greater the tunneling capacity. The right panel of Figure 3 shows the corresponding transmission coefficients. Table II shows that the converged RPMD transmission coefficients also increase with increasing temperature, indicating that recrossing is more pronounced at low temperature, which is consistent with previous studies 39,40 . As can also be seen from Table II for the RPMD rates at temperatures from 200 K to 1000 K, the free energy barrier increases with increasing temperature. The left panel of Figure 4 shows the calculated rate coefficients for Cl( 2 P) + HCl. The states with simultaneous vibration-rotation excitation are not considered in the quantum wave packet calculations, which is believed to have a negligible effect on the thermal rate constants. As shown in Figure 4, the quantum thermal rate constants agree reasonably well with the Cayley-RPMD results, although the former are slightly higher at low temperatures. The QD calculations show that the reaction energy threshold is visibly lower than the classical barrier height, indicating the existence of a significant quantum tunneling effect for the reactions, which is consistent with the PMF curves from RPMD. The right panels of Figure 4 show the Cl( 2 P) + DCl rate coefficients in the form of an Arrhenius plot.
IV. Summary and conclusions
In this work, we have investigated the reaction dynamics of Cl( 2 P) + HCl → HCl + Cl( 2 P). Firstly, we have developed two new full-dimensional neural network PESs for the ground state of the title reaction, using the permutation invariant polynomial neural network (PIP-NN) method and the embedded atom neural network (EANN) method, based on 5986 and 6515 points, respectively, at the MRCI-F12+Q/AVTZ level, with total root mean square errors of 0.043 kcal/mol and 0.056 kcal/mol. In particular, this is the first application of the EANN method to gas-phase bimolecular reactions.
From the comparison, the performance is good, and the approach is promising for application to reactions involving more atoms when combined with an active learning technique. We have also confirmed the
"year": 2023,
"sha1": "57e6d0f6d4b08a8d1d89c47c476449c9e0be129e",
"oa_license": "CCBY",
"oa_url": "https://pubs.aip.org/aip/jcp/article-pdf/doi/10.1063/5.0151401/18000597/234301_1_5.0151401.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "57e6d0f6d4b08a8d1d89c47c476449c9e0be129e",
"s2fieldsofstudy": [
"Chemistry",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
117183814 | pes2o/s2orc | v3-fos-license | Critical currents in the BEC/BCS crossover regime
Both the trapping geometry and the interatomic interaction strength of a dilute ultracold fermionic gas can be well controlled experimentally. When the interactions are tuned to strong attraction, Cooper pairing of neutral atoms takes place and a BCS superfluid is created. Alternatively, the presence of Feshbach resonances in the interatomic scattering allows populating a molecular (bound) state. These molecules are more tightly bound than the Cooper pairs and can form a Bose-Einstein condensate (BEC). In this contribution, we describe both the BCS and BEC regimes, and the crossover, from a functional integral point of view. In this description, the properties of the superfluid (such as vortices and Josephson tunneling) can be derived and followed as the system is tuned from BCS to BEC. In particular, we present results for the critical current of the superfluid through an optical lattice and link these results to recent experiments with atomic bosons in optical lattices.
I. THE ULTRACOLD DILUTE FERMI GAS
When a dilute Bose gas is cooled below the degeneracy temperature, the bosonic atoms all condense in the same one-particle state and a Bose-Einstein condensate forms. This has been convincingly demonstrated with magnetically trapped, evaporatively cooled atomic gases for a multitude of atom species. Moreover, magnetic or optical traps can be equally well loaded with fermionic isotopes, such as 6 Li or 40 K. These atoms do not undergo Bose-Einstein condensation, but fill up a Fermi sea, as has been demonstrated through the observation of the Pauli blocking effect [1] and through a measurement of the total energy of the Fermi gas [2]. Very soon after the observation of a degenerate Fermi sea of atoms, researchers embarked upon the quest to achieve Cooper pairing in the dilute Fermi gas. Indeed, for metals we know that the Fermi sea is unstable with respect to Cooper pair formation. So, if the (neutral) atoms in the dilute gas attract each other, a similar instability towards a paired state is to be expected.
The interatomic interactions in ultracold gases are remarkable for two reasons. Firstly, the collisions between the atoms can be satisfactorily characterized by a single number, the s-wave scattering length a s . For low-energy collisions, the effective interaction potential between atoms becomes a contact potential, V (r − r ′ ) = gδ(r − r ′ ), where g = 4πℏ 2 a s /m with m the mass of the atoms. The scattering length can be both positive (leading to interatomic repulsion) or negative (attraction).
Secondly, this scattering length can be tuned by an external magnetic field when a Feshbach resonance is present [3]. This resonance occurs when the energy of a bound (molecular) state in a closed scattering channel becomes equal to the energy of the colliding atoms in the open scattering channel. The different channels correspond here to different hyperfine states of the trapped atoms, and the distance in energy between these states can be tuned with a magnetic field.
In what follows, we will consider a trapped mixture of 40 K atoms in the |9/2, −7/2 and |9/2, −9/2 hyperfine states. This potassium isotope is fermionic, and the trapped states display a Feshbach resonance at B = 202.1 Gauss. When the scattering length is tuned to a negative value, the atoms attract and Cooper pairs can form leading to a BCS regime. The critical temperature for Cooper pairing can be raised by making the scattering length more strongly negative. When the scattering length is large and positive, the molecular state in the closed channel is populated, and molecules appear that can be Bose-Einstein condensed (the BEC regime). The adaptability of the scattering length allows bringing the gas from the BCS regime into the BEC regime or vice versa, and allows studying the interesting intermediate 'crossover' regime.
The first experimental realization of superfluidity of a Fermi gas in the molecular BEC regime came in 2003 [4]. A condensate of molecules was convincingly observed. The detection of superfluidity in the BCS regime is however much more subtle. In an initial experiment [5], the superfluid behavior was derived from the hydrodynamic nature of the expansion of the cloud, as compared to the ballistic expansion expected for a non-superfluid weakly interacting Fermi gas [6]. However, this did not constitute unambiguous proof, since the Fermi gas was in the strongly interacting regime. Subsequent experiments probed superfluidity by mapping the pair density onto a molecular condensate density [7] or by spectroscopically measuring the gap [8]. Yet although these experimental methods clearly demonstrate pairing, they do not unambiguously demonstrate superfluid behavior. The very recent observation of a lattice of quantized vortices in resonant Fermi gases [9] constitutes the first clear demonstration of superfluidity in the BEC/BCS regime. Observation of these vortices well in the BCS regime may be difficult since the fermionic density penetrates into the core of the vortex in the BCS regime, leading to a loss of contrast in direct imaging [9,10,11]. Another possibility to demonstrate superfluidity is through the observation of the Josephson effect [12] in optical lattices. These optical lattices are periodic potentials formed by two counterpropagating laser beams, for example in the z-direction: V opt (z) = sE R sin 2 (2πz/λ) (1), where λ is the laser wavelength, E R = h 2 /(2mλ 2 ) is the recoil energy, and s is the laser intensity expressed in units of the recoil energy. Typically, s = 1 − 20, λ = 795 nm. The atoms collect in the valleys of the optical lattice and form a "stack of pancakes", illustrated in Fig. 1. Typically, there are on the order of a few 100 'pancakes' with on the order of 1000 atoms each. When a superfluid is loaded in such an optical lattice, the system corresponds to an array of Josephson junctions. In such an array, the superfluid gas can propagate whereas the normal-state gas is pinned. This has already been demonstrated for bosonic atoms [13], and has been predicted theoretically for fermionic atoms [12,14].
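The numbers quoted here are easy to reproduce. The snippet below evaluates the recoil energy for 40K at λ = 795 nm and the resulting lattice depth for a given s; the sin² standing-wave form of Eq. (1), with period λ/2, is used exactly as stated above.

```python
from scipy.constants import h, k, physical_constants
import numpy as np

m_K40 = 39.964 * physical_constants["atomic mass constant"][0]
lam = 795e-9                       # laser wavelength quoted in the text

E_R = h**2 / (2 * m_K40 * lam**2)  # recoil energy E_R = h^2 / (2 m lambda^2)
print(f"E_R / h  = {E_R / h / 1e3:.1f} kHz")
print(f"E_R / kB = {E_R / k * 1e9:.0f} nK")

def lattice_potential(z, s):
    """Optical-lattice potential V(z) = s E_R sin^2(2 pi z / lambda), the
    standing-wave form of Eq. (1) with period lambda/2."""
    return s * E_R * np.sin(2 * np.pi * z / lam) ** 2

z = np.linspace(0, lam, 5)
print(lattice_potential(z, s=10) / E_R)   # depth profile in units of E_R
```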
In this contribution, we derive and discuss the critical Josephson current for the flow of the superfluid component through an optical lattice. For this purpose, we base ourselves on the path-integral description as applied by Randeria and co-workers [15,16] to the BEC/BCS crossover model of high-T c superconductors. In section II, we give an overview of the application of path-integrals to the system of ultracold fermions, and in section III we present our results for the critical current.
II. PATH-INTEGRAL TREATMENT OF THE BEC/BCS CROSSOVER
The partition function for the atomic Fermi gas is given by a functional integral over the fermionic fields, weighted by the exponential of (minus) the action of the gas. The fermionic fields ψ x,τ and ψ̄ x,τ are Grassmann variables. The interaction potential, as discussed in the previous section, is a contact potential with experimentally adjustable strength g.
The two hyperfine states are denoted by σ =↑, ↓. The functional integral over the Grassmann variables can be performed analytically only for an action that is quadratic in ψ x,τ and ψ̄ x,τ . In order to get rid of the quartic term in (3) we perform a Hubbard-Stratonovich (HS) transformation, introducing auxiliary bosonic fields ∆̄ x,τ and ∆ x,τ . Indeed, performing the functional integral over the HS fields ∆̄ x,τ , ∆ x,τ in (5) brings us back to (3). Our goal is an investigation of the superfluid properties of the ultracold Fermi system. For a straightforward hydrodynamic interpretation of the Hubbard-Stratonovich fields, it is advantageous to work with |∆ x,τ | and θ x,τ . These are related to the original HS field by ∆ x,τ = |∆ x,τ | exp(iθ x,τ ). We have restricted the functional integral to ∆̄ x,τ = (∆ x,τ ) * without neglecting any field configurations of importance to the final result. The hydrodynamic interpretation of |∆ x,τ | 2 is the density of fermion pairs, whereas ∇ x θ x,τ /m = v x,τ can be interpreted as the superfluid velocity field. Performing this change of variables in the functional integral yields the partition function in the form (6). De Palo et al. [17] suggest at this point to introduce additional collective quantum variables to extract the fermionic density. However, care must be taken, since when additional collective quantum fields are present the problem of double-counting poses itself [18], and variational perturbation theory has to be applied to avoid double-counting [19]. However, in the present case it is not necessary to explicitly introduce the additional collective variables to obtain information about the atomic density profile [20]. In (6) the integration over the fermionic variables can be taken, leading to an expression with an effective action (9) in which the inverse propagator can be written as the sum of an inverse 'free fermion propagator' and a term arising from the superfluidity. The inverse free fermion propagator and the superfluid part of the propagator are expressed in terms of the Pauli matrices σ 0 ...σ 3 . Note that if we have an external potential V ext (x) present, for example the optical potential or the magnetic trap, this appears in −G −1 0 as an extra term +σ 3 V ext (x). The effective action (9) depends on the fields |∆ x,τ | , θ x,τ and ρ x,τ , ζ x,τ . For the former, a saddle point approximation is usually made. For example, a good saddle point form when no vortex is present is a constant gap with a constant phase [15,16]. The value of the constant for the phase is irrelevant, and the value of ∆ can be extracted by extremizing the effective action, δS eff /δ∆ = 0. This yields the well-known gap equation in the case of neutral atoms interacting through a contact potential. Alternatively, we proposed in Ref. [11] to use a different saddle point approximation to investigate the case of a fermionic superfluid containing a vortex parallel to the z-axis, in which the gap acquires a phase winding around the vortex. Here, φ is the angle around the z-axis, and r is the distance to the z-axis. Again, a gap equation can be derived for ∆ r by extremizing the action - this gap equation yields a gap that depends on the distance to the vortex line (the z-axis). Fixing the total number of fermions yields a number equation in which the local density of fermions can be identified straightforwardly.
Consider first the simplest saddle point approximation, (SP1). The saddle point result for the action in this case contains two unknowns: the chemical potential µ and the value of the constant ∆, the gap. The chemical potential is obtained by fixing the particle density. In the BCS limit, µ → E F , whereas in the BEC limit the chemical potential goes to the binding energy of the molecule, µ → −ℏ 2 /(ma s 2 ). In the intermediate regime, there is a smooth crossover between the two limiting values. The gap ∆ is found by extremizing the saddle point action, δS sp1 /δ∆ = 0. The result is shown for different temperatures in figure 2. At temperature zero, the gap depends exponentially on the scattering length, as we expect from the BCS theory. As the temperature is raised, the gap decreases, reaching zero at a certain temperature. In the BCS limit, the superfluidity is destroyed by breaking up Cooper pairs, so the critical temperature corresponds to the temperature where ∆ = 0. However, in the BEC limit, superfluidity is destroyed through phase fluctuations, and one cannot extract the critical temperature from the results shown in figure 2. It becomes necessary to include fluctuations around the saddle point value (SP1) and expand the effective action up to second order in these fluctuations around the saddle point value. This second-order expansion yields an action that is quadratic in the fluctuation variables and that can be integrated analytically. For fluctuations around the saddle point (SP1) this was done by Randeria and co-workers, who obtained a corrected value of the critical temperature that in the BEC limit becomes independent of 1/(k F a s ). More recently, the effects of fluctuations in the superfluid regime were studied in the context of a diagrammatic expansion of the thermodynamic potential in Refs. [21,22].
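The exponential dependence of the zero-temperature gap quoted here corresponds, in the weak-coupling limit, to the textbook mean-field expression ∆ ≈ (8/e 2 ) E F exp[−π/(2k F |a s |)]. The snippet below evaluates this closed form for a few coupling strengths to show how steeply the gap collapses on the BCS side; it is the standard BCS limit, not an expression reproduced from this paper, and its validity degrades as |k F a s | grows toward the crossover.

```python
import numpy as np

def bcs_gap_over_EF(kF_as):
    """Weak-coupling mean-field gap, Delta/E_F = (8/e^2) exp(-pi / (2 kF |a_s|)),
    for kF*a_s < 0 and small |kF*a_s| (textbook BCS limit)."""
    return (8.0 / np.e**2) * np.exp(-np.pi / (2.0 * abs(kF_as)))

for kF_as in (-0.5, -1.0, -2.0):
    print(f"1/(kF a_s) = {1/kF_as:+.1f}  ->  Delta/E_F ~ {bcs_gap_over_EF(kF_as):.3f}")
```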
A. Effective action in the optical potential
The path-integral method outlined in the previous section has been applied before to describe vortices in a superfluid Fermi gas [11] and to describe the propagation of a superfluid Fermi gas in an optical potential [12]. When an optical lattice (1) is present along the z-direction, we can decouple the (free) motion in the x, y-plane from the (tunneling) motion in the z-direction. To make this decoupling clear in the notation, we write the action in the partition function of the system as a sum of intralayer contributions and a tunneling contribution. In the BEC regime, fluctuations around the saddle point need to be taken into account to obtain the correct critical temperature [15,21,22].
The action functional for the gas in layer j separately is the two-dimensional version of the action functional (3), supplemented with a layer index. Moreover, there is an external potential V ext (j) acting on each layer. This can be a parabolic potential in addition to the optical potential itself. The tunneling of atoms from one layer to another is described by a hopping term, where the tunneling energy t 1 needed to bring an atom from one well of the optical potential to the next was derived in Ref. [23]. For this particular decomposition of the action functional into intralayer contributions and tunneling contributions, we can perform the same analysis as described in the previous section. A Hubbard-Stratonovich transformation gets rid of the four-operator term and introduces the HS fields |∆ j | , θ j , after which the integration over the fermionic variables is performed. The final result for the effective action can again be written as the sum of contributions independent of t 1 and tunneling contributions. The tunneling contributions in the effective action can be treated perturbatively. In that framework, the saddle-point values |∆ j | can be extracted from the gap equation of each layer separately, and the chemical potential µ is obtained from the number equation. In each layer j, there is an 'effective' chemical potential V ext (j) + µ fixing the local density ρ j in layer j. Based on these results for the layers, the lowest-order perturbative expansion of the action with respect to the tunneling part (t 1 ) yields an expression in which E b , the binding energy of the molecule, appears. This molecular binding energy enters through the gap equations and can be derived from scattering theory in reduced dimensionality; it is given in Ref. [24]. It is important to note that the binding energy depends on the intensity and wavelength of the lasers generating the optical potential. More intense laser beams or smaller wavelengths confine the gas more strongly in the optical lattice and alter the binding energy of the resonant molecules. A more detailed determination of the molecular binding energy in an optical lattice, taking into account molecules formed from atoms in neighboring lattice sites, is given in Ref. [25].
B. Coupled density-phase equations
The equations of motion for the remaining variables (the density ρ j and phase θ j in layer j) can be derived from the effective action (19)-(22) through the extremum conditions δS eff /δθ j = 0 and the number equation. This leads to the equations reported by the present authors and M. Wouters in Ref. [12]. In these equations, we have introduced the possibility of applying an external potential V ext (j) varying over the layers. Here, we investigate the case with a constant phase difference θ j+1 − θ j = ∆θ and a smoothly varying density ρ j+1 ≈ ρ j . This situation corresponds to a uniform flow of superfluid through the lattice. Equations (25) and (26) then simplify accordingly. In the BEC case, E b ≫ ℏ 2 ρ j /m and we retrieve the equations describing a conventional Josephson junction array. However, on the BCS side, the tunneling coefficients start to depend on ρ j , as E b and ℏ 2 ρ j /m become comparable.
C. Critical Josephson current and critical velocity
Equation (27) states that the current density J is proportional to sin(∆θ). This is similar to the first Josephson equation, J = J c sin(∆θ), as discussed for a gas in an optical lattice in Ref. [26]. This yields a critical current density for Josephson tunneling from layer to layer. The layers are separated by a distance λ/2. From J c we can then extract the critical velocity for the fermionic atoms through the optical lattice, given by expression (30). The critical velocity of the fermionic superfluid depends on the scattering length a s via the binding energy of the Feshbach resonant molecule, E b . The critical velocity also depends on the density (or, equivalently, the Fermi wave vector). In Figure 3 we show the results for the critical velocity (expressed in microns per millisecond), as a function of k F and of the interaction parameter 1/(k F a s ). In the region 1/(k F a s ) > 0 we are in the molecular BEC regime, and E b ≫ ℏ 2 ρ j /m. The critical velocity in the BEC regime is roughly proportional to (t 1 ) 2 /E b . In the region 1/(k F a s ) < 0, the BCS regime of Cooper pairs arises, and the result for the critical velocity becomes nontrivial. For each fixed value of 1/(k F a s ) < 0, there appears a maximum as a function of k F . This maximum occurs when E b ≈ ℏ 2 ρ j /m, minimizing the denominator in (30).
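The sinusoidal current-phase relation can be made concrete in a few lines: scanning the phase difference shows that the largest sustainable (critical) current is reached at ∆θ = π/2. The conversion of a critical current density to a velocity by dividing by the particle density, noted in the final comment, is an illustrative assumption; the paper's explicit expression (30) is not reproduced here.

```python
import numpy as np

J_c = 1.0                        # critical current density (arbitrary units)
dtheta = np.linspace(0, np.pi, 181)
J = J_c * np.sin(dtheta)         # first Josephson relation, Eq. (27)

i_max = np.argmax(J)
print(f"maximum current J = {J[i_max]:.3f} J_c "
      f"at dtheta = {np.degrees(dtheta[i_max]):.0f} deg")

# If the particle density rho is known, a velocity scale can be formed from
# v ~ J / rho (an assumption used only for illustration; the dependence on
# a_s and k_F enters through J_c via Eq. (30) in the text).
```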
Although superfluid gases of bosonic atoms have already been studied in optical lattices [13,27], superfluid Fermi gases have so far not been loaded into optical lattices. Nor have molecular condensates been placed in optical lattices. For atomic condensates, a critical velocity could be determined experimentally [27], and was found to vary between 0.2 and 1.2 µm/ms for 87 Rb atoms. This is comparable to the velocities that we predict for (fermionic) 40 K in the same λ = 795 nm optical potential. Thus, the superfluid regime of paired fermionic atoms in an optical lattice should be accessible experimentally.
IV. CONCLUSIONS
The path-integral description of ultracold fermionic atoms interacting through a tunable contact potential allows one to describe vortex configurations and other non-ground-state configurations through a judicious choice of saddle point. We apply this formalism to the case of a fermionic gas in an optical potential. When the fermionic gas is in the superfluid regime, the layers of gas in the optical potential form a Josephson junction array. Equations of motion for the density and phase in each layer are obtained and applied to the case where the phase difference between consecutive layers is constant. This permits the derivation of a critical velocity for the superfluid flow through the optical potential. Although these results are, strictly speaking, derived for T = 0, in the experiments the temperature can typically be brought well below the degeneracy temperature, so we believe our results will be relevant to experiments with optical lattices. | 2019-04-14T02:09:03.401Z | 2005-07-30T00:00:00.000 | {
"year": 2005,
"sha1": "3aed2e875edfffd177b7abed0cd6a3f2e9af4b29",
"oa_license": null,
"oa_url": "https://repository.uantwerpen.be/docman/irua/69d74c/7963.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3aed2e875edfffd177b7abed0cd6a3f2e9af4b29",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
225761353 | pes2o/s2orc | v3-fos-license | Comparisons of spectrally resolved nightglow emission locally simulated with space and ground level observations
A mesospheric model of the airglow emission is developed to recover the night variations observed at ground level. The model is based on a 1D vertical photochemical model, including the photodissociation and heating processes. The spectral radiation is calculated at high altitude and propagated through the atmosphere to the ground. We also include short-scale vertical dynamics such as turbulence and molecular diffusion. Simulations reveal realistic emissions when compared with space observations. In addition, we estimate the impact of changes associated with parameterized atmospheric tides. The comparison with observations is performed at high altitude and at ground level. We confront the model outputs at high altitude with satellite observations (SABER and GOMOS), and the simulations propagated to ground level are compared with local measurement campaigns performed in France and India. Biases between observed and simulated radiances and volume emission rates are suspected to be due to the impact of gravity waves or the large-scale dynamics.
Introduction
The night airglow is the radiation emitted over a wide spectrum originating from the chemical [...] coupling an OH*-model with a chemistry-transport model. However, these studies do not spectrally resolve the emission, as they only produce global (or transition-specific) volume emission rates (VER). In order to simulate the emission spectrum at the various altitudes concerned, it is mandatory to include in the model the various excited states implicated in the emission as reactive species. Very few models able to simulate the full spectrum observed at high altitude have been developed, and none are available for ground-based analyses. Moreels [...] noting that the models listed above are not implemented with a radiative transfer model, required in order to propagate the spectrum simulated at high altitude down to the ground. The objective of this study consists of simulating the nightglow that can be observed at ground level. Therefore, a local photochemical model was developed based on the most up-to-date coefficients. In contrast to other models, various excited states along with a radiative transfer module are included in order to obtain the OH spectral emission at high altitude and propagate it down to the ground through interaction with the neutral atmosphere, for comparison with local measurements.
To include the various dynamical processes in the 1D model, temperature and wind fluctuations [...] have been operated down to 0.05 nm in the UV region to take into account the Lyman-α line, the Schumann-Runge and the Huggins bands. Above 3 µm, data are derived from Thekaekara (1974).
The solar zenith angle is calculated using the Chapman function (Smith and Smith, 1972). The mesopause is subject to strong energy exchanges (Mlynczak, 1997). The importance of the heating has been noted by Mlynczak (2000). We consider in the model the solar heating, the chemical heating and also the radiative cooling (by CO 2 ). The solar irradiance is absorbed by the [...] states is converted into kinetic energy. We apply here the formulation from Brasseur and Solomon (2005) that expresses the difference of absorbed solar radiation at a specific layer i between two vertical levels: with dT/dt the heating rate, i.e. the variation of the temperature T with time t at layer i, the solar zenith angle Z, the density ρ, the heat capacity C P , and I(z, λ) the incident solar intensity. With k r the rate of the considered reaction, the densities of the considered reactants ρ(1, 2) and of air ρ, the reaction enthalpy H, the Boltzmann constant k b and the Avogadro number N A .
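As a rough illustration of the heating-rate bookkeeping described above (the exact expressions from Brasseur and Solomon (2005) are not reproduced in this excerpt), the sketch below computes a layer's solar heating from the difference of absorbed irradiance between its boundaries and a chemical heating term from a bimolecular reaction rate and enthalpy; all numbers and profiles are placeholders, not the model's inputs.

import numpy as np

# Generic heating-rate bookkeeping: solar heating of a layer is the absorbed
# irradiance between its top and bottom boundaries divided by rho*Cp*dz;
# chemical heating is the reaction rate times the reaction enthalpy.
# All constants and example values are illustrative assumptions.

N_A = 6.022e23      # 1/mol
Cp = 1004.0         # J kg-1 K-1, specific heat of air (placeholder)

def solar_heating(I_top, I_bottom, rho, dz):
    """dT/dt (K/s) of a layer from the absorbed solar irradiance (W/m^2),
    summed over wavelength bins, between its top and bottom boundaries."""
    absorbed = np.sum(I_top - I_bottom)        # W/m^2 absorbed in the layer
    return absorbed / (rho * Cp * dz)

def chemical_heating(k_r, n1, n2, dH, rho):
    """dT/dt (K/s) from a bimolecular reaction with rate k_r (cm^3/s),
    reactant number densities n1, n2 (cm^-3) and enthalpy dH (J/mol)."""
    rate = k_r * n1 * n2 * 1e6                 # reactions per m^3 per s
    return rate * dH / (N_A * rho * Cp)

# Example with made-up values around 90 km altitude
rho = 3e-6                                     # kg/m^3
dz = 1000.0                                    # m
I_top = np.array([1e-3, 5e-4])                 # W/m^2 in two spectral bins
I_bot = np.array([8e-4, 4e-4])
print("solar heating:", solar_heating(I_top, I_bot, rho, dz) * 86400, "K/day")
print("chemical heating:", chemical_heating(1e-11, 1e7, 1e11, 3e5, rho) * 86400, "K/day")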
The radiative cooling corresponds to the CO 2 infrared radiation around 15 µm. We use here a scheme accounting for LTE (Local Thermodynamic Equilibrium) and non-LTE effects at high altitude.
These heating rates, encompassing the solar heating, the chemical heating and the radiative cooling, [...] Since we aim to compare the model results with ground-based observations, we compute the fully resolved spectrum of the airglow. The intensity of an emission line is written according to: where I(j′, ν′ → j″, ν″) is the transition intensity between the rovibrational states (j′, ν′) and (j″, ν″), j and
The VER for a specific vibrational transition is given by: It is also worth mentioning that, because of the low temperature at this altitude, the local thermal emission of the atmosphere is spectrally located in the mid- and far-infrared and does not interfere with the airglow emission.
The radiative transfer equation is written hereafter: with L(τ, Ω) the radiance at optical depth τ propagating in the direction Ω. J th , J ds , J dm and J glow are the different source functions, respectively from the thermal emission, the single scattering, the multiple scattering and the nightglow emission. The expressions of the various sources follow: [...] The wind advection has two components, the vertical drift velocity, calculated with the molecular diffusion coefficient, and the tidal wind, which is described in the next paragraph. We use here a semi-Lagrangian scheme to resolve the advection. [...] but is higher than the mean profile peak. The consistency is increased for the 2.0 µm peak altitude.
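To make the advection step concrete, the sketch below implements a minimal one-dimensional semi-Lagrangian step of the kind mentioned in the methods above (trace the departure point upstream and interpolate the old profile there); the grid, wind and layer shape are illustrative assumptions, not the authors' implementation.

import numpy as np

# Minimal sketch of a 1D semi-Lagrangian advection step for a species profile
# on a vertical grid, as used for the wind advection mentioned above.
# Linear interpolation, uniform grid, toy values only.

def semi_lagrangian_step(c, w, z, dt):
    """Advance concentration profile c(z) by one step under vertical wind w(z).

    For each grid point, trace the departure point z - w*dt and interpolate
    the old profile there.
    """
    z_depart = z - w * dt
    # np.interp clamps outside the domain, i.e. zero-gradient boundaries
    return np.interp(z_depart, z, c)

z = np.linspace(80e3, 100e3, 201)            # m, mesospheric grid
c = np.exp(-0.5 * ((z - 87e3) / 2e3) ** 2)   # e.g. an OH-like layer at 87 km
w = 0.5 * np.ones_like(z)                    # m/s, uniform upward drift (toy)
dt = 60.0                                    # s

for _ in range(60):                          # one hour of advection
    c = semi_lagrangian_step(c, w, z, dt)

print("layer peak moved to %.1f km" % (z[np.argmax(c)] / 1e3))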
The difference in the profile-to-profile comparison is useful to highlight the limits of the model.
To understand this, the temperature profiles are displayed in Figure 5. [...] Therefore, changes in temperature and concentrations will lead to changes in emission. For example, a local increase in density can induce a local increase in the observed VER. A change in temperature will imply changes in chemical rates and therefore in the sources of OH excited states. In this particular case, where the simulated VER is lower than observed, we assume that the GW increases the temperature, as seen in Figure 5(c), and modifies the density, leading to changes in the chemistry of the nightglow production. Not shown here, the oxygen profile is also larger for the observation | 2020-06-11T09:09:09.407Z | 2020-06-09T00:00:00.000 | {
"year": 2020,
"sha1": "d6f6dc2eda738fa75c8a007197eba0d19193ee5c",
"oa_license": "CCBY",
"oa_url": "https://www.swsc-journal.org/articles/swsc/pdf/2020/01/swsc180077.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "137f0b3a1a2a637574f9da4ba096e83525c903f6",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
244596582 | pes2o/s2orc | v3-fos-license | Tear and serum superoxide dismutase and catalase activities in hypertensive retinopathy
The objective was to determine the changes in SOD and catalase activity, markers of the oxidative stress/antioxidant balance, in the serum and tear of patients with hypertensive retinopathy (HR), and to identify whether there was a correlation between their levels and the degree of HR. Material and Methods — 90 hypertensive patients were divided into three groups according to the Keith-Wagener classification: GI (n=36), GII (n=35) and GIII (n=19). SOD was assessed using the Dubinina and Matyushin method and catalase according to Koroliuk, both in the modification of Gudumac V. The results are presented as median and interquartile range. The groups were compared using Kruskal-Wallis and Mann-Whitney nonparametric tests, and the Spearman correlation coefficient was calculated (SPSS 23.0). Results — There was a statistically significant difference in SOD in serum (p=0.035) and tear (p=0.027) between groups. SOD decreased from GI to GIII in serum (-8%, p=0.032) and tear (-16%, p=0.031). In addition, it showed a weak significant negative correlation with the HR degree both in serum (r=-0.246, p=0.019) and tear (r=-0.284, p=0.007), while the correlation between serum and tear SOD levels was significant, moderate and positive (r=0.336, p=0.001). A significant catalase elevation was noted in the tear (p=0.033). In serum, catalase was not correlated with HR degree, while in tear it showed a significant weak positive correlation (r=0.261, p=0.013). No correlations were found between serum and tear catalase levels. Conclusion — A progressive significant decrease in SOD levels and a tendency toward increased catalase activity were identified as HR advanced, both in serum and in tear. The increase in the severity of HR was correlated with decreased SOD activity in tear and serum and an increased catalase level in tear.
Introduction
Systemic arterial hypertension (HTN) is a major public health issue that is associated with an elevated risk of cardiovascular, cerebrovascular, renal and retinal disorders [1]. The WHO estimated that worldwide 1.13 billion people have HTN and that fewer than 1 in 5 of them have the condition under control [2].
Hypertensive retinopathy (HR) is a series of retinal microvascular changes caused by elevated uncontrolled blood pressure and is the most common ocular complication [1]. A study by Erden et al. showed that the severity and duration of HTN are directly proportional to the incidence of HR, which ranges from 66.3% to 83.6% of all hypertensive subjects [3][4][5].
The main danger of HTN, and of HR itself, lies in the lack of warning symptoms. This is one of the reasons why an asymptomatic hypertensive patient can in many instances be first diagnosed with HR at a routine visual check and only at that point referred to a general practitioner. Moreover, given the high prevalence of uncontrolled HTN, hypertensive subjects must be aware of the fact that successful treatment and a timely diagnosis of HTN reduce the risk of HR development and of vision disorders [6][7][8].
An ophthalmological consultation relies on the eye specialist's skill and is in many ways subjective. Along these lines, biochemical markers are needed as additional help in estimating the HR degree and guiding a subsequent correct treatment approach.
Oxidative stress (OS) and inflammation are regarded as prime causes of HR, but they remain under-examined. The retina has always been an attractive tissue for investigation, being a complex, highly metabolic organ that works entirely by aerobic respiration, consuming the highest amount of oxygen in comparison with any other tissue. As a consequence, reactive oxygen species (ROSs) are generated, such as superoxide (O 2 −• ), hydroxyl radical (•OH) and hydrogen peroxide (H 2 O 2 ) [9].
The pathogenetic role of only two markers of OS has been investigated in HR: serum gamma-glutamyl transferase (GGT) and serum ferritin levels. Both of these markers demonstrated a notable increase in their levels in parallel with HR progression and also displayed a positive correlation with the grade of HR [9,10].
Under physiological conditions, the harmful effects of ROSs can be kept under control by a series of antioxidant proteins. Superoxide dismutase (SOD), whose isoforms can be differentiated by their localization and metallic constituents, acts as a primary defense mechanism against free radicals by catalyzing the dismutation of superoxide into oxygen and hydrogen peroxide. Subsequently, catalase, glutathione peroxidase and glutathione reductase dispose of hydrogen peroxide, converting it to oxygen and water [11]. Because of these enzymes' action, the steady-state concentration of intracellular H 2 O 2 under physiological circumstances is kept in the range of 1-10 nM. Pathologically, some extra enzymatic sources of superoxide are activated, which contribute to the formation of an increased amount of H 2 O 2 and, in consequence, other oxidative species, such as hydroxyl radicals, hypochlorous acid or peroxynitrite, may be formed [12][13][14].
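For reference, the two detoxification steps referred to here correspond to the standard textbook reactions (not reproduced from the cited sources):

2 O 2 −• + 2 H + → O 2 + H 2 O 2 (superoxide dismutase)
2 H 2 O 2 → 2 H 2 O + O 2 (catalase)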
Catalase, the second main enzyme of antioxidant defense and considered the main regulator of hydrogen peroxide metabolism, is present in all aerobes and many aerotolerant anaerobes [15]. Recent studies have pointed out that catalase may be involved in other processes in the cell and is catalytically active in the absence of H 2 O 2 . In addition, it shows a low oxidase activity, which implies that it can catalyze the oxidation of some highly reductive substrates, such as benzidine, using molecular oxygen [16].
Catalase, in addition to peroxiredoxins and glutathione peroxidases, plays a pivotal role in maintaining a reduced steady-state H 2 O 2 concentration, which helps to keep cell homeostasis and adapt the cell to stress. It is commonly accepted that peroxiredoxins and glutathione peroxidases are essentially responsible for the removal of H 2 O 2 at low concentrations, although catalase is decisive at higher H 2 O 2 concentrations [17].
The stated mechanisms imply that both enzymes constitute an important defense against oxidative stress. There is insufficient information in the literature regarding SOD and catalase assessment in retinal pathologies. Meanwhile, there are no data regarding the role of these markers and their diagnostic value in HR.
The objective of the study was to evaluate the role of SOD and catalase in the pathogenesis of HR and establish their diagnostic value.
Study design
The study was approved by the Research Ethics Committee (12.19 [...]). GIII included 19 patients with the 3rd grade of HR, presenting the same findings as in GII plus flame-shaped hemorrhages, cotton-wool spots and hard exudates. Patients from the fourth group (the same as in GIII plus optic disc swelling) were not included in the research [2]. All the participants in the study signed an informed consent.
Patient selection
In the study, we enrolled hypertensive patients who came for a consultation at the Ovisus Medical Center in the period 2018-2019 and who, for the first time, were diagnosed with HR, confirmed after a detailed specific ophthalmological investigation: determination of visual acuity, autorefracto-keratometry, perimetry, anterior and fundus biomicroscopy, ultrasonography, tonometry, gonioscopy, optical coherence tomography (OCT) of the macular area and the papilla of the optic nerve.
Subjects who received antihypertensive drugs or any other drug that could compromise the results of the research were excluded from the study. Also excluded were patients with metabolic disorders such as diabetes and severe obesity, with renal and neurological pathologies, severe somatic comorbidities, antecedent ocular trauma, optic nerve atrophies of different genesis and associated ocular diseases: glaucoma, diabetic retinopathy, acute and chronic inflammatory processes, and uveitis.
Sample collection
Venous blood samples (5 ml) were collected and centrifuged, with subsequent separation of serum. Tear samples were collected from the tear lake inside the lateral conjunctival sac of the inferior fornix with microcapillary tubes. Serum and tear were dispensed into Eppendorf microtubes and frozen (-40ºC) until biochemical analysis.
Biochemical analysis
Serum and tear SOD levels were assessed using the Dubinina E. E. and Matyushin B. N. method in the modification of Gudumac V. et al. [18][19][20], based on the capacity of SOD to inhibit the reduction of nitro blue tetrazolium salt (NBT) in a system containing phenazine methosulfate and NADH. Following the reduction of NBT, blue-colored nitroformazan is formed, the intensity of whose coloration is proportional to the amount of reduced NBT. The degree of inhibition of this process depends on the SOD activity. Enzyme activity is reported in u/mL for both serum and tear samples.
Catalase activity in serum and tear was determined according to Koroliuk M. in the modification of Gudumac V. et al. [18], based on the property of the enzyme to catalyze the cleavage of H 2 O 2 to H 2 O and O 2 . Hydrogen peroxide forms a yellow compound with ammonium molybdate. During the reaction, as the H 2 O 2 decomposes, the mixture discolors. The degree of discoloration over a period correlates with the activity of the enzyme and is estimated spectrophotometrically. The results for catalase activity were expressed in μM/L.
Statistical analysis
The obtained data were processed using SPSS 23.0 software. Descriptive statistical methods were used to calculate the median and the lower and upper quartiles, Me (LQ, UQ), and the interquartile range (IQR). Kolmogorov-Smirnov and Shapiro-Wilk normality tests were used to analyse the data distribution. The homogeneity of variance was determined by Levene's test. The groups were compared using the non-parametric Kruskal-Wallis and Mann-Whitney tests. Correlation analysis was performed using Spearman's correlation test. A p<0.05 was considered statistically significant.
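For readers who prefer an open-source workflow, the same pipeline can be reproduced, for example, with SciPy; the sketch below is only an illustration with synthetic numbers (the group sizes match the study, but the values do not come from it) and is not the SPSS analysis used here.

import numpy as np
from scipy import stats

# Synthetic stand-ins for the SOD/catalase measurements; group sizes follow
# the study (GI n=36, GII n=35, GIII n=19) but the values are made up.
rng = np.random.default_rng(0)
g1 = rng.normal(10.0, 1.5, 36)   # GI
g2 = rng.normal(9.5, 1.5, 35)    # GII
g3 = rng.normal(9.0, 1.5, 19)    # GIII

# Normality and homogeneity of variance
print("Shapiro-Wilk GI p =", stats.shapiro(g1).pvalue)
print("Levene p =", stats.levene(g1, g2, g3).pvalue)

# Non-parametric comparison across the three groups, then pairwise
print("Kruskal-Wallis p =", stats.kruskal(g1, g2, g3).pvalue)
print("Mann-Whitney GI vs GIII p =", stats.mannwhitneyu(g1, g3).pvalue)

# Median and interquartile range, as reported in the paper
q1, med, q3 = np.percentile(g1, [25, 50, 75])
print(f"GI: Me = {med:.2f}, IQR = {q3 - q1:.2f}")

# Spearman correlation between enzyme activity and HR grade (1, 2, 3)
activity = np.concatenate([g1, g2, g3])
grade = np.concatenate([np.full(36, 1), np.full(35, 2), np.full(19, 3)])
rho, p = stats.spearmanr(activity, grade)
print(f"Spearman: r = {rho:.3f}, p = {p:.3f}")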
Results
Tear SOD activity was statistically significantly lower than serum SOD activity, by 25%, in all studied groups. A significant weak positive correlation was found between tear and serum SOD levels (r=0.336, p=0.001).
A statistically significant difference in SOD between groups was established in serum (p=0.035) and tear (p=0.027), the values decreasing in both cases as HR progressed (Table 1).
In both researched fluids, the SOD activity showed a significant weak negative correlation with the degree of HR (r=-0.246, p=0.019 in serum; r=-0.284, p=0.007 in tear).
Tear catalase activity was statistically significantly lower than serum activity, by 30%, in all studied groups (p=0.033). There were no differences in catalase content in serum (p>0.05) between the groups. We noted a trend toward increasing catalase activity in the serum of patients as HR progressed. The catalase level in GII (33.03 µM/L (IQR 10.81)) increased by 3% vs. GI (32.20 µM/L (IQR 11.30)), as did that in GIII (+3%; 34.23 µM/L (IQR 15.16)) vs. GII (33.03 µM/L (IQR 10.81)). Catalase activity in serum did not show a correlation with HR degree (r=0.143; p=0.177).
In the tear, the catalase values were [...]. No correlations were found between serum and tear catalase levels (r=0.125, p=0.239), while tear catalase showed a significant weak positive correlation with the degree of HR (r=0.261, p=0.013) (Table 2).
Discussion
HR is considered a multifaceted disorder associated with HTN. It is caused by a complex range of factors, from lifestyle choices to genetic predisposition. Aside from essential and secondary HTN, and because elevation of blood pressure alone does not fully reflect the extent of retinopathy, other factors also play a relevant role in the development of HR. Researchers have linked the signs of HR with biochemical markers of inflammation (an augmented high-sensitivity C-reactive protein level), endothelial dysfunction (a high von Willebrand factor (vWF) level), oxidative stress (increased serum ferritin and gamma-glutamyl transferase levels) and angiogenesis (a decreased adiponectin level and an elevated leptin level), as well as with low birth weight, high body mass index and even alcohol consumption [5-7, 9, 21-24].
The retinal vessels are the only blood vessels detectable on routine examination, so the presence of HR also indicates the vascular modifications occurring in other systems. However, ophthalmological interpretation cannot be translated into measurable indicators, and novel markers would substantially improve the diagnosis and guarantee a better stratification of patients into groups.
Tissue damage produced by OS, along with inadequate antioxidant defense, is regarded as a prime cause and the most plausible mechanism of HR development in HTN. Thus, evaluation of the antioxidant enzymes in patients with HR is essential for establishing the role of the oxidative stress/antioxidant defense imbalance in the pathogenesis of HR.
The retina has always been an attractive tissue for investigation, being a complex, highly metabolic organ that has 10 distinct layers of cells and works entirely by aerobic respiration, consuming the highest amount of oxygen in comparison with any other tissue. As a consequence, reactive oxygen species (ROSs) are generated, such as superoxide (O 2 −• ), hydroxyl radical (•OH) and hydrogen peroxide (H 2 O 2 ). What makes these molecules so harmful and reactive are the unpaired electrons in O 2 −• and •OH, which damage cell membranes and produce modification of amino acid residues and oxidation of sulfhydryl groups in proteins, breakage of peptide bonds, loss of metals in metalloproteins, depolymerization of nucleic acids and point mutations, and also atypically oxidize polysaccharides and polyunsaturated fatty acids. H 2 O 2 is less reactive, but it can interact with intracellular iron and other metal-containing molecules and, due to its capacity to easily pass the cell membrane, generate further •OH [15,25].
Retinal OS can be triggered both by endogenous and exogenous factors. The largest contribution of superoxide to the intracellular space is due to mitochondrial respiration. Hypoxia or any other imbalance in mitochondrial function, caused by HTN or leading to it, may disrupt oxidative phosphorylation and generate superoxide anions. Another source of superoxide is NADPH oxidase, which markedly reinforces the oxidant capacity of the retina in the extracellular space [26]. Also, due to radiation damage caused by light, a series of amino acids, such as tyrosine, histidine, cysteine and methionine, can generate oxidative intermediates. Some up-to-date studies have remarked that a respiratory mechanism in photoreceptor outer segments could contribute to extracellular ROSs. The rod outer segments of the retina that are phagocytized by retinal pigment epithelial (RPE) cells are highly susceptible to free radical damage by lipid peroxidation because of their high content of polyunsaturated fatty acids such as docosahexaenoic acid (DHA). Enhanced ROS levels are harmful and may lead to phototransduction impairment and disruption of the cellular function of the retina and the RPE. The antioxidative enzymes that will be discussed later, superoxide dismutase and catalase, minimize damage to the RPE [26].
ROSs also come from exogenous sources as a result of our lifestyle and environment. Contributing factors include pollution, alcohol, tobacco smoke, heavy metals, transition metals, industrial solvents, pesticides, certain drugs such as halothane and paracetamol, and radiation [26,27].
Regardless of how HTN evolves, ROSs remain a major and fundamental element in the pathogenesis of HR. Persistently elevated blood pressure generates an increase in ROSs and disturbs the fragile balance in the retina, favoring cytotoxicity and tissue damage, which are noticed via fundoscopy and allow the extent and progression of HR to be classified [26,27].
Our results have shown a decreased antioxidant activity of SOD and an increased activity of catalase, which might be interpreted as a cell defense mechanism against higher OS. Possibly, the decrease in SOD activity established by us conditioned the increase in ROS production and subsequently in hydrogen peroxide. As a result, the need to detoxify H 2 O 2 induced the increase in catalase activity. In this regard, we might stipulate that the impact of OS upon the onset and development of HR was demonstrated.
Over time, multiple types of SOD have been described, and all of them provide an essential defense system for the cells. SOD3, an extracellular SOD that can also be detected unbound in serum and intracellularly, catalyzes the dismutation of superoxide in the extracellular matrix and can be found in almost all tissues to varying degrees. It also has a regulatory role, influencing proliferation, survival and apoptosis by modulating membrane-bound receptors, such as receptor tyrosine kinases (RTKs), that are involved in the production of proangiogenic factors. In this situation, SOD3 acts like a protective shield, preventing the harmful effects of ROSs on these receptors. SOD1 and SOD2 are localized in the cytosol and mitochondria, respectively [11].
Some recent studies have underlined consistently reduced levels of SOD3 in diabetic retinopathy and of SOD2 in chronic renal failure and acute ischemic stroke [28][29][30]. At the same time, the reports of SOD activity evaluation in HTN are contradictory, yielding various results from increased to decreased levels [20,29]. On the other hand, increased catalase expression was determined in HTN and various types of cancer, and a polymorphism in the promoter region of catalase is associated with high blood pressure levels [12,13,31]. Catalase deficiency has been identified in diabetes, anemia, Wilson disease, bipolar disorder and schizophrenia [32].
Experimental evidence has highlighted that ROSs, and more specifically the superoxide anion (O 2 −• ), play a decisive role in the genesis of HTN through mechanisms that are not fully understood.
Usually, the generation of ROSs is tightly regulated and they are kept at low concentrations. They serve as signaling molecules that maintain the integrity of the vessels through their involvement in the modulation of endothelial function and the vascular contraction-relaxation balance.
Pathologically, an augmented amount of ROS induces endothelial dysfunction, which is considered to be an essential pathological mechanism in the evolution of HTN. Furthermore, elevated ROS levels stimulate vascular smooth muscle cell growth, increased contractility, invasion of monocytes, lipid peroxidation, inflammation and increased deposition of extracellular matrix proteins, all of which are crucial factors in hypertensive vascular damage and causes of the appearance of the clinical signs of HR [33][34][35]. Lob et al., in order to clarify the mechanism by which O 2 −• contributes to HTN, used Cre-lox technology to create mice with a targeted deletion of SOD3. The study results showed that SOD3 in the vascular smooth muscle, regardless of enhanced vascular O 2 −• levels, had no effect on blood pressure, either at baseline or in response to angiotensin II (AngII) stimulation, and also did not increase the inflammatory response to AngII. Also, the depletion of SOD3 in both the circumventricular organs (CVOs) and the vascular smooth muscle showed a similar effect to depletion in the CVOs alone.
These results underlined the fact that SOD3 in the CNS presumably has a more significant role in the modulation of blood pressure than SOD3 in the vasculature [28]. Gomez-Marcos et al. found in their study that HTN was linked with a decrease in serum SOD [36]. This decline suggests a deficit in antioxidant defense mechanisms, which would subsequently affect the ability of hypertensive patients to eliminate the circulating superoxide anion and determine an elevation in vascular damage induced by ROS. They concluded that a decreased level of SOD in serum is associated with enhanced vascular damage. This research is supported by the results of Kumar et al., who highlighted in their study that the concentrations of both catalase and glutathione peroxidase in red blood cells were low in uncontrolled hypertensives without reaching statistical significance, whereas there was a significant decrease in SOD concentration [37].
Contrary to previous results, Labios et al. observed an increase in SOD and catalase activities in leucocyte lysates from hypertensive patients. They proposed the idea of a 'vicious circle' between HTN and ROS, which could be explained by the fact that an elevation of ROS not only plays a crucial role in the development of HTN but can also be generated by HTN itself [13]. Thus, long-term HTN would ultimately produce irreversible endothelial dysfunction, mostly due to self-sustaining ROS production.
Only a few up-to-date studies have explicitly pointed out the involvement of OS in the development of HR. Up to now, only two markers of OS have been analyzed. Karaca et al. highlighted an elevated level of serum γ-glutamyl transferase (GGT) in HR; this enzyme has a pivotal role in glutathione homeostasis and is crucial in preserving sufficient concentrations of intracellular glutathione in order to defend the cells against oxidants [9]. The second study, conducted by Coban et al., emphasized that there is a correlation between HR and a high ferritin level in serum, which might be related to an augmented level of OS due to iron involvement in the Fenton reaction [10].
Our results are quite similar to those mentioned above. Moreover, as far as we know, our research underlined for the first time the decrease in serum and tear SOD levels and the augmented levels of serum and tear catalase in HR. The study pointed out that in hypertensive patients with HR, in both researched fluids, SOD activity showed a significant weak negative correlation with the degree of HR (r=-0.246, p=0.019 in serum; r=-0.284, p=0.007 in tear). Also, a significant weak positive correlation was found between serum and tear SOD levels (r=0.336, p=0.001). However, even though no correlation with serum catalase was found, tear catalase showed a significant weak positive correlation with the degree of HR (r=0.261, p=0.013). Hence, the low serum and tear SOD levels and high serum and tear catalase activity in HR, and their correlation with the severity of HR, imply that oxidative stress may contribute to the mechanism of HR progression. The correlation between serum SOD and tear SOD implies the need for a more precise interpretation of the decrease of this biomarker in both fluids, which might and should be interpreted in the context of the clinical manifestations of HR.
We must also be aware of the evidence that the autoregulation of the retinal circulation is disrupted once blood pressure (BP) increases. In addition, elevated BP by itself does not explicitly define the stage of HR [6,9,38,39]. It has even been reported that, despite persistently increased blood pressure, a resolution of the retinopathy was observed [40]. In our study, taking into consideration that the BP levels in all three groups were mostly similar while SOD levels progressively decreased and catalase activity increased gradually as HR changed, we can conclude that these enzymes can be used as markers for monitoring HR progression.
Conclusions
This study expands on previous research aimed at understanding how OS contributes to HR. Biochemical evidence suggests that oxidative damage of the retina is involved in the origin of HR. The article examined a number of issues, such as why retina research is essential, the relation of OS to HR, and the manner in which the retina defends itself against overloading of the antioxidant defense system.
A progressive significant decrease in both serum and tear SOD levels and a tendency toward increased catalase activity were identified as HR advanced. The increase in the severity of HR correlates with decreased SOD activity in tear and serum and with an increased catalase level in tear, but does not correlate with increased catalase in serum. The results indicate that each of the studied enzymes could be used as a marker of HR progression and as a predictor of extensive retinal damage, considering that they change concomitantly in both the eye and serum. Additional studies are necessary to definitively identify the role of oxidative stress and the antioxidant system in the development of HR and also to determine more precisely the threshold values of SOD and catalase activities that would permit their use for an easier stratification of patients into groups.
Limitations
This study has several limitations. SOD and catalase in our study are two markers whose changes cannot be fully explained solely by HTN, HR or both of them simultaneously. Moreover, our results cannot clarify whether the decreased SOD levels and increased catalase activity precede the development of retinopathy or, on the contrary, are a consequence of it. Also, it is quite challenging to follow the long-term natural history of HTN, because antihypertensive medication is often administered to prevent cardiovascular disease. In such cases, the true effect of different factors on the natural modulation of blood pressure can be masked by the effects of antihypertensive drugs. Nonetheless, our study was conducted on patients who were not taking any antihypertensive treatment at that moment or any other treatment that could compromise the results.
Finally, this research was carried out at a single hospital. We must be cautious about generalizations made across different hospitals and countries.
Figure 1. Serum and tear levels of SOD and catalase in patients with different grades of HR.
| 2021-10-16T15:11:19.536Z | 2021-09-30T00:00:00.000 | {
"year": 2021,
"sha1": "4c2953c5b3db04a3607e0958eada160842109d17",
"oa_license": "CCBYNC",
"oa_url": "https://romj.org/files/pdf/2021/romj-2021-0305.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e74050e890639995491da3c3bdbc8e481de597ce",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
271224226 | pes2o/s2orc | v3-fos-license | Excess Dally-like Induces Malformation of Drosophila Legs
Glypicans are closely associated with organ development and tumorigenesis in animals. Dally-like (Dlp), a membrane-bound glypican, plays pivotal roles in various biological processes in Drosophila. In this study, we observed that an excess of Dlp led to the malformation of legs, particularly affecting the distal part. Accordingly, the leg disc was shrunken and frequently exhibited aberrant morphology. In addition, elevated Dlp levels induced ectopic cell death with no apparent cell proliferation changes. Furthermore, Dlp overexpression in the posterior compartment significantly altered Wingless (Wg) distribution. We observed a marked expansion of Wg distribution within the posterior compartment, accompanied by a corresponding decrease in the anterior compartment. It appears that excess Dlp guides Wg to diffuse to cells with higher Dlp levels. In addition, the distal-less (dll) gene, which is crucial for leg patterning, was up-regulated significantly. Notably, dachshund (dac) and homothorax (hth) expression, also essential for leg patterning and development, only appeared to be negligibly affected. Based on these findings, we speculate that excess Dlp may contribute to malformations of the distal leg region of Drosophila, possibly through its influence on Wg distribution, dll expression and induced cell death. Our research advances the understanding of Dlp function in Drosophila leg development.
Introduction
The regulation of organ development is a highly intricate process that involves the orchestration of multiple signaling pathways.The subtle interplay among these pathways is critical for organ size determination and pattern formation.Heparan sulfate proteoglycans (HSPGs) have emerged as key regulators of these signaling cascades [1,2].HSPGs are a group of macromolecules present on the cell surface and in the extracellular matrix [3][4][5].They are comprised of a core protein and covalently attached heparan sulfate glycosaminoglycan (GAG) chains [6,7].HSPGs play a pivotal role in cell signaling and morphogen gradient formation [8].They serve as co-receptors, modulating the threshold and duration of numerous signal transductions [9].
Among the HSPGs, glypicans are cell surface proteoglycans bound to the cell membrane via a glycosylphosphatidylinositol (GPI) anchor [10]. In vertebrates, six glypicans are recognized [11], while in Drosophila there are two types of glypicans [12], Dally and Dlp. Of these, Dally is associated with GPC3 and GPC5, while Dlp corresponds to GPC1, GPC2, GPC4 and GPC6. These unique glypican molecules do not possess transmembrane domains [13]. Instead, they attach to cell surfaces through GPI anchors, facilitating interactions with secreted ligands. Dally and Dlp are implicated in regulating the concentration gradient formation of morphogens, including Wg and Hedgehog (Hh) [14][15][16]. Dlp has opposite regulating effects on tissues expressing low and high levels of Wingless: Dlp reduces high-level Wg activity but enhances low-level Wg signaling [17]. In addition, Dlp is also involved in Hh reception by acting with the Hh receptor Patched (Ptc) [18]. Dlp also mediates the feedback control of the interdependence between Hh and Wnt signaling during GSC (Germline Stem Cell) progeny differentiation, highlighting its crucial role in coordinating the crosstalk between these signaling pathways [19].
The Drosophila leg provides a good model for investigating the molecular and cellular mechanisms of signaling and their outcome during organogenesis. Legs arise from specific ectodermal cells that are genetically specified during embryogenesis, giving rise to imaginal discs which eventually develop into adult appendages [20,21]. The adult leg consists of distinct segments along the proximal-distal (P-D) axis, including the coxa, trochanter, femur, tibia and five tarsal segments [22]. Establishing proper subdomains within the leg disc is critical for the formation of the corresponding well-structured segments in the adult leg. The correct patterning of the leg disc is tightly organized by the three morphogens Hh, Wg and decapentaplegic (Dpp). Hh is expressed in the posterior compartment of the leg disc and diffuses to the anterior compartment as a short-range signal [23]. It directly activates the expression of Wg and Dpp. Dpp is expressed in a stripe abutting the anterior-posterior compartment boundary, with higher expression in the dorsal region and lower expression in the ventral region [24]. Wg is expressed in a wedge-shaped pattern in the ventral-anterior region [25]. Both Wg and Dpp are required for defining the target genes' spatial domains along the P-D axis and D-V axis [21]. Interestingly, Wg and Dpp act antagonistically to provide positional information for the establishment of the D-V axis, while they act cooperatively to direct P-D axis formation [26,27].
Homothorax (Hth), Distal-less (Dll) and Dachshund (Dac) are the vital target genes involved in leg formation [23].The early leg disc is patterned by the opposing activities of Homothorax (Hth) and Distal-less (Dll) along the P-D axis.These two factors define the proximal and distal domains of the leg, respectively [26].Dachshund (Dac) is activated later in an intermediate domain of the leg disc, which corresponds to the development of medial adult leg segment formation including the first tarsal segment, the tibia and the presumptive femur [28].During the mid-third larval instar, the expression domains of Dll and Dac overlap in a specific region.Both genes are induced by the combined activities of the Wg and Dpp signaling pathways.However, Dll and Dac are further regulated by Brk, which acts as a repressor for both [28].Dpp and Wg signaling act antagonistically to repress Hth expression.This combined repression restricts Hth localization to the most proximal region of the leg disc [26].
In this study, we found that overexpression of Dlp in Drosophila legs led to a malformation of legs.Excess Dlp results in obvious cell death without apparent cell proliferation alteration.Additionally, Wg distribution was aberrant, and the expression of its downstream gene dll was also mis-regulated.Our study provides valuable insight into the role of Dlp in the development of Drosophila legs.
Measurements and Data Statistical Analysis
To avoid the lethal effect of Dlp mis-expression at the early developmental stage, the Gal4 driver was combined with tub-Gal80 ts . The F1 generation was incubated at 18 °C before the second instar. Then, the larvae were transferred to 30 °C to induce the overexpression of Dlp or RNAi against Dlp for 2 or 3 days. Subsequently, the flies were shifted to 18 °C to reach the adult stage. Adult flies or legs were dissected and imaged using the Multifocus Imaging System of a microscope (MV PLAPO 1X, Olympus, Tokyo, Japan). Images were analyzed with Fiji software, version 2.15.1.
For the statistical analysis of the distal part of adult legs, we measured the tarsal segment lengths of hindlegs. The results were analyzed by one-way ANOVA with Tukey's test. For the quantification of cell proliferation in leg discs, the PH3-positive puncta density within the P compartment was calculated as the ratio of positive puncta to the P compartment size. Similarly, for Caspase-3 staining, the number of positive puncta was counted in both the P compartment and the central region of the leg disc. For the quantification of the Wg distribution and wg-lacZ domains, the size of the apparent locations was measured with Fiji software. Quantification of dll expression was achieved by measuring the fluorescence intensity of the regions of apparently increased dll-lacZ expression in the P compartment and adjacent areas. Histograms were plotted and statistical analysis was conducted using GraphPad Prism 8.0. All two-mean comparisons were made using Student's t-test.
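For readers reproducing this kind of quantification outside GraphPad Prism, the sketch below illustrates the same statistical steps (one-way ANOVA with Tukey's test, puncta-density ratios and Student's t-test) in Python with SciPy and statsmodels; all numbers are synthetic placeholders, not measurements from this study.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)

# Tarsal segment lengths (a.u.) for three genotypes: one-way ANOVA + Tukey
control = rng.normal(1.00, 0.05, 12)
dlp_oe = rng.normal(0.70, 0.08, 12)      # en-ts > dlp (values invented)
dlp_rnai = rng.normal(0.98, 0.05, 12)    # en-ts > dlp-RNAi (values invented)
print("one-way ANOVA p =", stats.f_oneway(control, dlp_oe, dlp_rnai).pvalue)
lengths = np.concatenate([control, dlp_oe, dlp_rnai])
groups = ["control"] * 12 + ["dlp_OE"] * 12 + ["dlp_RNAi"] * 12
print(pairwise_tukeyhsd(lengths, groups))

# PH3-positive puncta density = puncta count / P-compartment area, then t-test
def puncta_density(counts, areas):
    return np.asarray(counts) / np.asarray(areas)

dens_ctrl = puncta_density([32, 28, 30, 35], [1.00, 0.95, 1.05, 1.10])
dens_oe = puncta_density([20, 22, 19, 24], [0.70, 0.65, 0.72, 0.80])
t, p = stats.ttest_ind(dens_ctrl, dens_oe)
print(f"Student's t-test on PH3 density: t = {t:.2f}, p = {p:.3f}")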
Extra Dlp Expression Induces Leg Deformities
Dlp regulates multiple aspects of biological processes, including signal trafficking and morphogen transport. Dysregulation of Dlp expression may lead to abnormal organ development. To investigate the impact of dlp expression levels on leg development, we manipulated Dlp levels in Drosophila legs using the Gal4/UAS system. Under the control of the en-Gal4 driver, which is expressed in the posterior compartment of the leg disc [28], we modulated the expression levels of dlp within this region. To avoid the lethal effect of abnormal dlp expression, we applied tub-Gal80 ts to manipulate the Gal4 activity. At 18 °C, the Gal4 activity was repressed, while it was de-repressed at a high temperature (30 °C). The 2nd-instar larvae carrying tub-Gal80 ts , en > dlp (en ts > dlp) were cultured at 30 °C for 3 days and then transferred to 18 °C until the adult stage. This resulted in consistent malformation of legs (Figure 1C). Compared to normal legs (Figure 1A), the tarsal segments were abbreviated and fused significantly (Figure 1D), while other segments still appeared normal (Figure 1C). Occasionally, some legs were misplaced or totally missing (Figure S1). Conversely, we found that repression of dlp produced few signs of leg deformities (Figure 1B) and no mis-location phenotype. This suggests that up-regulation rather than down-regulation of dlp induces malformed or mis-located legs.
To validate the manipulation of Dlp expression, we conducted anti-Dlp staining to assess Dlp levels. The analysis revealed an up-regulation of Dlp expression in en ts > dlp leg discs, while Dlp expression was effectively inhibited in en ts > dlp-RNAi leg discs (Figure 1E). This indicates that the stocks we used are efficient in manipulating dlp expression levels. The statistical analysis showed that the posterior compartment of en ts > dlp leg discs was also significantly reduced (Figure 1F).
Excess Dlp Induces Morphology Modification of Leg Disc with No Apparent Cell Proliferation Alteration
In consideration of the visible effects of overexpression rather than repression of Dlp, we focused our investigation on the en ts > dlp leg discs in the following study. Next, we utilized rhodamine-phalloidin staining to visualize F-actin within the leg discs, providing insights into the morphology of leg discs. In line with the observed deformities in the adult leg structure, the leg discs of en ts > dlp flies also exhibited abnormal shapes (Figure 2A). This correlation suggested that elevated Dlp expression leads to leg morphology modification even at early developmental stages.
The reduced P compartment might be caused by abnormal cell proliferation. Next, we assessed the cell proliferation rate using anti-PH3 staining. It revealed that there was no apparent alteration of cell proliferation in the P compartment of en ts > dlp discs; this was established by counting the PH3-positive puncta (Figure 2C). Considering that the shrunken P compartment might influence accurate quantification, we also examined leg discs expressing Dlp for 2 days, which exhibited a lesser reduction in the P compartment and fewer deformities (Figure S2A,B). Notably, this experiment also demonstrated no apparent alteration of cell proliferation in the P compartment (Figure S2C). These findings suggest that the leg deformities might not result from a cell proliferation alteration.
Excess Dlp Induces Apoptotic Cell Death
Cell death plays a pivotal role in tissue homeostasis and organ development. Abnormal signaling frequently triggers cell apoptosis. We wondered whether the malformed leg morphology could be attributed to cell death elicited by the overexpression of Dlp. In control leg discs, no noticeable apoptosis was detected within the discs (Figure 3A). We observed apparent cell death in the posterior compartment of the en ts > dlp discs, as labeled by anti-Caspase-3 staining. Additionally, non-autonomous cell death also emerged in the anterior compartment (Figure 3B). This implied that the defect caused by excess Dlp expression may be mediated by cell death.
To further investigate whether the Dlp-induced leg deformities were caused by cell apoptosis, we co-expressed P35, a known cell death inhibitor, to block cell death.Caspase-3 detection showed that p35 efficiently repressed cell death, while non-autonomous cell death was still present outside the Gal4 region (Figure 3C,E).Furthermore, it was observed that the size of the P compartment was partially rescued (Figure 3D).However, most of the en ts > dlp + p35 flies failed to develop into the adult stage.Thus, we cannot directly assay the rescue effect.
Extra Dlp Induces Wg Mis-Distribution
Wg is typically expressed in the ventral half of the leg disc (Figure 4A) and plays a pivotal role in leg development. Aberrant Wg signaling is frequently linked to the malformation of organs [29,30]. Previous studies have indicated that cells mutant for Dlp disrupted the Wg distribution and affected Wg diffusion in wing discs [14]. Furthermore, it has been shown that Dlp exhibits contrasting effects at low and high levels of Wg, promoting low-level Wg activity while inhibiting high-level Wg activity. To investigate whether the leg deformities were due to mis-regulated Wg signaling, we performed double staining with the Wg antibody and wg-lacZ in the leg disc. In line with a previous study [27], Wg is predominantly distributed in the anterior compartment and has only a small distribution region in the P compartment of the normal leg disc, and wg-lacZ was also restricted to a wedge-shaped region in the anterior compartment, roughly overlapping with the distribution area of Wg (Figure 4A). Intriguingly, excess Dlp expanded the posterior distribution domain of Wg, accompanied by a reduction in the anterior Wg distribution region (Figure 4B,C). Our findings also revealed that the wg-lacZ expression domains were notably expanded (Figure 4B,D), which is likely regulated by a feedback loop mechanism. These observations collectively suggest that excess Dlp alters the spatial distribution of Wg in the leg disc.
Excess Dlp Causes Mis-Expression of Leg Patterning Gene Dll
Drosophila legs consist of multiple segments, including the coxa, trochanter, femur, tibia and tarsal segments, along the P-D axis. Leg segmentation is controlled by several key regulators, including hth, dac and dll. dll is expressed in the central domain of the leg disc and directs the patterning of the future distal tip of the leg, whereas dac is expressed in an intermediate ring of the leg disc where it partially overlaps with the dll domain; it gives rise to the presumptive more proximal leg structures such as the femur and tibia. Additionally, hth is expressed in a peripheral ring of the leg disc, which corresponds to the most proximal part of adult legs. As mentioned above, Wg signaling positively regulates dll and dac expression [31]. Next, we examined whether the excess Dlp affected the expression patterns of these genes by employing a lacZ reporter or antibody to monitor the expression of these factors. Our results revealed that excess Dlp led to the up-regulation of dll expression in the posterior region. Furthermore, non-autonomous up-regulation of dll expression was observed in the ventral domain adjacent to the en-Gal4 region (Figure 5C). To confirm this result, we also examined different focal planes of the leg disc and observed an evident elevation of dll expression (Figure S3). Additionally, the expression patterns of dac and hth in this context appeared relatively normal (Figure 5A,B). A previous study demonstrated that dll overexpression in the leg disc induces abbreviated and fused distal segments of adult legs [32], resembling the adult leg phenotype observed in our study. Thus, we deduce that the observed leg malformations might be related to abnormal dll expression induced by excess Dlp, potentially associated with alterations in the Wg distribution.
Discussion
HSPGs are macromolecules widely distributed in the extracellular matrix of cells. They play a crucial role in various biological processes, including the maintenance of tissue homeostasis and involvement in tumorigenesis [10]. HSPGs have been implicated in regulating the growth and patterning of numerous tissues and organ systems. Mis-expression of these molecules may lead to deformities in developing organs [8]. On the other hand, the mechanisms underlying organ development and patterning in the chordate and arthropod phyla share numerous similarities. Drosophila legs provide an excellent model to investigate the mechanisms governing organ formation. In this study, we observed that overexpression of Dlp, one of the Drosophila HSPGs, in the posterior compartment of leg discs under the control of the en-Gal4 driver resulted in the malformation of adult legs (Figure 1A,B). Similar results were obtained with the anterior Gal4 driver, dpp-Gal4. Considering the relatively large domain of en-Gal4, we focused on manipulating Dlp expression specifically in the P compartment using en-Gal4.
Cell Death Is the Potential Factor Inducing Deformities of Leg Discs
The reduction in the P compartment size may be caused by the activation of cell apoptosis or hindered cell proliferation. Disordered cell signaling frequently leads to cell death in various developing tissues. Our study revealed that excess Dlp induced a significant increase in cell death both autonomously and non-autonomously (Figure 3A,B). Why does the excess Dlp induce cell death in the center of leg discs non-autonomously? Based on some studies, apoptotic cells in one compartment of the Drosophila imaginal disc release long-range death factors such as Eiger, which induces apoptosis in an adjacent compartment [33]. This could be a potential reason. Alternatively, some reports have indicated that a reduction in Wg levels also promotes cell apoptosis [34]. It is plausible that an abnormal distribution of Wg triggered by excess Dlp reduces the Wg signals received by cells in the central region of the leg disc. Alternatively, cells situated in the center might display increased sensitivity to changes in Wg signaling, making them more prone to apoptosis when Wg levels are reduced. Furthermore, co-expression of p35 partially rescued the reduction in the posterior compartment. Therefore, we hypothesize that cell death plays an important role in the induction of leg deformities.
Excess Dlp Results in Abnormal Distribution of Wg
Wingless (Wg) is a crucial morphogen guiding organ patterning [29,30]. Previous studies have suggested that Dlp is necessary for Wg movement and gradient formation in the wing disc. Additionally, Dlp is considered a co-factor in Wg signaling. It has been demonstrated that Dlp exerts opposing effects on Wg activity in domains with low and high levels of Wg: in cells with low levels of Wingless activity, Dlp acts as a positive co-factor to enhance Wg activity, whereas Dlp reduces Wg activity in cells with high levels of Wingless activity [17]. A previous study demonstrated that, aside from the Wg source region in the A compartment, Wg is also distributed in a small area within the P compartment [28]. In addition, it has been shown that overexpression of Dlp results in an accumulation of extracellular Wg in the wing disc [35]. Our findings indicate that an excess of Dlp leads to the expansion of Wg distribution within the P compartment, whereas the region of Wg diffusion within the A compartment is notably reduced (Figure 4A,B,D). This implies that cells with higher Dlp recruit soluble Wg competitively. Dlp consists of a core protein with several attached HS GAG chains. It has been shown that the Dlp core protein exhibits biphasic activity similar to that of wild-type Dlp, and the attached GAG chains appear to enhance the interaction between the Dlp core protein and Wg [36]. Therefore, we posit that excess Dlp provokes alterations in Wg distribution primarily through its core protein. Moreover, the domain of wg-lacZ in its source region is enlarged noticeably in a non-autonomous manner. This suggests the existence of an unknown mechanism regulating wg-lacZ expression, potentially involving a feedback loop. Drosophila legs consist of several segments along the proximal-distal (P-D) axis. In our study, conspicuous deformities were consistently observed in the distal part of adult legs. Interestingly, the repression of Dlp exhibited minimal indications of abnormal leg patterning (Figure 1A-D). The key patterning factors hth, dac and dll have been identified as pivotal in mediating the development of different leg segments [37]. Specifically, hth has been shown to govern the proximal region [26,38], dac the medial region [28] and dll the distal part of legs [37]. Previous research has highlighted the involvement of Wg signaling in activating the expression of Dac and Dll [28], whose expression displays varying sensitivities to Wg signaling levels [39]. Our findings demonstrated elevated dll expression in leg discs, while dac and hth exhibited no significant modifications (Figure 5). It has been shown that overexpression of dll in leg discs causes abbreviated and fused distal segments of legs [26], which is very similar to the legs observed in our study, providing support for this conclusion. Consequently, we suggest that the impact of excess Dlp on leg development involves the modulation of key patterning factors of legs.
The Limitations of the Study
It is essential to acknowledge the limitations of this study, such as the specific focus on Dlp and the need for further exploration of the other glypican, dally. Our study also revealed that the wg-lacZ domain in its source region was enlarged non-autonomously (Figure 4A,B,D); the underlying regulatory mechanism remains elusive. In addition, our study revealed missing and mis-localized legs, suggesting a potential implication of HSPGs in organ positioning. Abnormal Wnt signaling can lead to a variety of limb defects, including missing limbs or ectrodactyly (split-hand and split-foot malformations) [40,41]. In mice, mutation of glypican-3 leads to defects such as polydactyly [42]. The observation of missing or mis-localized legs could serve as a valuable model for studying the etiology of finger and arm misalignment. Subsequent research endeavors could delve into the underlying molecular mechanisms and explore potential interactions with other regulatory factors. This knowledge may extend beyond Drosophila, offering valuable insights into the broader field of developmental biology.
Conclusions
This study sheds light on the important role of Dlp signaling in regulating Drosophila leg development. We demonstrate that excess Dlp induces a notable shrinkage of the adult leg distal part, accompanied by evident cell death but no apparent alterations in cell proliferation. Furthermore, our findings reveal an impact on Wg distribution and elevated Dll expression (Figure 6). Given that Dll directly guides the patterning of the leg's distal part and Wg tightly regulates Dll activity, we propose that the alterations of these two signals, as well as the induced cell death, are the potential factors responsible for the observed leg malformations. Our findings enrich our comprehension of the regulatory networks underlying Drosophila leg development and facilitate future exploration into the complex molecular mechanisms controlling organ pattern formation.
Figure 1.
Figure 1. Overexpression but not repression of Dlp-induced defects of Drosophila legs. (A). Normal adult leg segmentation in control en ts > GFP flies. (B). Adult legs of en ts > dlp-RNAi flies show no visible defects. (C). Overexpression of Dlp in the en ts > dlp leg disc results in a characteristic shrinkage of the distal portion of the adult leg. The scale bar is 0.3 mm. (D). Statistical analysis of distal part length of adult legs shows the tarsal segments of adult legs are abbreviated significantly in en ts > dlp flies, while they are normal in en ts > dlp-RNAi flies (mean ± SEM; en ts > GFP, n = 22; en ts > dlp-RNAi, n = 20; en ts > dlp, n = 23). Bars with different letters indicate significant statistical differences between the groups (p < 0.01). (E). Anti-Dlp staining (red) shows that Dlp is roughly uniform in the control en ts > GFP leg discs; Dlp is effectively repressed in the P compartment of en ts > dlp-RNAi leg discs; Dlp is up-regulated in en ts > dlp leg discs. The scale bar is 50 µm. (F). Statistical analysis of the proportion of the P compartment area to the whole leg disc (mean ± SEM; en ts > GFP, n = 16; en ts > dlp, n = 16). The white dashed lines indicate the A-P compartment boundary. Asterisks indicate significant differences (p < 0.01).
Figure 2.
Figure 2. Excess Dlp induced shrinkage of leg discs with no apparent cell proliferation alteration. (A). Phalloidin staining reveals normal leg disc morphology in the control group (en ts > GFP), while overexpression of Dlp results in morphological deformities in the P compartment. (B). Staining with anti-PH3 reveals no apparent change in cell proliferation rate between the control and Dlp-overexpressing leg discs. (C). Statistical analysis of the PH3-positive puncta density (mean ± SEM; en ts > GFP, n = 22; en ts > dlp, n = 24). It reveals no apparent change in the cell proliferation rate between the control and Dlp-overexpressing leg discs. The white dashed lines indicate the A-P compartment boundary. ns means no statistically significant differences. The scale bar is 50 µm.
Figure 3.
Figure 3. Dlp overexpression induces cell apoptosis both autonomously and non-autonomously. (A). No apparent cell death is detected in the control leg disc. (B). Overexpression of Dlp induces marked cell apoptosis within the P compartment (autonomous) and in regions outside of Gal4 expression (non-autonomous). (C). Expression of p35 blocks the cell apoptosis in the P compartment, while non-autonomous cell death still persists in the central region of the leg disc. (D). The size of the P compartment is partially rescued by co-expression of p35 (mean ± SEM; en ts > GFP, n = 16; en ts > dlp, n = 16; en ts > dlp + p35, n = 16). (E). Cell death in the P compartment is inhibited totally by p35, while the non-autonomous cell death in the central region is still severe (mean ± SEM; en ts > GFP, n = 23; en ts > dlp, n = 25; en ts > dlp + p35, n = 16). The white dashed lines indicate the A-P compartment boundary. Asterisks indicate significant differences (p < 0.01). The scale bar is 50 µm.
Figure 4.
Figure 4. Excess Dlp changes Wg distribution in the leg disc. (A). In the control leg disc, the expression of wg-lacZ and the majority of Wg distribution are confined to a wedge-shaped region within the A compartment. (B). Overexpression of Dlp expands the Wg distribution domain within the P compartment, while concurrently causing a reduction in the distribution domain within the A compartment. The wg-lacZ expression domain is enlarged. The white dashed lines indicate the A-P compartment boundary. (C). In the en ts > dlp leg disc, the Wg distribution domain in the P compartment is expanded; it is decreased in the A compartment (mean ± SEM; en ts > GFP, n = 24; en ts > dlp, n = 22). (D). The wg-lacZ expression domain is enlarged in the en ts > dlp leg disc (mean ± SEM; en ts > GFP, n = 24; en ts > dlp, n = 21). Asterisks indicate significant differences (p < 0.01). The scale bar is 50 µm.
Figure 5.
Figure 5. Excess Dlp causes mis-expression of dll but not notable alteration of hth and dac. (A). Excess Dlp has no apparent effect on the hth expression pattern. (B). Excess Dlp has no apparent effect on the Dac expression pattern. (C). Excess Dlp causes up-regulated dll in the edge region of the P compartment and the region adjacent to the en-Gal4 domain. (D). The statistical analysis shows that the dll expression is elevated significantly (mean ± SEM; en ts > GFP, n = 23; en ts > dlp, n = 21). The white dashed lines indicate the A-P compartment boundary. Asterisks indicate significant differences (p < 0.01). The scale bar is 50 µm.
4.3. Compared to Dac and Hth, Dll Is More Responsive to the Excess Dlp | 2024-07-17T15:11:27.829Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "d19111a2b77ff1e79748af946792999459a8699e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/cells13141199",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bedb3be8afca5f10daa9bd0009d9edfc9f0e9fbe",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
43431065 | pes2o/s2orc | v3-fos-license | Effect of Molecular Weight and Molecular Distribution on Skin Structure and Shear Strength Distribution near the Surface of Thin-Wall Injection Molded Polypropylene
In this study, the relationship between the skin structure and the shear strength distribution of thin-wall injection molded polypropylene (PP) with different molecular weights and molecular weight distributions was investigated. The skin-core structure, cross-sectional morphology, crystallinity, crystal orientation, crystal morphology and molecular orientation were evaluated by using a polarized optical microscope, a differential scanning calorimeter, an X-ray spectroscopic analyzer and laser Raman spectroscopy, respectively, while the shear strength distribution was investigated using a micro-cutting method called SAICAS (Surface And Interfacial Cutting Analysis System). The results indicated that differences in molecular weight and molecular weight distribution produced distinct skin layer thicknesses. In particular, the high molecular weight samples showed thicker lamellar-oriented and molecular-oriented layers than the low molecular weight samples. In addition, the wide molecular weight distribution sample showed a larger crystal orientation layer.
Introduction
Injection molding is one of the most widely used methods for making plastic products. Parts made by this technique are used in many applications, such as transportation equipment, electric appliances, and household supplies. Recently, light-weighting has become very important because weight saving leads to lower carbon dioxide emissions as well as greater convenience and economy. One of the simplest ways to achieve light-weighting is thin-wall injection molding. However, the resin is rapidly cooled by the mold, so that the skin and core layers of the molded article form different structures, and these structures strongly influence the bulk properties [1]-[6]. This heterogeneous structure, called the skin-core structure, is formed even in thin-wall parts, so it is very important to understand the relationship between the internal structure and the properties. In recent studies, the effect of a β-phase crystalline nucleating agent added to polypropylene (PP) [7] and the influence of ultra-high speed injection molding [8] were examined to reveal the relationship between the non-uniform structure and the properties of thin-wall injection molded products. Moreover, this non-uniform structure-property relationship has also been investigated in composite materials [9] and non-olefin resin based materials [10]. The effects of molecular weight and molecular weight distribution on the structure-property relationship of thick-wall materials have been well studied [11]-[13]; in contrast, they have not been reported for thin-wall injection molded products. In addition, although there are many reports on injection molded products under various conditions, most of them focused only on the internal structure or on the bulk properties. In particular, in property measurements, most researchers focused on the properties of the entire cross-section of the injection molded product. However, the property distribution, which is strongly related to the skin-core structure, has hardly been studied, except for the hardness and scratch resistance of the surface [14]-[18]. In this paper, the relationship between the skin-core structure and the property distribution of thin-wall injection molded polypropylene with different molecular weights and molecular weight distributions was investigated. The skin-core structures were characterized in terms of morphology, crystallinity, crystal phase, and lamellar and molecular orientation by using an optical microscope, a differential scanning calorimeter, laser Raman spectroscopy, small angle X-ray scattering and wide angle X-ray diffraction. The distribution of shear strength was measured with a micro-cutting machine.
Materials & Injection Molding Condition
In this study, 4 different homo-PP resins were used. Their molecular characteristics are summarized in Table 1. These resins had different molecular weights and molecular weight distributions. In Table 1, the four PP resins are designated by two characters: H and L denote higher and lower molecular weight, respectively, and N, M and W denote narrow, middle and wide molecular weight distribution, respectively. Thin-wall specimens were prepared by injection molding. The resin temperature and mold temperature were controlled at 240˚C and 40˚C, and the injection speed and pressure were set to 100 mm/s and 30 MPa. Figure 1 shows a schematic drawing of the specimen. The specimen dimensions are 100 × 20 × 1 mm.
POM Observation
To observe the skin-core morphology, polarized optical microscope (POM) observations were conducted using a BX-51 microscope (Olympus Corp.) with a 530 nm sensitive color plate. The observed cross section was the FD-ND cross section (FD: flow direction, ND: normal direction). The samples were thin-sectioned to a thickness of approximately 50 μm. The thin sections were observed under crossed nicols, rotated by −45˚ with respect to the beam line.
Differential Scanning Calorimeter (DSC)
Crystallinity was measured by using DSC (PerkinElmer, Inc., Type: DSC2920). The samples were sliced into 50 μm films from the surface to the core layer over a depth of 250 μm to investigate the difference in crystallinity between the surface and the core layers. The samples were heated at a scanning rate of 10˚C/min from 30˚C to 200˚C under a nitrogen atmosphere. Crystallinity was calculated using Equation (1), taking the crystal melt enthalpy of PP as ΔHm = 209 J/g [19]:

Crystallinity (%) = (ΔH/ΔHm) × 100 (1)

where ΔH is the measured heat of fusion of the sample and ΔHm is the crystal heat of fusion value of PP.
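For illustration, the following minimal Python sketch evaluates Equation (1) for a single DSC measurement; the measured enthalpy value is a hypothetical example, and only the reference value of 209 J/g for fully crystalline PP comes from the text.

```python
# Illustrative crystallinity calculation from DSC data (Equation (1)).
# The measured enthalpy below is a made-up example value; only the
# reference enthalpy of 209 J/g for fully crystalline PP is from the text.

DELTA_H_M_PP = 209.0  # J/g, crystal melt enthalpy of fully crystalline PP

def crystallinity_percent(delta_h_sample: float, delta_h_ref: float = DELTA_H_M_PP) -> float:
    """Return crystallinity in percent from the sample's measured heat of fusion (J/g)."""
    return delta_h_sample / delta_h_ref * 100.0

if __name__ == "__main__":
    measured = 95.0  # J/g, hypothetical DSC integration for one 50-um slice
    print(f"Crystallinity: {crystallinity_percent(measured):.1f} %")
```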
Small Angle X-Ray Scattering (SAXS) and Wide Angle X-Ray Diffraction (WAXD)
To investigate the difference in crystal morphology, the lamellar orientation was measured by using SAXS (Rigaku Corp., Model: MicroMax-007HF) and the β crystalline fraction was measured by using WAXD (Rigaku Corp., Model: 55R4206). The samples were prepared by the same method as the DSC film samples. The film thickness was 50 μm, sliced from the surface to the core over a depth of 500 μm for SAXS and 250 μm for WAXD. The X-ray beam passed through the film samples in the TD direction. The lamellar orientation from SAXS and the crystalline orientation from WAXD were calculated using Equation (2) [20]:

Crystalline orientation = (180 − FWHM)/180 (2)

where FWHM is the full width at half maximum of the peak on azimuthal analysis of the 2D-SAXS and WAXD patterns. The β crystalline fraction K from WAXD was calculated by using Equation (3) [21].
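As a worked illustration of Equation (2), the short Python sketch below converts an azimuthal FWHM into an orientation degree; the FWHM values are hypothetical examples rather than data from this study.

```python
# Degree of orientation from the azimuthal FWHM of a 2D-SAXS/WAXD peak,
# following the (180 - FWHM)/180 form of Equation (2) as reconstructed above.
# The FWHM values are hypothetical examples, not data from the paper.

def orientation_degree(fwhm_deg: float) -> float:
    """Return the orientation degree for an azimuthal FWHM given in degrees."""
    if not 0.0 <= fwhm_deg <= 180.0:
        raise ValueError("FWHM must be between 0 and 180 degrees")
    return (180.0 - fwhm_deg) / 180.0

if __name__ == "__main__":
    for fwhm in (20.0, 60.0, 120.0):  # sharper azimuthal peak -> higher orientation
        print(f"FWHM = {fwhm:5.1f} deg -> orientation = {orientation_degree(fwhm):.2f}")
```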
Laser Raman Spectroscopy
To measure the molecular orientation along the FD direction, laser Raman spectroscopy (HORIBA, Ltd., Type: LabRam-HR-800) was used. The measurement was conducted on the FD-ND cross section every 2 µm from the surface to the core layer over a depth of 500 μm. A 633 nm red laser was used, and the laser beam was focused to a spot of approximately 2 μm at the specimen position through a half-wave plate. The molecular orientation was evaluated by using the intensity ratio of the 844 and 813 cm−1 bands [22].
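For illustration only, the Python sketch below computes the 844/813 cm−1 intensity ratio from a synthetic spectrum; the band-height extraction approach and all numerical values are assumptions, not the authors' analysis procedure.

```python
# Illustrative molecular-orientation indicator from Raman intensities:
# the ratio of the 844 cm-1 band to the 813 cm-1 band, as described in the text.
# The spectrum below is synthetic; a real analysis would read measured spectra.
import numpy as np

def band_intensity(wavenumber: np.ndarray, intensity: np.ndarray,
                   center: float, window: float = 5.0) -> float:
    """Maximum intensity within +/- window cm-1 of the band center."""
    mask = np.abs(wavenumber - center) <= window
    return float(intensity[mask].max())

if __name__ == "__main__":
    wn = np.linspace(700.0, 900.0, 2001)
    # Two synthetic Gaussian bands at 813 and 844 cm-1 (hypothetical heights).
    spec = 1.0 * np.exp(-((wn - 813.0) / 4.0) ** 2) + 0.7 * np.exp(-((wn - 844.0) / 4.0) ** 2)
    ratio = band_intensity(wn, spec, 844.0) / band_intensity(wn, spec, 813.0)
    print(f"I(844)/I(813) = {ratio:.2f}")
```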
Shear Strength
To measure the distribution of mechanical properties, the shear strength distribution was measured using a micro-cutting machine called SAICAS (Surface And Interfacial Cutting Analysis System, Daipla Wintes Co. Ltd., Type: DN-01). Cutting was conducted in the FD-ND plane from the surface to the core layer over a depth of 500 μm. The blade was a 0.5 mm wide diamond single crystal blade. The vertical force (FV) and horizontal force (FH) were measured during cutting. The cutting speeds of the vertical (ND direction) and horizontal movements were 0.05 and 0.5 μm/s, respectively. The shear strength τS was calculated from these vertical and horizontal forces using Equation (4), where w is the blade width, d is the cutting depth from the surface and φ is the shear angle, defined as tan−1(FH/FV)/2 [23].
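Because Equation (4) did not survive extraction, the sketch below only reproduces the shear-angle definition given in the text and, purely as a stand-in, estimates a shear strength with the classical single-shear-plane (Merchant-type) relation; this substitute relation and all force values are assumptions and should not be read as the paper's exact Equation (4).

```python
# Shear angle and an order-of-magnitude shear-strength estimate from SAICAS forces.
# The shear angle definition phi = arctan(F_H / F_V) / 2 is taken from the text.
# The strength formula below is the classical single-shear-plane (Merchant-type)
# relation used as a stand-in for the unrecoverable Equation (4); all numbers
# are hypothetical.
import math

def shear_angle_rad(f_h: float, f_v: float) -> float:
    """Shear angle (rad) as defined in the text: arctan(F_H / F_V) / 2."""
    return math.atan2(f_h, f_v) / 2.0

def shear_strength_merchant(f_h: float, f_v: float, width_m: float, depth_m: float) -> float:
    """Single-shear-plane estimate: tau = (F_H*cos(phi) - F_V*sin(phi)) * sin(phi) / (w*d)."""
    phi = shear_angle_rad(f_h, f_v)
    return (f_h * math.cos(phi) - f_v * math.sin(phi)) * math.sin(phi) / (width_m * depth_m)

if __name__ == "__main__":
    f_h, f_v = 0.8, 0.5        # N, hypothetical horizontal and vertical forces
    w, d = 0.5e-3, 10e-6       # blade width 0.5 mm (from the text), example cutting depth 10 um
    print(f"phi = {math.degrees(shear_angle_rad(f_h, f_v)):.1f} deg")
    print(f"tau ~ {shear_strength_merchant(f_h, f_v, w, d) / 1e6:.1f} MPa")
```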
Skin-Core Morphology
Figure 2 shows the skin-core morphology of injection molded PP with different molecular weights and molecular weight distributions observed by POM. Near the center in the ND direction, all samples showed a similar morphology without anisotropy (core layer). However, a morphology distinct from that of the core layer was observed near the surface (a skin layer, here called the characteristic morphology layer). In the low molecular weight samples, LN and LM, the characteristic morphology layer had almost the same thickness of about 70 μm.
On the other hand, in the high molecular weight samples, HM and HW, HM had a thicker characteristic morphology layer than HW; their thicknesses were about 120 and 100 μm, respectively. Comparing LM and HM, it was found that a higher molecular weight increased the thickness of the characteristic morphology layer. Moreover, from the results of HM and HW, a wider molecular weight distribution reduced the thickness of the characteristic morphology layer, an effect observed only in the high molecular weight samples.
Crystallinity
Figure 3 shows the crystallinity distribution throughout the thickness direction of the samples measured by DSC. In addition, the crystallinity of the non-sliced whole samples is also shown in Figure 3. The results indicated that the molecular weight and molecular weight distribution did not significantly affect the average crystallinity of the samples. From the DSC results on the sliced samples, no characteristic crystallinity distribution was observed, indicating that the differences in molecular weight and molecular weight distribution did not affect the crystallinity distribution.
Lamellar Orientation
Figure 4 shows 2-D SAXS patterns of thin films sliced from the surface to the core layer (0 - 500 μm). Lamellar orientation was clearly observed in all samples and depths. All samples showed strong lamellar orientation near the skin layer (lamellar oriented layer); in particular, the high molecular weight samples, HM and HW, showed strong lamellar orientation patterns in their surface layers. Figure 5 shows the lamellar orientation distribution obtained from the 2D-SAXS patterns. From this graph, it was observed that all samples had a highly oriented lamellar layer near the surface (lamellar oriented layer), and the lamellar orientation in the core layer was much lower than that in the surface layer. Among the low molecular weight samples, LN and LM had almost the same lamellar orientation from the surface to the core layers. On the other hand, in the high molecular weight samples, the lamellar orientations near the surface (0 - 75 μm) and in the core layer (275 - 500 μm) were almost the same, but in the middle layer (75 - 275 μm) HM showed higher lamellar orientation than HW. This indicates that a narrow molecular weight distribution increased the extent of the lamellar oriented layer in the high molecular weight samples. Comparing LM and HM, HM showed a larger lamellar oriented layer.
Crystalline Orientation and β Crystalline Phase Fraction
Figure 6 shows 2-D WAXD patterns of thin films sliced from the surface to the core layer (0 - 200 μm). As with the 2D-SAXS patterns, all samples had a highly crystalline-oriented layer near the surface, and the crystalline orientation decreased with increasing depth. Figure 7 shows the crystalline orientation distribution obtained from the 2D-WAXD patterns. It was observed that all samples had a highly crystalline-oriented layer near the surface, and the crystalline orientation in the core layer was lower than that in the surface layer. Among the low molecular weight samples, LM showed a larger highly crystalline-oriented layer than LN. Among the high molecular weight samples, HW showed a larger highly crystalline-oriented layer than HM. This indicates that a wide molecular weight distribution led to high crystalline orientation near the surface. On the other hand, comparing LM and HM, their crystalline orientations were almost the same from the surface to the core layer, i.e., the molecular weight did not affect the crystalline orientation.
Shear Strength
As a mechanical property distribution, the shear strength distribution from the surface to the core layer was measured.
Figure 10 shows the shear strength distributions of the samples with different molecular weights (LM and HM). The shear strength near the surface (0 - 50 μm) showed high values in all samples because of the initial contact between the blade tip and the sample surface and the initiation of the cutting process. HM showed higher shear strength than LM throughout the entire cutting process from the surface to the core layer, indicating that a higher molecular weight leads to a higher shear strength distribution.
Discussion
This research investigated the relationship between molecular weight and molecular weight distribution and the skin layer thicknesses obtained by the different characterization methods. The high molecular weight and narrow molecular weight distribution samples showed a thick lamellar oriented layer and characteristic morphology layer near the surface, as observed by POM and SAXS. However, according to the WAXD results, the wide molecular weight distribution sample showed a thick crystalline oriented layer. This tendency of WAXD differed from that of POM and SAXS, indicating that the lamellar orientation layer is not the same as the crystalline orientation layer. Furthermore, the molecular orientation analysis by laser Raman spectroscopy indicated that the narrow molecular weight distribution sample showed a thicker molecularly oriented layer among the low molecular weight samples, whereas the molecular weight distribution did not affect the thickness of the molecularly oriented layer in the high molecular weight samples. The characteristic structure (POM), lamellar orientation (SAXS), molecular orientation (laser Raman) and β phase fraction (WAXD) were found to be structural features affected by molecular weight. Meanwhile, the crystalline orientation (WAXD) was found to be dominantly affected by the molecular weight distribution. On the other hand, molecular weight and molecular weight distribution did not affect the crystallinity measured by DSC. In terms of the shear strength distribution, samples with different molecular weights showed different shear strength distributions, indicating that the molecular orientation and/or lamellar orientation are the dominant causes of the shear strength distribution.
Conclusion
In this work, the relationship between the skin-core structure and the surface mechanical properties of thin-wall injection molded PP with different molecular weights and molecular weight distributions was investigated. The high molecular weight samples showed thicker molecularly oriented and lamellar oriented layers near the surface. The molecular weight mainly affected the lamellar orientation rather than the crystalline orientation in the lamellar structure. On the other hand, the molecular weight distribution affected the crystalline orientation rather than the lamellar orientation. From the structure analysis and the shear strength distribution measurements, the difference in lamellar orientation and/or molecular orientation was correlated with the shear strength near the surface.
Figure 3.
Figure 3. The results of crystallinity distribution throughout the thickness direction of the samples and crystallinity of the non-sliced whole samples measured by DSC.
Figure 5.
Figure 5. The lamellar orientation distribution throughout the thickness direction of the samples measured by 2D-SAXS patterns.
Figure 7.
Figure 7. The crystalline orientation distribution throughout the thickness direction of the samples measured by 2D-WAXD patterns.
Figure 8.
Figure 8. The β crystalline phase fraction distribution in the subsurface obtained from 2D-WAXD patterns.
Figure 11(a) shows the relationship between shear strength and depth from the surface for the samples with different molecular weight distributions. In Figure 11(a), LN and LM had almost the same shear strength distribution, and from Figure 11(b), HM and HW also had almost the same shear strength distribution. This indicates that the molecular weight distribution did not significantly affect the shear strength near the surface.
Figure 9.
Figure 9. The intensity ratio of molecular orientation distribution along the FD direction from the surface to the core layer measured by laser Raman spectroscopy: (a) LN; (b) LM; (c) HM; (d) HW.
Figure 10. Figure 11.
Figure 10. The shear strength distributions of LM and HM throughout the thickness direction of the samples measured by the micro-cutting method.
Table 1.
Molecular characteristics of PP resins. | 2017-10-16T14:25:52.311Z | 2016-01-07T00:00:00.000 | {
"year": 2016,
"sha1": "72a2fab92979f4db5f72e3d0c57d5603027b2561",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=62588",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "72a2fab92979f4db5f72e3d0c57d5603027b2561",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
233449960 | pes2o/s2orc | v3-fos-license | Resibufogenin Suppresses Triple-Negative Breast Cancer Angiogenesis by Blocking VEGFR2-Mediated Signaling Pathway
Resibufogenin (RBF), an active compound from Bufo bufonis, has been used for the treatment of multiple malignant cancers, including pancreatic cancer, colorectal cancer, and breast cancer. However, whether RBF could exert its antitumor effect by inhibiting angiogenesis remains unknown. Here, we aimed to explore the antiangiogenic activity of RBF and its underlying mechanism in human umbilical vein endothelial cells (HUVECs), and the therapeutic efficacy with regard to antiangiogenesis in vivo using two triple-negative breast cancer (TNBC) models. Our results demonstrated that RBF can inhibit the proliferation, migration, and tube formation of HUVECs in a dose-dependent manner. Spheroid sprouts were thinner and shorter after RBF treatment in the in vitro 3D spheroid sprouting assay. RBF also significantly suppressed VEGF-mediated vascular network formation in the in vivo Matrigel plug assay. In addition, Western blot analysis revealed that RBF inhibited the phosphorylation of VEGFR2 and its downstream protein kinases FAK and Src in endothelial cells (ECs). Molecular docking simulations showed that RBF affected the phosphorylation of VEGFR2 by competitively binding to the ATP-bound VEGFR2 kinase domain, thus preventing ATP from providing phosphate groups. Finally, we found that RBF exhibited a promising antitumor effect through antiangiogenesis in vivo without obvious toxicity. The present study first revealed the high antiangiogenic activity and the underlying molecular basis of RBF, suggesting that RBF could be a potential antiangiogenic agent for angiogenesis-related diseases.
INTRODUCTION
Updated global cancer statistics indicated that female breast cancer surpassed lung cancer as the leading cause of global cancer incidence in 2020, with an estimated 2.3 million new cases, representing 11.7% of all cancer cases (Sung et al., 2021). Triple-negative breast cancer (TNBC) is the most challenging heterogeneous subtype of breast cancer, often associated with an aggressive phenotype, high recurrence, metastasis, and poor prognosis (Bianchini et al., 2016). Approximately 12% of breast cancer patients in the United States from 2012 to 2016 had TNBC, with a 5-year survival rate 8-16% lower than that of other subtypes (DeSantis et al., 2019; Howard and Olopade, 2021). Owing to the absence of expression of the estrogen receptor (ER), progesterone receptor (PgR), and human epidermal growth factor receptor 2 (HER2), conventional cytotoxic chemotherapy remains the mainstay of treatment (Waks and Winer, 2019). However, chemotherapeutics may cause acute nonspecific side effects in normal tissues and multidrug resistance (MDR), leading to therapeutic failure (Nedeljkovic and Damjanovic, 2019). Therefore, the discovery of neoadjuvant drugs with highly selective antitumor mechanisms has become a promising approach for the treatment of TNBC.
Angiogenesis plays a critical role in tumor formation, progression, and metastasis. Through excessive secretion of pro-angiogenic factors, tumor cells continuously activate ECs to "sprout" from the original blood vessels and form new vascular structures (Duran et al., 2017). Correspondingly, angiogenesis provides cancer cells with essential nutrients and oxygen, as well as a route for metastasis (Viallard and Larrivée, 2017). Therefore, inhibiting tumor angiogenesis has remained an attractive strategy for oncotherapy for decades. To date, bevacizumab is the only antiangiogenic drug approved by the FDA (Food and Drug Administration) for TNBC (Xie et al., 2021), whereas bevacizumab has little effect on overall survival due to acquired drug resistance (Liu et al., 2020) and its limitation of blocking VEGFA expression (Zou et al., 2020). It is essential to find antiangiogenic medicines with novel skeletons for antiangiogenesis therapy and to overcome drug resistance.
Natural products provide an unparalleled source of unique molecular scaffolds for antiangiogenic drug discovery. Among them, resibufogenin (RBF) (Figure 1A), a main component of the antitumor traditional Chinese medicine (TCM) Bufo bufonis obtained from the dry secretions of Bufo gargarizans Cantor and Bufo melanostictus Schneider (Chu et al., 2016), is a compound with the steroid nucleus structure of a cardiotonic aglycone (Qi et al., 2011). Studies have shown that RBF has antitumor activity and can inhibit tumor growth through different mechanisms. For instance, RBF can suppress transforming growth factor-β activated kinase 1 (Tak1)-mediated nuclear transcription factor-kappa B (NF-κB) activity through protein kinase C-dependent inhibition of glycogen synthase kinase-3 (GSK-3) in the pancreatic cancer cells PANC-1 and ASPC. RBF can also inhibit the growth and metastasis of colorectal cancer by triggering RIP3-dependent necroptosis and by inducing glutathione peroxidase 4 (Gpx4) inactivation to cause oxidative stress (Han et al., 2018; Shen et al., 2021). Moreover, RBF treatment exhibited antitumorigenic and anti-Warburg effects in breast cancer through upregulating the inhibitory effect of the miR-143-3p/HK2 axis (Guo et al., 2020). However, there is no study on RBF-mediated regulation of angiogenesis, an essential step for TNBC growth.
In the present study, we first investigated the antiangiogenic effect of RBF and its mechanism on human umbilical vein endothelial cells (HUVECs). The antiangiogenic activity of RBF in vivo was evaluated by the Matrigel plug assay. Furthermore, the in vivo antiangiogenic efficacy of RBF was evaluated in 4T1 and MDA-MB-231 orthotopic mouse models. This study provides a new theoretical basis and reference for the potential clinical application of RBF.
Breast cancer cell lines 4T1 and MDA-MB-231 were obtained from Shanghai Cell Bank, Chinese Academy of Sciences (Shanghai, China). 4T1 cells were cultured in RPMI 1640 medium (meilunbio, Dalian, China), and MDA-MB-231 cells were cultured in Leibovitz's L-15 medium (Gibco, United States). All culture media were supplemented with 10% fetal bovine serum (FBS, Gibco, United States), 1% penicillin, and 1% streptomycin. Human umbilical vein endothelial cells (HUVECs) were obtained using Lifeline Cell Technology and cultured in completed endothelial cell medium (Lifeline ® Cell Technology, Frederick, MD). 4T1 cells and HUVECs were cultured at 37°C in a humidified atmosphere containing 5% carbon dioxide (CO 2 ), and MDA-MB-231 cells were cultured at 37°C without CO 2 .
BALB/c mice, BALB/c nude mice, and C57BL/6 mice were supplied by Shanghai Laboratory Animal Center (Shanghai, China). The animals were kept in an environment-controlled room (temperature: 20-25°C, relative humidity: 55-65%, and 12 h light/12 h dark cycle) with free access to water and fodder. All animal experimental protocols were approved by the Animal Ethics Committee of Shanghai University of Traditional Chinese Medicine.
Cell Viability Assay
The effect of RBF on the cell viability of HUVECs, 4T1 and MDA-MB-231 cells was measured using Cell Counting Kit-8 (CCK-8; meilunbio, Dalian, China). Briefly, these cells were seeded in 96-well plates (5 × 10 3 cells/well), respectively. After 24-h incubation, the cells were treated with different concentrations of RBF (0.3, 1, 3, 10, and 30 μM) for 24 h. Then, 100 μl of medium containing 10% CCK-8 was added to each well and incubated at 37°C for an additional 2 h. The absorbance at 450 nm was determined with a microplate reader (Spark 10M, Tecan, Switzerland). The percentage of cell viability was calculated against the control. Each condition included replicate wells with at least four independent repeats.
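As a minimal illustration of how viability can be expressed relative to the untreated control, the Python sketch below averages replicate absorbance readings; the absorbance values and the blank correction are hypothetical assumptions rather than data from the study.

```python
# Illustrative percent-viability calculation from CCK-8 absorbances (450 nm),
# expressed relative to the untreated control as described in the text.
# The absorbance values and the blank subtraction are hypothetical examples.
import statistics

def viability_percent(treated: list[float], control: list[float], blank: float = 0.0) -> float:
    """Mean viability of treated wells as a percentage of the control wells."""
    a_treated = statistics.mean(treated) - blank
    a_control = statistics.mean(control) - blank
    return a_treated / a_control * 100.0

if __name__ == "__main__":
    control_wells = [1.02, 0.98, 1.05, 1.00]   # hypothetical OD450 of untreated wells
    rbf_10um_wells = [0.55, 0.58, 0.52, 0.56]  # hypothetical OD450 after 10 uM RBF
    print(f"Viability at 10 uM: {viability_percent(rbf_10um_wells, control_wells, blank=0.05):.1f} %")
```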
Endothelial Cell Wound Healing Assay
HUVECs (2 × 10 5 cells/well) were seeded in a 6-well plate and incubated at 37°C for 24 h. Subsequently, confluent HUVECs were scratched with the pipette tips, washed with PBS and photographed, and then the cells were treated with various concentrations of RBF (0.3, 1, 3, 10, and 30 μM). After drug stimulation for 12 h, the plate was photographed with microscope (Spark 10M, Tecan, Switzerland) and EC migration was quantified by Image-Pro Plus 6.0 software (Media Cybernetics, Bethesda, MD).
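The paper states only that migration was quantified with Image-Pro Plus; as one common way to express such data, the sketch below computes percent wound closure from open-wound areas at 0 and 12 h, with all area values hypothetical.

```python
# A common scratch-wound metric: percent wound closure between 0 h and 12 h.
# The metric choice and the example areas are illustrative assumptions; the
# text only states that migration was quantified with Image-Pro Plus 6.0.

def wound_closure_percent(area_0h: float, area_12h: float) -> float:
    """Percent of the initial open-wound area that has been covered by cells."""
    return (area_0h - area_12h) / area_0h * 100.0

if __name__ == "__main__":
    print(f"Control:   {wound_closure_percent(1.00, 0.25):.0f} % closure")   # hypothetical areas (mm^2)
    print(f"RBF 10 uM: {wound_closure_percent(1.00, 0.95):.0f} % closure")
```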
Endothelial Cell Tube Formation Assay
Tube formation assay was carried out as described previously with some modifications (Dai et al., 2016;Lu et al., 2017). In brief, a precooled 96-well plate was coated with 50 μl/well Matrigel (BD Biosciences, San Jose, CA), which was thawed at 4°C overnight in advance and then incubated at 37°C for at least 30 min. HUVECs (1 × 10 4 cells/well) were dispersed in the completed medium containing different concentrations of RBF (0.3, 1, 3, and 10 μM) and then seeded on the Matrigel layer. After 10 h of incubation (37°C with 5% CO 2 ), the tubular structure formed by HUVECs stained with calcein AM with a final concentration of 2 μM for 15 min, then fluorescence photography was performed with Cytation 5 (BioTek, United States). The tube length was quantified by Image Pro Plus 6.0 software.
Endothelial Cell Transwell Migration Assay
The chemotactic motility of HUVEC was investigated using a transwell migration assay with 24-well transwell plates of polycarbonate filter with 8 μm pore diameter and 6.5 mm diameter inserts. Briefly, complete medium containing 20 ng/ ml VEGF165 was added to the lower chamber, and HUVECs (2 × 10 4 cells/well) was suspended in the medium which containing different concentrations of RBF (0.3, 1, 3, and 10 μM) and seeded in the top chamber. After incubation for 8 h in an incubator (37°C with 5% CO 2 ), the migrated cells were fixed with 4% paraformaldehyde for 20 min, then stained with 0.1% crystal violet, while the nonmigrated cells on the upper surface of polycarbonate membrane were gently wiped off with a cotton swab. The cells on the other side of the membrane were photographed under an inverted microscope (Laica, Germany) after washing the membrane three times with PBS. The number of migrated cells was determined by Image-Pro Plus 6.0 software.
Spheroid-Based Angiogenesis Assay
ECs spheroids of defined cell number were generated as described previously (Heiss et al., 2015;Wu et al., 2019) with minor modifications. In brief, HUVECs (1.6 × 10 4 cells/ml) were suspended in a culture medium containing 0.24% (wt/vol) methylcellulose (Adamas, China) and the mixture seeded alternately in a 100-mm × 20-mm dish (Corning, United States). Under these conditions, all suspended cells contributed to the formation of a single spheroid of defined size and cell number (400 cells/spheroid). Spheroids were cultured for 24 h in an incubator (37°C with 5% CO 2 ). Afterward, the spheroids were suspended with a solution of rat tail collagen type I (BD, United States), then rapidly transferred into prewarmed 24-well plates and allowed to polymerize (30 min). After the collagen gels were set, 100 μl of complete medium containing 500 ng/ml VEGF 165 and different concentrations of RBF (1 or 3 μM) or complete medium only containing 500 ng/ml VEGF165 was added to each well, and the spheroids formed sprouts after 24 h. The sprouts were photographed with microscope (Spark 10 M, Tecan, Switzerland).
Matrigel Plug Assay
Six-week-old female C57BL/6 mice were subcutaneously injected with Matrigel mixture (400 μl/plug) containing 400 ng/ml VEGF and different concentrations of RBF (10 or 30 μM), and the Matrigel was mixed with PBS for mock control or 400 ng/ml VEGF for vehicle control. After 7 days of implantation, Matrigel plugs were removed and fixed with 4% paraformaldehyde, then photographed with a digital camera. After Matrigel plugs were embedded and fixed in paraffin, neovascularization was determined by CD31 staining.
Western Blot
The effect of RBF on VEGF-dependent angiogenesis signaling pathways was determined by Western blot assay. HUVECs (1 × 10 5 cells/well) were seeded in a 6-well plate and incubated overnight in an incubator (37°C with 5% CO 2 ). When the cell density reached about 80%, the cells were starved in serum-free medium for 6 h. The medium was then replaced with serum-free medium containing different concentrations of RBF, the cells were cultured for a further 30 min, and they were subsequently stimulated for 4 min with 100 ng/ml VEGF. RIPA Lysis Buffer (Beyotime, Shanghai, China) supplemented with complete protease inhibitor cocktail and PhosSTOP phosphatase inhibitor cocktail (Roche, Rotkreuz, Switzerland) was used for cell lysate extraction. The protein concentration was determined with a BCA Protein Assay Kit (Beyotime, Shanghai, China) and equalized before loading. Then, 20 μg of protein from each sample was applied to 7.5% SDS-PAGE. Polyvinylidene fluoride (PVDF) membranes were incubated with primary antibodies (Cell Signaling Technology, Danvers, MA) at 4°C and then incubated with horseradish peroxidase-coupled secondary antibodies. The luminescent images were detected using ECL kits.
Anticancer Therapy of Resibufogenin In Vivo
Mouse triple-negative breast cancer 4T1 cells (1 × 10 7 cells/ml) and human triple-negative breast cancer MDA-MB-231 cells (5 × 10 7 cells/ml) were suspended in PBS and then inoculated into the fourth pair of mammary fat pads (100 μl/mouse) of 6-week-old female BALB/c mice and female BALB/c nude mice, respectively, to establish orthotopic models of breast cancer. Once the tumor volume reached ∼50 mm 3 , all of the mice were randomly divided into two groups (n = 6 per group): a control group and an RBF treatment group (10 mg/kg/day). The mice of the control group received an intraperitoneal injection of oil, and the mice in the RBF treatment group were injected with oil containing RBF (10 mg/kg/day). The body weight and the tumor length and width of the mice were monitored every 2 days. The formula for calculating the tumor volume is as follows: tumor volume (mm 3 ) = length × width 2 /2. After administration for 12 days, the mice were sacrificed and the tumors were dissected. All of the tumors were photographed and weighed, and then fixed with 4% paraformaldehyde for paraffin sectioning and immunohistochemical assays. Hematoxylin and eosin (H&E) staining was used to evaluate tumor necrosis, and immunohistochemical CD31 staining was used to observe the tumor vessels according to previous studies. All of the slices were photographed with a photomicroscope. The tumor necrosis area, microvessel density, Ki67-positive cells, and TUNEL-positive cells in the slices were analyzed with Image-Pro Plus 6.0 software.
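For illustration, the following sketch applies the tumor-volume formula stated above to a few hypothetical caliper measurements; only the formula itself comes from the text.

```python
# Tumor volume from caliper measurements using the formula in the text:
# volume (mm^3) = length x width^2 / 2. The measurements below are hypothetical.

def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Ellipsoid-type approximation used in the study."""
    return length_mm * width_mm ** 2 / 2.0

if __name__ == "__main__":
    hypothetical_measurements = {0: (5.0, 4.5), 6: (8.2, 6.9), 12: (10.5, 8.8)}  # day: (length, width) in mm
    for day, (length, width) in hypothetical_measurements.items():
        print(f"Day {day:2d}: {tumor_volume_mm3(length, width):7.1f} mm^3")
```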
Molecular Docking
The molecular interaction between VEGFR2 and RBF was explored by computational docking with the Molecular Operating Environment (MOE). First, the three-dimensional structure of RBF was generated by energy minimization in MOE. Then, the X-ray crystal structures of the VEGFR2 kinase domain and its ligands were obtained from the Protein Data Bank (http://www.rcsb.org). Two crystal structures, 3B8R and 3B8Q, which belong to the DFG-in and DFG-out conformations, were selected to dock with RBF. The interactions between the molecules were analyzed and visualized with the ligand interaction module and PyMOL.
Statistical Analysis
All data were presented as mean ± SD. Statistical analysis and graphical representation of the data were performed using GraphPad Prism 6.0 (GraphPad Software, San Diego, CA). The differences between groups were examined with Student's t-test or ANOVA with Bonferroni's multiple comparisons tests.
Differences were considered significant if the p value was less than 0.05.
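As an illustrative sketch of the kinds of comparisons described (Student's t-test, and ANOVA followed by Bonferroni-corrected pairwise tests), the following Python code uses simulated data and assumes SciPy is available; it is not the authors' actual analysis script.

```python
# Illustrative statistics for the comparisons described in the text:
# a two-group Student's t-test, and a one-way ANOVA followed by
# Bonferroni-corrected pairwise t-tests. All data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(100.0, 10.0, size=6)  # hypothetical control group (n = 6)
treated = rng.normal(60.0, 10.0, size=6)   # hypothetical RBF-treated group (n = 6)
third = rng.normal(90.0, 10.0, size=6)     # hypothetical third group for the ANOVA case

# Two groups: unpaired Student's t-test.
t_stat, p_two = stats.ttest_ind(control, treated)
print(f"t-test: t = {t_stat:.2f}, p = {p_two:.4f}")

# Three groups: one-way ANOVA, then Bonferroni-corrected pairwise t-tests.
f_stat, p_anova = stats.f_oneway(control, treated, third)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

pairs = [("control vs treated", control, treated),
         ("control vs third", control, third),
         ("treated vs third", treated, third)]
for label, a, b in pairs:
    _, p_pair = stats.ttest_ind(a, b)
    p_adj = min(p_pair * len(pairs), 1.0)  # Bonferroni correction
    print(f"{label}: adjusted p = {p_adj:.4f}")
```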
Resibufogenin Inhibits Viability of Human Umbilical Vein Endothelial Cells at Concentrations Not Affecting Triple-Negative Breast Cancers
As shown in Figures 1B,C
Resibufogenin Inhibits Endothelial Cell Migration, Invasion, and Tube Formation
The migration of ECs is the important step in the process of angiogenesis (Varinska et al., 2018). Thus, we performed a wound healing assay and transwell migration assay to investigate the effect of RBF on the horizontal and vertical migration ability of HUVECs. As shown in Figures 2A,D, RBF dose-dependently inhibited the lateral migration of ECs, and it was obvious that the migration ability of HUVECs was completely inhibited at the concentration of 10 μM. The transwell assay ( Figures 2B,E) showed that RBF could inhibit the vertical migration of HUVECs to the bottom chamber at 0.3 μM. To evaluate the antiangiogenesis ability of RBF, we performed a tube formation assay to verify the effects of RBF on tube formation of HUVECs on a Matrigel substratum. HUVECs could form a complete tubular network after VEGF stimulation ( Figures 2C,F), while RBF markedly restrained HUVECs tube formation at the concentration of 3 μM. These results suggested that RBF has a strong inhibitory effect on HUVEC motility, migration, and tube formation at the nontoxic concentrations.
Resibufogenin Inhibits Human Umbilical Vein Endothelial Cells Spheroid Sprouting
In the sprout formation assay, HUVECs were formed a single spheroid and then embedded in a 3D collagen matrix. In the control group, the HUVEC spheroid was sprouting significantly under the stimulation of VEGF 165 . However, when treated with different concentrations of RBF, the sprouts became thinner. Sprouting was almost completely inhibited by the treatment with RBF at 3 µM ( Figure 3A).
These results suggested that RBF has an inhibitory effect on HUVEC spheroid sprouting.
Resibufogenin Inhibits Angiogenesis in Matrigel Plugs In Vivo
Next, we performed the Matrigel plug assay to further explore whether RBF could inhibit angiogenesis in vivo. As illustrated in Figure 3B, compared with the PBS group, Matrigel plugs mixed with VEGF exhibited obvious red area after 1 week of implantation, indicating a large number of new blood vessels in Matrigel plugs. In contrast, Matrigel plugs mixed with RBF were almost colorless and transparent, suggesting almost no angiogenesis existed. These results indicated that RBF could significantly inhibit neovascularization in the Matrigel plugs in vivo. The existence of blood vessels was verified by staining of CD31, a specific marker on the surface of ECs (Privratsky and Newman, 2014;Lertkiatmongkol et al., 2016). The results in Figure 3C displayed that RBF (10 μM) had a significant inhibitory effect on angiogenesis stimulated by VEGF. The number of blood vessels in the high-dose group (30 μM) was similar to that in the PBS alone group.
Resibufogenin Suppressed the Activation of VEGFR2-Mediated Signaling Pathway
The VEGF signaling pathway and its main receptor, VEGFR2, can stimulate tumor angiogenesis in most solid tumors (Simons et al., 2016). To verify the mechanisms involved in the antiangiogenic function of RBF, we first examined whether RBF could affect VEGF-mediated phosphorylation of VEGFR2 by Western blot assay. It was shown that the tyrosine phosphorylation of VEGFR2 was significantly increased after VEGF stimulation, and that RBF inhibited the phosphorylation level of VEGFR2 in a dose-dependent manner (Figure 4A). As is known, downstream signaling pathways are activated by VEGFR2 and regulate multiple activities of ECs, including migration, proliferation, and survival (Simons et al., 2016; Wang et al., 2020). We further investigated the effect of RBF on the expression levels of downstream proteins of VEGFR2. The results suggested that RBF downregulated the levels of phospho-FAK and phospho-Src. Taken together, the above results proved that RBF inhibits the phosphorylation of VEGFR2, FAK, and Src to block the angiogenic ability of ECs.
Resibufogenin Competitively Bound ATP-Binding VEGFR2 Kinase Domain
As RBF downregulated the phosphorylation of VEGFR2 and its downstream signaling molecules, it was speculated that RBF may be an inhibitor of VEGFR2 kinase. Therefore, molecular docking simulations were carried out to predict the possible binding mode between RBF and the VEGFR2 kinase domain. Studies have shown that VEGFR2 inhibitors can be divided into two types according to their binding modes, namely, type I (forming a complex with the DFG-in conformation) and type II (forming a complex with the DFG-out conformation) protein kinase inhibitors (Huang et al., 2012). Type I inhibitors affect the phosphorylation of VEGFR2 by competitively binding the ATP-binding VEGFR2 kinase domain, thereby preventing ATP from providing phosphate groups, while type II inhibitors inhibit phosphorylation by occupying the spatial position of Phe1047 in the active conformation (DFG-in) and preventing VEGFR2 from transforming from the inactive to the active state (Weiss et al., 2008; Huang et al., 2012). The DFG-in and DFG-out protein states were then selected for docking. As shown in Figures 4B,C, RBF could bind to the DFG-in protein state and could not penetrate into the Phe1047 pocket to further prevent VEGFR2 activation. RBF mainly interacted with amino acid residues including Leu840, Val848, Ala866, Leu889, Val899, Val914, and Leu1035 via hydrophobic interactions. There was also an H-pi interaction between the arene moiety of RBF and the key residue Lys868. These results indicated that RBF is a type I inhibitor that inhibits the phosphorylation of VEGFR2.
Resibufogenin Inhibited Tumor Angiogenesis and Growth in 4T1 and MDA-MB-231 Orthotopic Mouse Models
In order to investigate the inhibiting effects of RBF on tumor angiogenesis and growth in vivo, the tumor models were first established by in situ inoculation of 4T1 cells in BALB/c mice (Duan et al., 2019;Tsui et al., 2019). RBF was injected intraperitoneally at a dose of 10 mg/kg/day for 12 days. The results showed ( Figures 5A,C,D) that RBF significantly inhibited the growth of tumor, the tumor volume was 246.15 ± 69.9 mm 3 , and the average tumor weight was only 0.17 ± 0.04 g, whereas the tumor volume was 471.89 ± 45.1 mm 3 and the average tumor weight was 0.31 ± 0.03 g in the control group. In addition, there was no significant change in the body weight of mice at this dose ( Figure 5B), indicating that RBF had no obvious toxicity to the mice at the curative dose. Immunohistochemistry and pathological examination showed that RBF not only effectively inhibited tumor cell proliferation and increase tumor necrotic area but also reduced tumor microvessel density and elevate TUNEL-positive cells. Meanwhile, we also established another TNBC model by in situ inoculation of human TNBC cell line (MDA-MB-231) into BALB/c nude mice (Shi et al., 2019;Kachamakova-Trojanowska et al., 2020;Xu et al., 2020). As shown in Figure 6C, RBF also showed significant antitumor activity in this tumor model, and the tumor volume was 146.77 ± 37.5 mm 3 much smaller than that in the control group (244.31 ± 62.9 mm 3 ). In contrast to untreated controls, RBF-treated group showed a profound decrease in the number of CD31-positive microvessel and Ki67-positive cells, while the rate of TUNEL-positive cells and the area of tumor necrosis increased. These results suggested that the antiangiogenic activity of RBF effectively contributed to its antitumor effect in vivo.
DISCUSSION
Neovascularization is the critical characteristic of solid tumors to contribute tumor rapid progression and metastasis. Therefore, targeting tumor blood vessels has been considered as a reasonable approach to the treatment of various malignancies (Rajabi and Mousa, 2017). Bevacizumab, a recombinant humanized monoclonal IgG1 antibody that binds to VEGF, provides a new hope of improved survival for patients with intractable TNBC in combination with paclitaxel and capecitabine (Liu et al., 2020). However, this neoadjuvant therapy cannot fully meet the expectations of patients for higher overall survival owing to acquired resistance. Given that, it is imperative to explore alternative novel antiangiogenic agents to improve the therapeutic effectiveness. Active small components derived from TCM have been demonstrated to possess excellent bioactivity with low toxicity in the treatment of many diseases. Taking these advantages, we successfully identified a TCM Bufo bufonis-derived small molecule, RBF, which have high selectivity between TNBCs and HUVECs. Multiple mechanisms for RBF antitumor activity have been elucidated, but none of them touched its antiangiogenic activity in breast cancers, especially TNBC. We first demonstrated the potent antiangiogenic ability and mechanisms of RBF in vitro as well as the anti-TNBC effect in vivo.
We successfully proved that RBF could perform the antiangiogenic function toward migration, invasion, and tube formation of HUVECs in a dose-dependent inhibition. Meanwhile, we found that the sprouting 3D spheroid sprouts were thinner and shorter after treatment of RBF. The Matrigel plug assay was used to verify the antiangiogenesis effect of RBF in vivo. Then, we constructed 4T1 and MDA-MB-231 orthotopic mouse models to evaluate the therapeutic effect of RBF through antiangiogenic potency. As expected, the results exhibited that RBF can not only suppress the growth of mouse TNBC in vivo, but also inhibit human TNBC progression in mice through the successful blocking effect on tumor-related angiogenesis, thereby highlighting the potential clinical transformation of RBF. Immunohistochemical assay further revealed that RBF could significantly increase the necrotic area of tumor, inhibited tumor cell proliferation, and promoted apoptosis. More importantly, the intratumoral CD31-positive vessel in the RBF treatment group decreased pronouncedly, suggesting that its antitumor effect was closely related to antiangiogenesis. Collectively, these in vitro and in vivo results both suggested that the antiangiogenic activity of RBF played a critical role in suppressing tumor growth in vivo (Guo et al., 2020).
Among angiogenic factors, VEGF has the strongest effect on the process of angiogenesis. VEGFR2 plays a principal role in mediating the series of VEGF-induced downstream angiogenic signals (such as the Akt, NF-κB, and MAPK pathways) that subsequently promote the activation of endothelial cells (Simons et al., 2016). Thus, targeting the VEGFR2 signaling pathway to inhibit tumor angiogenesis is regarded as a vital strategy. We also observed that RBF dose-dependently decreased VEGF-induced VEGFR2 phosphorylation and its downstream signals, including FAK and Src. Molecular docking indicated that RBF could locate in the ATP-binding domain of the VEGFR2 kinase through hydrophobic and H-π interactions, thereby blocking the phosphorylation of VEGFR2. Such characterization of the binding pattern between RBF and VEGFR2 can help us better understand the antiangiogenic effect of RBF, and this binding could be reinforced by chemical modification of the structure of RBF.
In conclusion, we elucidated the antiangiogenic effect of RBF on HUVECs and its underlying mechanism, the attenuation of the VEGFR2 signaling pathway. More importantly, this antitumor mechanism further contributed to slower tumor growth and lower microvessel density in two TNBC mouse models, which makes RBF a promising antiangiogenic candidate for TNBC treatment.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.
ETHICS STATEMENT
The animal study was reviewed and approved. All animal experimental protocols were approved by the Animal Ethics Committee of Shanghai University of Traditional Chinese Medicine.
AUTHOR CONTRIBUTIONS
XL, HZ, Y-YG, and TY conceived and designed the experiments. TY, Y-XJ, DL and L-LW performed the experiments. TY, RH, YW, and S-QW analyzed the data and made the figures. TY, Y-XJ, and YW wrote the paper. XL and HZ helped to proofread the article. All the authors contributed to the article and approved the submitted version. | 2021-04-30T13:31:42.418Z | 2021-04-30T00:00:00.000 | {
"year": 2021,
"sha1": "d1c75b56cbd5ae8f0a31b485fa973fa33768c8f6",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2021.682735/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d1c75b56cbd5ae8f0a31b485fa973fa33768c8f6",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
14552808 | pes2o/s2orc | v3-fos-license | Protection against graft vs. host-associated immunosuppression in F1 mice. I. Activation of F1 regulatory cells by host-specific anti-major histocompatibility complex antibodies.
Injection of parental spleen cells into unirradiated F1 hybrid mice results in suppression of the potential to generate cytotoxic T lymphocyte (CTL) responses in vitro. In an attempt to protect the F1 mice from immunosuppression, the recipients were injected with antibodies specific for major histocompatibility complex (MHC)-encoded antigens of the F1 mice 24 h before inoculation of the parental spleen cells. 8-14 d later, the generation of CTL responses in vitro against H-2 alloantigens was tested. Alloantiserum directed against either parental haplotype of the F1 strain markedly diminished the suppression of CTL activity. Furthermore, monoclonal antibodies recognizing H-2 or Ia antigens protected the F1 mice from parental spleen cell-induced suppression. Although this study has been limited to reagents that recognize host H-2 determinants, these findings do not necessarily imply that protection against graft vs. host (GvH) can be achieved only with anti-MHC antibodies. However, protection was observed only by antibodies reactive with F1 antigens, and small amounts of the alloantibodies were sufficient to diminish CTL suppression. Adoptive transfer of spleen cells from syngeneic F1 mice treated with anti-H-2a alloantiserum 24 h previously provided protection equal to that of injection of the recipients with alloantibodies. The cells necessary for this effect were shown to be T cells and to be radiosensitive to 2,000 rad. This cell population is induced by antisera against F1 cell surface antigens and effectively counteracts GvH-associated immunosuppression.
Inoculation of immunocompetent lymphoid cells into allogeneic or semisyngeneic
F1 recipient mice can lead to graft vs. host (GvH)1 reactions (1)(2)(3). In the induction phase of the GvH the injected lymphocytes recognize and respond to foreign alloantigens expressed by recipient cells (4). Such recognition can result in the generation of a cytotoxic effector T lymphocyte (CTL) response against the host cells that may be associated with the characteristic symptoms of GvH disease (4)(5)(6). Other facets of GvH reactivity are the production of autoantibodies and the development of severe immunoincompetence (7). It has also been shown that the injection of parental spleen cells into unirradiated F1 hybrid mice results in reduction or abrogation of the capability to generate a cytotoxic T cell response in vitro (8). This lack of immune responsiveness could be accounted for by at least two possible suppressive mechanisms: (a) activation of parent-anti-F1 allogeneic CTL; and (b) activation of a noncytotoxic F1 suppressor population resulting from parent-anti-F1 recognition (9).
Earlier reports indicated that GvH reactivity, as detected by splenomegaly and mortality, could be prevented or reduced by injecting F1 mice with alloantisera (10,11). In the present study we have attempted to protect F1 hybrid mice from the induction of GvH-associated suppression of the CTL response by injecting F1 mice with alloantibodies before inoculation of parental spleen cells. The results described here indicate that alloantiserum directed against either parental haplotype of the F1 as well as certain monoclonal antibodies against H-2 or Ia antigens of the F1 host prevented the induction of GvH-associated CTL suppression. Moreover, adoptive transfer studies showed that such protection is due to a radiosensitive F1 regulatory T cell population, which is activated by specific alloantibodies. Monoclonal antibodies were used in ascites fluid form. The characteristics of the monoclonal reagents have been described in detail elsewhere (12). Some of their properties are summarized in Table II.
Injection of F1 Hybrid Mice with Antibodies and Parental Spleen Cells. The titers of alloantisera and monoclonal reagents were adjusted to the same concentration of cytotoxic antibodies by diluting with phosphate-buffered saline (PBS). The reagents were injected intravenously via the tail vein into normal F1 recipients in a volume of 0.1 ml or 0.5 ml. 24 h later, half of the mice from each group were injected intravenously with parental spleen cells suspended in a volume of 0.5 ml in Hanks' balanced salt solution (HBSS) (8).
Treatment with α-Thy-1.2 Antibodies Plus Complement. T cells were eliminated by incubation of 100 × 10⁶ spleen cells in 1 ml monoclonal anti-Thy-1.2 reagent (New England Nuclear, Boston, Mass.) (dilution 1:100) for 30 min at 37°C. After washing the cells in HBSS once, the spleen cells were then incubated with selected rabbit complement for 30 min at 37°C. The treated cells were washed twice and readjusted to the desired cell concentration.
In Vitro Generation of and Assay for Cell-mediated Lympholysis. The potential of the treated F1 hybrid mice to generate cytotoxic T cells in vitro was tested 8-14 d after inoculation of the parental spleen cells. In most cases, spleen cells from two mice per group were pooled. The responding cells were sensitized against 2,000-rad irradiated allogeneic stimulators for 5 d. The effector cells were tested in a 4-h 51Cr release assay on concanavalin A-stimulated splenic blasts. These conditions for sensitization and assay have been described previously (13). The percent lysis is expressed above medium control. Standard errors of the mean were usually <3% and have been excluded from the graphs for simplicity. The data of Fig. 1 show the allogeneic CTL response of (B10 × B10.A)F1 spleen cells from mice treated with 0.1 ml anti-H-2a alloantiserum (cytotoxic titer 1:64), and subsequently inoculated with B10.A parental spleen cells. The cytotoxic activity of the effectors from the antiserum-treated mice was compared with the response of untreated F1 animals or of mice injected with normal mouse serum (NMS). The results indicate that pretreatment of the F1 mice with anti-H-2a alloantiserum before inoculation of the parental lymphocytes resulted in an allogeneic cytotoxic response (Fig. 1 F) almost equivalent to that of normal mice (Fig. 1 A). In contrast, the CTL potential of spleen cells from F1 mice that received no antiserum or NMS before injection with the same number of parental spleen cells was strongly suppressed (Fig. 1 B, D).
Since the anti-H-2a antiserum could have acted on host cells, donor cells, or both, it was important to determine whether antiserum that is specific for the host only would protect against GvH-induced suppression. Therefore, the effect of anti-H-2b alloantiserum on parental spleen cell-induced suppression was tested. (B10 × B10.A)F1 mice were treated according to the protocol described above. Fig. 1 H shows that suppression of the allogeneic CTL response was likewise reduced if F1 mice were injected with 0.1 ml of anti-H-2b alloantiserum (cytotoxic titer 1:32) before inoculation of the B10.A parental spleen cells. Thus, these data indicate that alloantibodies directed against either parental haplotype of the F1 reduced parental spleen cell-induced immunosuppression. One possible explanation for the reduction of immunosuppression in the antibody-treated mice could be that circulating alloantibodies in the F1 host were absorbed by the injected B10.A spleen cells and that the parental lymphocytes were eliminated. Therefore, the serum of F1 mice was assayed 24 h after injection of 0.1 ml of undiluted antiserum for complement-dependent cytotoxicity. Table III shows that serum from F1 mice previously treated with anti-H-2a or anti-H-2b alloantiserum did not exhibit any detectable cytotoxicity on the specific target cells. In contrast, serum from B10 or B10.A mice injected with the same amount of nonspecific alloantiserum still contained circulating antibodies with cytotoxic activity when tested 24 h later.
Specificity of the antibodies was tested by injecting alloantiserum that did not recognize antigens expressed by either parental haplotype in the F1. As demonstrated in Fig. 2, pretreatment of (B10 × B10.A)F1 mice with an irrelevant alloantiserum (anti-H-2Kd-Iad) before the injection of B10.A parental spleen cells had no effect on the suppression of the allogeneic CTL response (Fig. 2 F). These data indicate that specific antibodies against either haplotype of the F1 strain, but not antibodies specific for antigens not expressed by the F1 host, are capable of reducing the parental spleen cell-induced immunosuppression.
To determine the amount of alloantibodies required to abrogate B10.A parental spleen cell-induced suppression in the (B10 × B10.A)F1 recipients, mice were injected with graded dilutions of the antiserum. The data in Fig. 3 demonstrate that CTL suppression could be abrogated using a 1:4 dilution of the anti-H-2a alloantiserum, corresponding to a cytotoxic titer of 1:16 (Fig. 3 C). The data thus suggest that low amounts of alloantibodies provide protection from parental spleen cell-induced suppression. However, treatment with a further twofold dilution of the antiserum to 1:8 had no effect on the induction of CTL suppression in the F1 recipients (Fig. 3 D).
Treatment of the F1 Mice with Monoclonal Antibodies against H-2 Subregion Determinants.
In an attempt to map the protective effect of the specific alloantisera, (B10 × B10.A)F1 mice were injected with monoclonal antibodies against different H-2 and Ia determinants. The various monoclonal reagents were adjusted to the same titer (1:25), and 0.5 ml of the diluted reagents was injected into each of the F1 mice 24 h before the injection of 15 × 10⁶ B10.A spleen cells. The potential for generation of alloreactive CTL in vitro was tested 8 d later. The data, summarized in Fig. 4, demonstrate that monoclonal reagents binding to Kk antigens (Fig. 4 B) or to I-Ek antigens (Fig. 4 C) completely abrogated B10.A spleen cell-induced suppression. In contrast, a monoclonal reagent binding to Dk antigens, an irrelevant antibody for the B10.A strain (Fig. 4 D), was ineffective.
Using the same monoclonal reagents, the effect on GvH-associated immune suppression was also tested in the (B10 × BR)F1 strain. The protocol for injection of the antibodies was identical to that used above. 9 d after inoculation of 20 × 10⁶ BR parental spleen cells, the generation of cytotoxic T cells against B10.A alloantigen was tested in vitro. Results similar to those observed in the (B10 × B10.A)F1 strain were obtained in the (B10 × BR)F1 mice when treated with the anti-Kk or anti-I-Ek monoclonal reagents (Fig. 5). In addition, injection of monoclonal antibodies against Dk antigens prior to inoculation of BR parental spleen cells also diminished CTL immune suppression (Fig. 5 D). It should be noted that this anti-Dk reagent did not protect (B10 × B10.A)F1 mice from suppression (Fig. 4 E), which illustrates the specificity of these reagents in protecting against CTL suppression. Monoclonal anti-Dd antibodies, which are not reactive with either F1 haplotype in the (B10 × BR)F1, provided no protection from suppression (Fig. 5 E), although this reagent did protect against parental suppression in the (B10 × B10.A)F1 (data not shown). These data indicate that specific antibodies directed against H-2 determinants of either haplotype protect the F1 recipients against parental spleen cell-induced suppression. However, this protective effect may not necessarily be limited only to anti-MHC antibodies.
Activation of F1 Regulatory Cells by Anti-H-2a Antiserum.
Among several possible explanations, protection against GvH-associated immunosuppression could be due to a regulatory process that involves the stimulation of a lymphoid cell population. If antibodies activate such regulatory cells, it should be possible to protect F1 mice with syngeneic lymphocytes adoptively transferred from alloantibody-treated animals. To test this, (B10 × B10.A)F1 mice were inoculated with 40 × 10⁶ spleen cells from syngeneic F1 donor mice injected with 0.1 ml anti-H-2a alloantiserum 24 h previously. The same recipients were also inoculated intravenously with 10 × 10⁶ B10.A parental spleen cells, which would normally be suppressive. The potential to generate a CTL response against BR alloantigen was tested 9 d later. Fig. 6 C shows that the alloreactivity of spleen cells from these mice was comparable to the response of the untreated controls (Fig. 6 A). In contrast, adoptive transfer of F1 spleen cells from untreated F1 donor mice, as shown in Fig. 6 D, did not protect against CTL suppression (Fig. 6 B). Thus, these data demonstrate that a regulatory cell population in the F1 host is activated by the alloantibodies, and this population protects from GvH-associated CTL suppression.
To partially characterize this regulatory cell population, similar adoptive transfer studies were performed with irradiated or T cell-depleted spleen cells from syngeneic F1 mice. These F1 spleen cell donor mice were injected with 0.5 ml diluted monoclonal anti-Kk antibodies 24 h before adoptive transfer. Thus, (B10 × B10.A)F1 recipients were inoculated with 50 × 10⁶ F1 spleen cells (from antibody-treated donors), either treated with anti-Thy-1.2 plus complement or irradiated with 2,000 rad, in addition to 10 × 10⁶ B10.A parental lymphocytes. 8 d later the CTL potential of spleen cells from these recipients was compared with that of mice inoculated with untreated F1 spleen cells from anti-Kk-injected donors. As shown in Fig. 7, the protective effect of anti-Kk-activated F1 cells was completely eliminated after T cell depletion or irradiation with 2,000 rad. Thus, the anti-H-2-activated F1 cells that counteract CTL suppression are radiosensitive T lymphocytes.
Discussion
Previous studies have shown that unirradiated F1 hybrid mice injected with parental spleen cells develop some characteristics of GvH reactivity associated with nonspecific suppression of the potential to generate a CTL response in vitro (8,9). In the present study we have attempted to protect the F1 recipients from CTL suppression by injecting mice with specific alloantibodies before inoculation of the parental spleen cells. The main findings of this study demonstrate that (a) F1 mice can be protected from CTL suppression by alloantiserum directed against either parental haplotype; (b) protection can be achieved equally well with monoclonal reagents against H-2K, D, or I-E antigens; (c) protection is only provided by specific antibodies; (d) low titers of alloantibodies are sufficient to exert protection; and (e) protection can be adoptively transferred by radiosensitive, Thy-1-positive spleen cells from syngeneic donors previously treated with specific alloantiserum. It has been previously reported that GvH disease can be decreased either by injecting the recipients (10,11,14,15) or by treating the donor cells with antirecipient alloantibodies (16,17). However, the mechanism of abrogation of GvH reactivity as assessed in those studies by splenomegaly and mortality rates is conjectural. Several explanations could account for the alloantibody-induced protection against immunosuppression. The first, covering of host major histocompatibility complex (MHC) determinants by the antibodies, which prevents their recognition by the inoculated parental effector cells, has been suggested previously (10). Such a mechanism can only account for protection by alloantibodies directed against H-2 determinants of the F1 strain that are foreign to the inoculated parental cells. This does not explain protection by alloantibodies specific for the same haplotype as the parental inoculum, unless steric hindrance of the foreign allodeterminants is considered. Furthermore, the amount of antibodies injected is probably too small to saturate all alloantigenic sites in the F1 host. A second explanation, binding of antibodies to donor MHC and thereby eliminating the relevant parental lymphocytes in vivo by complement-dependent lysis with the injected alloantibodies, is also unlikely. No evidence of circulating antibodies that could be absorbed by the inoculated parental spleen cells was obtained 24 h after serum injection.
Anti-idiotype antibodies have been reported to inhibit GvH proliferation (18) and GvH disease (19). Although the presence of anti-idiotype antibodies in the anti-MHC alloantisera or in the monoclonal reagents derived from ascites fluid is theoretically possible (20), this explanation seems unlikely, because the protocol for generation of anti-idiotypic antibodies is different from that used to raise these alloantibodies (21).
A more likely possibility is that alloantibodies activate a regulatory cell population in the F1 host, probably because protection can be adoptively transferred into syngeneic recipients. Spleen cells from mice previously treated with alloantibodies provide protection from GvH-associated suppression equal to that produced by injection of the recipients with antibodies. Therefore, it appears that such a regulatory effect is an active process triggered by antibodies specific for either haplotype expressed by the F1 mice. Since antibodies specific for both parents in the (B × A)F1 protected against suppression induced by parent A, the activation step need not be specific only for the haplotype of the parental cells inoculated. The function of this cell population in the GvH model seems to be to prevent suppression and perhaps to restore immune reactivity following severe GvH-associated immunosuppression.
GvH-associated immunosuppression is triggered by the injected parental spleen cells that recognize and react against the other parental haplotype in the F1 host (3,4). Such A-anti-B immune reactivity in the (B × A)F1 would be recognized by the host as anti-self reactivity. In the process of protecting itself against this "autoimmune" response, the entire immune system would be suppressed, because GvH-associated immunosuppression appears to be nonspecific and affects cell-mediated (8,9) as well as humoral immunity (22). To counter such drastic suppression, it could be advantageous to have a regulatory cell population capable of restoring normal immune functions. Similar regulatory networks have been reported for antibody (23) and T cell responses (24). The latter has been described in Fa rats, which could be protected against GvH by pretreatment with suboptimal doses of parental T cells. However, the regulatory system in rats is rapidly activated (within 2 d), and it is specific for the immunizing parental haplotype. In contrast, the regulatory system described in the present report is activated by antibodies, which would probably require a longer period if generated in vivo by the injected parental cells.
The present study indicates that such a regulatory cell population is activated by antibodies against host antigens. In addition, the injection of parental T cells into F1 mice can result in the production of autoantibodies, which were reported to be generated by F1 B cells (25). This phenomenon is based on the allogeneic effect, in which parental T cells can activate antibody production by F1 B cells (26). Although in the present study we have injected alloantibodies specific for F1 MHC antigens, it is possible that other antibodies against different host cell surface antigens, e.g., autoantibodies generated by F~ B cells as a result of the allogeneic effect, could reverse GvH-associated suppressed T cell immunity. Studies are in progress to test this possibility. Thus, the recovery of the suppressed immune function of F1 mice undergoing a chronic GvH reaction could be due to activation of regulatory T cells by anti-MHC antibodies produced in situ by parental B cells and/or autoantibodies produced by F1 B cells. Under more natural conditions such regulatory cells may be one component of a complex regulatory network composed of suppressor cells and regulatory cells activated by antibodies against self-components. In this model, it would be expected that the presence of autoantibodies is paralleled by low suppressor cell activity. This, in fact, is the case in the autoimmune NZB/W mouse strains (27). Such low suppressor cell activity might be due to the activation of regulatory cells (activated by autoantibodies), which serve to counteract the effect of suppressor cells. Studies are in progress to characterize the regulatory cells and to analyze the signals involved in activation of these regulatory events. A more thorough understanding of this system may be useful for treatments of GvH diseases in humans.
Summary
Injection of parental spleen cells into unirradiated F1 hybrid mice results in suppression of the potential to generate cytotoxic T lymphocyte (CTL) responses in vitro. In an attempt to protect the F1 mice from immunosuppression, the recipients were injected with antibodies specific for major histocompatibility complex (MHC)-encoded antigens of the F1 mice 24 h before inoculation of the parental spleen cells. 8-14 d later, the generation of CTL responses in vitro against H-2 alloantigens was tested. Alloantiserum directed against either parental haplotype of the F1 strain markedly diminished the suppression of CTL activity. Furthermore, monoclonal antibodies recognizing H-2 or Ia antigens protected the F1 mice from parental spleen cell-induced suppression. Although this study has been limited to reagents that recognize host H-2 determinants, these findings do not necessarily imply that protection against graft vs. host (GvH) can be achieved only with anti-MHC antibodies. However, protection was observed only by antibodies reactive with F1 antigens, and small amounts of the alloantibodies were sufficient to diminish CTL suppression. Adoptive transfer of spleen cells from syngeneic F1 mice treated with anti-H-2a alloantiserum 24 h previously provided protection equal to that of injection of the recipients with alloantibodies. The cells necessary for this effect were shown to be T cells and to be radiosensitive to 2,000 rad. This cell population is induced by antisera against F1 cell surface antigens and effectively counteracts GvH-associated immunosuppression.
"year": 1981,
"sha1": "56de555bedd531d799b722ac70116b7ea255869c",
"oa_license": "CCBYNCSA",
"oa_url": "http://jem.rupress.org/content/154/6/1922.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "56de555bedd531d799b722ac70116b7ea255869c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
2674798 | pes2o/s2orc | v3-fos-license | MixedGrad: An O(1/T) Convergence Rate Algorithm for Stochastic Smooth Optimization
It is well known that the optimal convergence rate for stochastic optimization of smooth functions is $O(1/\sqrt{T})$, which is same as stochastic optimization of Lipschitz continuous convex functions. This is in contrast to optimizing smooth functions using full gradients, which yields a convergence rate of $O(1/T^2)$. In this work, we consider a new setup for optimizing smooth functions, termed as {\bf Mixed Optimization}, which allows to access both a stochastic oracle and a full gradient oracle. Our goal is to significantly improve the convergence rate of stochastic optimization of smooth functions by having an additional small number of accesses to the full gradient oracle. We show that, with an $O(\ln T)$ calls to the full gradient oracle and an $O(T)$ calls to the stochastic oracle, the proposed mixed optimization algorithm is able to achieve an optimization error of $O(1/T)$.
Introduction
Many machine learning algorithms follow the framework of empirical risk minimization, which often can be cast into the following generic optimization problem

$$\min_{\mathbf{w} \in \mathcal{W}} \; G(\mathbf{w}) := \frac{1}{n}\sum_{i=1}^{n} g_i(\mathbf{w}), \qquad (1)$$

where $n$ is the number of training examples, $g_i(\mathbf{w})$ encodes the loss function related to the $i$th training example $(\mathbf{x}_i, y_i)$, and $\mathcal{W}$ is a bounded convex domain that is introduced to regularize the solution $\mathbf{w} \in \mathcal{W}$ (i.e., the smaller the size of $\mathcal{W}$, the stronger the regularization is). In this study, we focus on the learning problems for which the loss function $g_i(\mathbf{w})$ is smooth. Examples of smooth loss functions include least squares with $g_i(\mathbf{w}) = (y_i - \langle \mathbf{w}, \mathbf{x}_i \rangle)^2$ and logistic regression with $g_i(\mathbf{w}) = \log(1 + \exp(-y_i \langle \mathbf{w}, \mathbf{x}_i \rangle))$. Since the regularization is enforced through the restricted domain $\mathcal{W}$, we did not introduce an $\ell_2$ regularizer $\lambda \|\mathbf{w}\|^2/2$ into the optimization problem and, as a result, we do not assume the loss function to be strongly convex. We note that a small $\ell_2$ regularizer does NOT improve the convergence rate of stochastic optimization. More specifically, the convergence rate for stochastically optimizing an $\ell_2$ regularized loss function remains $O(1/\sqrt{T})$ when $\lambda = O(1/\sqrt{T})$ [9, Theorem 1], a scenario that is often encountered in real-world applications.
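To make the two smooth losses named above concrete, here is a small self-contained sketch of our own (not taken from the paper; the function names and NumPy setup are illustrative assumptions) of per-example losses $g_i$ and the averaged objective $G$:

```python
# Illustrative sketch (not from the paper): the two smooth per-example losses
# mentioned above and the averaged objective G(w) = (1/n) sum_i g_i(w).
import numpy as np

def squared_loss(w, x, y):
    """g_i(w) = (y - <w, x>)^2 and its gradient with respect to w."""
    r = y - x @ w
    return r ** 2, -2.0 * r * x

def logistic_loss(w, x, y):
    """g_i(w) = log(1 + exp(-y <w, x>)) and its gradient, with y in {-1, +1}."""
    m = y * (x @ w)
    loss = np.logaddexp(0.0, -m)          # log(1 + exp(-m)), computed stably
    grad = -y * x / (1.0 + np.exp(m))
    return loss, grad

def full_objective(w, X, Y, loss=squared_loss):
    """G(w) and its full gradient, averaged over all n examples."""
    vals, grads = zip(*(loss(w, x, y) for x, y in zip(X, Y)))
    return float(np.mean(vals)), np.mean(grads, axis=0)
```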
[Table 1. Convergence rates for optimizing Lipschitz-continuous and smooth convex functions by full gradient (GD), stochastic (SGD), and mixed optimization methods. * The convergence rate can be improved to O(1/T) when the structure of the objective function is provided.]
A preliminary approach for solving the optimization problem in (1) is the batch gradient descent (GD) algorithm [14]. It starts with some initial point, and iteratively updates the solution using the equation $\mathbf{w}_{t+1} = \Pi_{\mathcal{W}}(\mathbf{w}_t - \eta \nabla G(\mathbf{w}_t))$, where $\Pi_{\mathcal{W}}(\cdot)$ is the orthogonal projection onto the convex domain $\mathcal{W}$. It has been shown that for smooth objective functions, the convergence rate of standard GD is $O(1/T)$ [14], and can be improved to $O(1/T^2)$ by an accelerated GD algorithm [13,14,16]. The main shortcoming of the GD method is its high cost in computing the full gradient $\nabla G(\mathbf{w}_t)$ when the number of training examples is large. Stochastic gradient descent (SGD) [3,11,19] alleviates this limitation of GD by sampling one (or a small set of) examples and computing a stochastic (sub)gradient at each iteration based on the sampled examples. Since the computational cost of SGD per iteration is independent of the size of the data (i.e., n), it is usually appealing for large-scale learning and optimization.
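As a rough illustration of the two update rules just described, the following sketch (our own; the $\ell_2$-ball domain and `project_l2_ball` helper stand in for a generic projection $\Pi_\mathcal{W}$ purely for illustration) contrasts a batch GD step with an SGD step:

```python
# Hedged sketch of the projected updates described above; W is taken to be an
# l2-ball of radius R so that the projection is a simple rescaling.
import numpy as np

def project_l2_ball(w, R=1.0):
    nrm = np.linalg.norm(w)
    return w if nrm <= R else (R / nrm) * w

def gd_step(w, grad_G, eta, R=1.0):
    """Batch GD: w_{t+1} = Pi_W(w_t - eta * grad G(w_t)); one pass over all n examples."""
    return project_l2_ball(w - eta * grad_G(w), R)

def sgd_step(w, grad_g_i, eta, R=1.0):
    """SGD: same form, but grad_g_i is the gradient of one uniformly sampled loss g_i."""
    return project_l2_ball(w - eta * grad_g_i(w), R)
```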
While SGD enjoys a high computational efficiency per iteration, it suffers from a slow convergence rate for optimizing smooth functions. It has been shown that the optimal convergence rate for stochastic optimization of smooth functions is only O(1/ √ T ) [12], which is significantly worse than GD that uses the full gradients for updating the solutions. In addition, as we can see from Table 1, for general Lipschitz continuous convex functions, SGD exhibits the same convergence rate as that for the smooth functions, implying that smoothness of the loss function is essentially not very useful and can not be exploited in stochastic optimization. The slow convergence rate for stochastically optimizing smooth loss functions is mostly due to the variance in stochastic gradients: unlike the full gradient case where the norm of a gradient approaches to zero when the solution is approaching to the optimal solution, in stochastic optimization, the norm of a stochastic gradient is constant even when the solution is close to the optimal solution. It is the variance in stochastic gradients that makes the convergence rate O(1/ √ T ) unimprovable for stochastic smooth optimization [12,1].
In this study, we are interested in designing an efficient algorithm that is in the same spirit as SGD but can effectively leverage the smoothness of the loss function to achieve a significantly faster convergence rate. To this end, we consider a new setup for optimization that allows us to interplay between stochastic and deterministic gradient descent methods. In particular, we assume that the optimization algorithm has access to two oracles:
• A stochastic oracle $O_s$ that returns the loss function $g_i(\mathbf{w})$ based on the sampled training example $(\mathbf{x}_i, y_i)$ (we note that this oracle is slightly stronger than the stochastic gradient oracle, as it returns the sampled function instead of the stochastic gradient), and
• A full gradient oracle O f that returns the gradient ∇G(w) for any given solution w ∈ W.
We refer to this new setting as mixed optimization in order to distinguish it from both stochastic and full gradient optimization models. The key question we examined in this study is: Is it possible to improve the convergence rate for stochastic optimization of smooth functions by having a small number of calls to the full gradient oracle O f ?
We give an affirmative answer to this question. We show that with an additional O(ln T ) accesses to the full gradient oracle O f , the proposed algorithm, referred to as MixedGrad, can improve the convergence rate for stochastic optimization of smooth functions to O(1/T ), the same rate for stochastically optimizing a strongly convex function [9,17,21]. Our result for mixed optimization is useful for the scenario when the full gradient of the objective function can be computed relatively efficient although it is still significantly more expensive than computing a stochastic gradient. An example of such a scenario is distributed computing where the computation of full gradients can be speeded up by having it run in parallel on many machines with each machine containing a relatively small subset of the entire training data. Of course, the latency due to the communication between machines will result in an additional cost for computing the full gradient in a distributed fashion.
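One way to read the mixed optimization setting is as two callable oracles with separate budgets. The class below is our own illustrative rendering; the class name, bookkeeping counters, and the representation of a sampled loss by its gradient function are assumptions, not the paper's notation.

```python
# Hypothetical interface for the two oracles of the mixed optimization model:
# O_s returns a sampled loss function g_i, O_f returns the exact gradient of G.
import numpy as np

class MixedOracles:
    def __init__(self, per_example_grads, rng=None):
        self.grads = per_example_grads        # list of callables: grad g_i(w)
        self.rng = rng or np.random.default_rng(0)
        self.stochastic_calls = 0
        self.full_gradient_calls = 0

    def stochastic(self):
        """O_s: a uniformly sampled loss (here, its gradient function), not just one gradient value."""
        self.stochastic_calls += 1
        return self.grads[self.rng.integers(len(self.grads))]

    def full_gradient(self, w):
        """O_f: grad G(w) = (1/n) * sum_i grad g_i(w)."""
        self.full_gradient_calls += 1
        return np.mean([g(w) for g in self.grads], axis=0)
```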
Outline The rest of this paper is organized as follows. We begin in Section 2 by briefly reviewing the literature on deterministic and stochastic optimization. In Section 3, we introduce the necessary definitions and discuss the assumptions that underlie our analysis. Section 4 describes the MixedGrad algorithm and states the main result on its convergence rate. The proof of main result is given in Section 5. Finally, Section 6 concludes the paper and discusses few open questions.
Deterministic Smooth Optimization
The convergence rate of gradient-based methods usually depends on the analytical properties of the objective function to be optimized. When the objective function is strongly convex and smooth, it is well known that a simple GD method can achieve a linear convergence rate [5]. For a non-smooth Lipschitz-continuous function, the optimal rate for first-order methods is only $O(1/\sqrt{T})$ [14]. Although the $O(1/\sqrt{T})$ rate is not improvable in general, several recent studies are able to improve this rate to $O(1/T)$ by exploiting the special structure of the objective function [16,15]. In full gradient based convex optimization, smoothness is a highly desirable property. It has been shown that a simple GD achieves a convergence rate of $O(1/T)$ when the objective function is smooth, which can be further improved to $O(1/T^2)$ by using accelerated gradient methods [13,16,14].
Stochastic Smooth Optimization
Unlike optimization methods based on full gradients, most stochastic optimization methods do not exploit the smoothness assumption. In fact, it was shown in [12] that the $O(1/\sqrt{T})$ convergence rate for stochastic optimization cannot be improved even when the objective function is smooth. This classical result is further confirmed by recent studies of composite bounds for first-order optimization methods [2,10]. The smoothness of the objective function is exploited extensively in mini-batch stochastic optimization [6,7], where the goal is not to improve the convergence rate but to reduce the variance in stochastic gradients and, consequently, the number of solution updates [23]. We finally note that the smoothness assumption coupled with strong convexity is beneficial in the stochastic setting and yields geometric convergence in expectation using the Stochastic Average Gradient (SAG) and Stochastic Dual Coordinate Ascent (SDCA) algorithms proposed in [18] and [20], respectively.
Preliminaries
We use bold-face letters to denote vectors. For any two vectors w, w ′ ∈ W, we denote by w, w ′ the inner product between w and w ′ . Throughout this paper, we only consider the ℓ 2 -norm. We assume the objective function G(w) defined in (1) to be the average of n convex loss functions. The same assumption was made in [18,20]. We assume that G(w) is minimized at some w * ∈ W. Without loss of generality, we assume that W ⊂ B R , a ball of radius R. Besides convexity of individual functions, we will also assume that each g i (w) is β-smooth as formally defined below [14].
The smoothness assumption also implies that $\nabla g_i$ is Lipschitz continuous, i.e., $\|\nabla g_i(\mathbf{w}) - \nabla g_i(\mathbf{w}')\| \leq \beta \|\mathbf{w} - \mathbf{w}'\|$ for all $\mathbf{w}, \mathbf{w}' \in \mathcal{W}$. In the stochastic first-order optimization setting, instead of having direct access to $G(\mathbf{w})$, we only have access to a stochastic gradient oracle, which, given a solution $\mathbf{w} \in \mathcal{W}$, returns the gradient $\nabla g_i(\mathbf{w})$ where $i$ is sampled uniformly at random from $\{1, 2, \cdots, n\}$. The goal of stochastic optimization is to use a bounded number $T$ of oracle calls and compute a solution $\mathbf{w} \in \mathcal{W}$ such that the optimization error, $G(\mathbf{w}) - G(\mathbf{w}_*)$, is as small as possible.
In the mixed optimization model considered in this study, we first relax the stochastic oracle O s by assuming that it will return a randomly sampled loss function g i (w), instead of the gradient ∇g i (w) for a given solution w 2 . Second, we assume that the learner also has an access to the full gradient oracle O f . Our goal is to significantly improve the convergence rate of stochastic gradient descent (SGD) by making a small number of calls to the full gradient oracle O f . In particular, we show that by having only O(log T ) accesses to the full gradient oracle and O(T ) accesses to the stochastic oracle, we can tolerate the noise in stochastic gradients and attain an O(1/T ) convergence rate for optimizing smooth functions. The analysis of the proposed algorithm relies on the strong convexity of intermediate loss functions introduced to facilitate the optimization as given below.
Definition 2 (Strong convexity). A function $f(\mathbf{w})$ is said to be $\alpha$-strongly convex w.r.t. a norm $\|\cdot\|$ if there exists a constant $\alpha > 0$ (often called the modulus of strong convexity) such that, for all $\mathbf{w}, \mathbf{w}' \in \mathcal{W}$, it holds that $f(\mathbf{w}') \geq f(\mathbf{w}) + \langle \nabla f(\mathbf{w}), \mathbf{w}' - \mathbf{w} \rangle + \frac{\alpha}{2}\|\mathbf{w}' - \mathbf{w}\|^2$.

[Algorithm 1: MixedGrad. For each epoch k, call the full gradient oracle $O_f$ for $\nabla G(\bar{\mathbf{w}}_k)$; then, for each inner iteration, call the stochastic oracle $O_s$ to return a randomly selected loss function, compute the stochastic gradient $\hat{\mathbf{g}}_t$, and update the solution; at the end of the epoch, update $\bar{\mathbf{w}}_{k+1}$.]
Mixed Stochastic/Deterministic Gradient Descent
We now turn to describe the proposed mixed optimization algorithm and state its convergence rate. The key idea is to introduce a ℓ 2 regularizer into the objective function, and gradually reduce the amount of regularization over the iterations. The detailed steps of MixedGrad algorithm are shown in Algorithm 1. It follows the epoch gradient descent algorithm proposed in [9] for stochastically minimizing strongly convex functions and divides the optimization process into m epochs. Throughout the paper, we will use the subscript for the index of each epoch, and the superscript for the index of iterations within each epoch. Below, we describe the key idea behind MixedGrad.
Let $\bar{\mathbf{w}}_k$ be the solution obtained before the kth epoch, which is initialized to be 0 for the first epoch. Instead of searching for $\mathbf{w}_*$ at the kth epoch, our goal is to find $\mathbf{w}_* - \bar{\mathbf{w}}_k$, resulting in the optimization problem (2) for the kth epoch, where $\Delta_k$ specifies the domain size of $\mathbf{w}$ and $\lambda_k$ is the regularization parameter introduced at the kth epoch. By introducing the $\ell_2$ regularizer, the objective function in (2) becomes strongly convex, making it possible to exploit the techniques for stochastic optimization of strongly convex functions in order to improve the convergence rate. The domain size $\Delta_k$ and the regularization parameter $\lambda_k$ are initialized to be $\Delta_1 > 0$ and $\lambda_1 > 0$, respectively, and are reduced by a constant factor $\gamma > 1$ every epoch, i.e., $\Delta_k = \Delta_1/\gamma^{k-1}$ and $\lambda_k = \lambda_1/\gamma^{k-1}$. By removing the constant term $\lambda_k \|\bar{\mathbf{w}}_k\|^2/2$ from the objective function in (2), we obtain the optimization problem (3) for the kth epoch, where $\mathcal{W}_k = \{\mathbf{w} : \mathbf{w} + \bar{\mathbf{w}}_k \in \mathcal{W}, \|\mathbf{w}\| \leq \Delta_k\}$. We rewrite the objective function $F_k(\mathbf{w})$ as in (4), in terms of the modified losses $\widehat{g}^{\,k}_i(\mathbf{w})$. The main reason for using $\widehat{g}^{\,k}_i(\mathbf{w})$ instead of $g_i(\mathbf{w})$ is to tolerate the variance in the stochastic gradients. To see this, from the smoothness assumption on $g_i(\mathbf{w})$ we obtain an upper bound on the norm of $\nabla \widehat{g}^{\,k}_i(\mathbf{w})$. As a result, since $\|\mathbf{w}\| \leq \Delta_k$ and $\Delta_k$ shrinks over epochs, $\mathbf{w}$ will approach zero over epochs and, consequently, $\nabla \widehat{g}^{\,k}_i(\mathbf{w})$ approaches zero, which allows us to effectively control the variance in stochastic gradients, a key to improving the convergence of stochastic optimization for smooth functions to $O(1/T)$.
Using $F_k(\mathbf{w})$ in (4), at the tth iteration of the kth epoch, we call the stochastic oracle $O_s$ to randomly select a loss function $g_{i^k_t}(\mathbf{w})$ and update the solution according to (5), where $\Pi_{\mathcal{W}_k}(\cdot)$ projects the solution into the domain $\mathcal{W}_k$ that shrinks over epochs. At the end of each epoch, we compute the average solution $\tilde{\mathbf{w}}_k$ and update the solution from $\bar{\mathbf{w}}_k$ to $\bar{\mathbf{w}}_{k+1} = \bar{\mathbf{w}}_k + \tilde{\mathbf{w}}_k$. Similar to the epoch gradient descent algorithm [9], we increase the number of iterations by a constant factor $\gamma^2$ every epoch, i.e., $T_k = T_1 \gamma^{2(k-1)}$.
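Because the displayed equations for the per-epoch objective and update are garbled in this copy, the sketch below (ours) shows one plausible instantiation of the epoch structure just described: shrinking domain and regularizer, a growing inner-loop budget, one full-gradient call per epoch, and a variance-reduced stochastic step. The particular form of the inner stochastic gradient and the step size are our assumptions, chosen to match the prose, not a verbatim transcription of Equation (5).

```python
# Hedged sketch of the MixedGrad epoch structure (assumed details are commented).
import numpy as np

def mixed_grad(loss_grads, full_grad, dim, m=8, T1=100,
               gamma=2.0, delta1=1.0, lam1=1.0, seed=0):
    rng = np.random.default_rng(seed)
    w_bar = np.zeros(dim)                        # \bar{w}_k, initialized to 0
    for k in range(m):
        delta_k = delta1 / gamma ** k            # shrinking domain size
        lam_k = lam1 / gamma ** k                # shrinking l2 regularizer
        T_k = int(T1 * gamma ** (2 * k))         # growing number of inner iterations
        eta_k = 1.0 / np.sqrt(T_k)               # step size of order 1/sqrt(T_k) (assumption)
        g_bar = full_grad(w_bar)                 # the single full-gradient call of the epoch
        w = np.zeros(dim)
        w_sum = np.zeros(dim)
        for _ in range(T_k):
            gi = loss_grads[rng.integers(len(loss_grads))]    # stochastic oracle call
            # Assumed variance-reduced gradient of the regularized epoch objective:
            ghat = lam_k * (w + w_bar) + g_bar + gi(w + w_bar) - gi(w_bar)
            w = w - eta_k * ghat
            nrm = np.linalg.norm(w)              # projection onto ||w|| <= delta_k
            if nrm > delta_k:                    # (the extra constraint w + w_bar in W is omitted)
                w *= delta_k / nrm
            w_sum += w
        w_bar = w_bar + w_sum / T_k              # recentre around the epoch average
    return w_bar
```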
In order to perform the stochastic gradient update given in (5), we need to compute the vector $\mathbf{g}_k$ at the beginning of the kth epoch, which requires an access to the full gradient oracle $O_f$. It is easy to count that the number of accesses to the full gradient oracle $O_f$ is $m$, and the number of accesses to the stochastic oracle $O_s$ is $\sum_{k=1}^{m} T_k = T_1 \sum_{k=1}^{m} \gamma^{2(k-1)} = T_1 \frac{\gamma^{2m}-1}{\gamma^2-1}$. Thus, if the total number of accesses to the stochastic gradient oracle is $T$, the number of accesses to the full gradient oracle required by Algorithm 1 is $O(\ln T)$, consistent with our goal of making a small number of calls to the full gradient oracle. The theorem below shows that for smooth objective functions, by having $O(\ln T)$ accesses to the full gradient oracle $O_f$ and $O(T)$ accesses to the stochastic oracle $O_s$, the MixedGrad algorithm achieves an optimization error of $O(1/T)$.
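A quick numerical check of this accounting (our own arithmetic, using the stated $T_k = T_1\gamma^{2(k-1)}$) shows how the stochastic budget grows geometrically while the full-gradient budget grows only linearly in the number of epochs, i.e., logarithmically in $T$:

```python
# Oracle-call accounting for m epochs with T_k = T1 * gamma^(2(k-1)).
def oracle_call_counts(m, T1=100, gamma=2.0):
    stochastic = sum(int(T1 * gamma ** (2 * (k - 1))) for k in range(1, m + 1))
    return {"full_gradient_calls": m, "stochastic_calls": stochastic}

for m in (4, 8, 12):
    print(m, oracle_call_counts(m))   # stochastic calls grow ~4x per extra epoch
```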
Convergence Analysis
Now we turn to proving the main theorem. The proof will be given in a series of lemmas and theorems where the proof of few are given in the Appendix. The proof of main theorem is based on induction. To this end, let w k * be the optimal solution that minimizes F k (w) defined in (3). The key to our analysis is show that when w k * ≤ ∆ k , with a high probability, it holds that is the optimal solution that minimizes F k+1 (w), as revealed by the following theorem.
Theorem 2. Let w k
* and w k+1 * be the optimal solutions that minimize F k (w) and F k+1 (w), respectively, and w k+1 be the average solution obtained at the end of kth epoch of MixedGrad algorithm. Suppose w k * ≤ ∆ k . By setting the step size η k = 1/ 2β √ 3T k , we have, with a probability 1 − 2δ, provided that δ ≤ e −9/2 and Taking this statement as given for the moment, we proceed with the proof of Theorem 1, returning later to establish the claim stated in Theorem 2.
Proof of Theorem 1. It is easy to check that for the first epoch, using the fact W ∈ B R , we have * be the optimal solution that minimizes F m (w) and let w m+1 * be the optimal solution obtained in the last epoch. Using Theorem 1, with a probability 1 − 2mδ, we have Hence, where the last step uses the fact w m+1 * where in the last step holds under the condition γ ≥ 2. By combining above inequalities, we obtain Our final goal is to relate F m (w) to min w G(w). Since w m * minimizes F m (w), for any w * ∈ arg min G(w), we have Thus, the key to bound |F(w m * ) − G(w * )| is to bound w * −w m . To this end, after the first m epoches, we run Algorithm 1 with full gradients. Letw m+1 ,w m+2 , . . . be the sequence of solutions generated by Algorithm 1 after the first m epochs. For this sequence of solutions, Theorem 2 will hold deterministically as we deploy the full gradient for updating, i.e., w k ≤ ∆ k for any k ≥ m + 1. Since we reduce λ k exponentially, λ k will approach to zero and therefore the sequence {w k } ∞ k=m+1 will converge to w * , one of the optimal solutions that minimize G(w). Since w * is the limit of sequence {w k } ∞ k=m+1 and w k ≤ ∆ k for any k ≥ m + 1, we have where the last step follows from the condition γ ≥ 2. Thus, By combining the bounds in (6) and (7), we have, with a probability 1 − 2mδ, We complete the proof by plugging in the stated values for γ, λ 1 and ∆ 1 .
Proof of Theorem 2
For the convenience of discussion, we drop the subscript k for epoch just to simplify our notation. Let λ = λ k , T = T k , ∆ = ∆ k , g = g k . Letw =w k be the solution obtained before the start of the epoch k, and letw ′ =w k+1 be the solution obtained after running through the kth epoch. We denote by F(w) and F ′ (w) the objective functions F k (w) and F k+1 (w). They are given by Let w * = w k * and w ′ * = w k+1 * be the optimal solutions that minimize F(w) and F ′ (w) over the domain W k and W k+1 , respectively. Under the assumption that w * ≤ ∆, our goal is to show The following lemma bounds F(w t ) − F( w * ) where the proof is deferred to Appendix.
By adding the inequality in Lemma 1 over all iterations, using the factw 1 = 0, we have Since g = ∇F(0) and and therefore The following lemmas bound A T , B T and C T .
The following lemma upper bounds B T and C T . The proof is based on the Bernstein's inequality for Martingales [4] and is given in the Appendix.
Lemma 3.
With a probability 1 − 2δ, we have Using Lemmas 2 and 3, by substituting the uppers bounds for A T , B T , and C T in (10), with a probability 1 − 2δ, we obtain and using the fact w = T +1 i=1 w t /(T + 1), we have Thus, when we have, with a probability 1 − 2δ, The next lemma relates w ′ * to w − w * .
Conclusions
We presented a new paradigm of optimization, termed as mixed optimization, that aims to improve the convergence rate of stochastic optimization by making a small number of calls to the full gradient oracle. We proposed the MixedGrad algorithm and showed that it is able to achieve an O(1/T ) convergence rate by accessing stochastic and full gradient oracles for O(T ) and O(log T ) times, respectively. We showed that the MixedGrad algorithm is able to exploit the smoothness of the function, which is believed to be not very useful in stochastic optimization.
In the future, we would like to examine the optimality of our algorithm, namely if it is possible to achieve a better convergence rate for stochastic optimization of smooth function using O(ln T ) accesses to the full gradient oracle. Furthermore, to alleviate the computational cost caused by O(log T ) accesses to the full gradient oracle, it would be interesting to empirically evaluate the proposed algorithm in a distributed framework by distributing the individual functions among processors to parallelize the full gradient computation at the beginning of each epoch which requires O(log T ) communications between the processors in total.
A Proof of Lemma 1
Before proving the lemmas we recall the definition of F(w), F ′ (w), g, and g i (w) as: We also recall that w * and w ′ * are the optimal solutions that minimize F(w) and F ′ (w) over the domain W k and W k+1 , respectively. Our goal is to show that: For each iteration t in the kth epoch, from the strong convexity of F(w) we have where F(w) = 1 n n i=1 g i (w). We now try to upper bound the first term in the right hand side. Since where the first inequality follows from the fact that w t+1 in the minimizer of the following optimization problem: Therefore, we obtain as desired.
B Proof of Lemmas 2 and 3
We now turn to prove the upper bound on A T as:
Proof. (of Lemma 2) We bound A T as
where the second inequality follows (a + b) 2 ≤ 2(a 2 + b 2 ) and the last inequality follows from the smoothness assumption.
We now turn to proving the upper bounds for B T and C T , i.e., with a probability 1 − 2δ, we have B T ≤ β∆ 2 ln 1 δ + 2T ln 1 δ and C T ≤ 2β∆ 2 ln 1 δ + 2T ln 1 δ The proof is based on the Berstein inequality for Martingales [4] which is restated here for completeness. Theorem 3. (Bernstein's inequality for martingales). Let X 1 , . . . , X n be a bounded martingale difference sequence with respect to the filtration F = (F i ) 1≤i≤n and with X i ≤ K. Let be the associated martingale. Denote the sum of the conditional variances by Equipped with this theorem, we are now in a position to upper bound B T and C T as follows.
Proof. (of Lemma 3) Denote X t = ∇ g it ( w * ) − ∇ F( w * ), w t − w * . We have that the conditional expectation of X t , given randomness in previous rounds, is E t−1 [X t ] = 0. We now apply Theorem 3 to the sum of martingale differences. In particular, we have, with a probability 1 − e −t , Hence, with a probability 1 − δ, we have B T ≤ β∆ 2 ln 1 δ + 2T ln 1 δ Similar, for C T , we have, with a probability 1 − δ, C T ≤ 2β∆ 2 ln 1 δ + 2T ln 1 δ | 2013-07-26T16:27:23.000Z | 2013-07-26T00:00:00.000 | {
"year": 2013,
"sha1": "1525b554ca968d515974dd3734da5994d7353995",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1525b554ca968d515974dd3734da5994d7353995",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
15207514 | pes2o/s2orc | v3-fos-license | Joint Mechanism That Mimics Elastic Characteristics in Human Running
Analysis of human running has revealed that the motion of the human leg can be modeled by a compression spring because the joints of the leg behave like a torsion spring in the stance phase. In this paper, we describe the development of a joint mechanism that mimics the elastic characteristics of the joints of the stance leg. The knee was equipped with a mechanism comprising two laminated leaf springs made of carbon fiber-reinforced plastic for adjusting the joint stiffness and a worm gear in order to achieve active movement. Using this mechanism, we were able to achieve joint stiffness mimicking that of a human knee joint that can be adjusted by varying the effective length of one of the laminated leaf springs. The equation proposed for calculating the joint stiffness considers the difference between the position of the fixed point of the leaf spring and the position of the rotational center of the joint. We evaluated the performance of the laminated leaf spring and the effectiveness of the proposed equation for joint stiffness. We were able to make a bipedal robot run with one leg using pelvic oscillation for storing energy produced by the resonance related to leg elasticity.
Introduction
We previously researched the development of a bipedal humanoid robot that can mimic various characteristics of human walking; this robot was used for investigating human mechanisms and human motion control [1][2][3]. For improving the performance of this robot and using this robot for not only walking, but also hopping and running, we have initiated the development of a new bipedal humanoid robot that can run like a human. In human sciences and sports sciences, researchers usually perform motion capture experiments to realize human motions. In motion capture experiments, however, motions that pose a risk of injury to human subjects cannot be studied owing to ethical concerns [4], despite the possibility of improving those motions through coordinated training. If a robot that can mimic human hopping and running were developed, it would become possible to mimic other human motions, as well. Furthermore, if a robot could perform motions that would improve sports performance, but would be dangerous if a human being were to perform them, it would be possible to identify more effective human motions.
Relevant studies in human sciences have identified some characteristics of human running, such as the following:
• A stance leg acts like a linear spring, and a human leg can be modeled as a spring-loaded inverted pendulum (SLIP) [5][6][7].
• The knee and ankle joints of a stance leg act like torsion springs [8,9].
• The knee joint stiffness of the stance leg changes with running speed [8,10].
• Rapid knee bending occurs in the swing phase to avoid contact of the foot with the ground [11].
• Pelvis rotation in the frontal plane increases jumping force [12].
• Moment compensation is accomplished using the upper body and arms [13].
In human running, the joints of the leg require more than 1000 W of power [11,[14][15][16], which is greater than the power of the actuator found in an ordinary life-sized humanoid robot [1,[17][18][19][20].To exert a high output during the stance phase, humans utilize joint stiffness for storing the kinetic energy that the robot has during the flight phase.Therefore, we focused on the leg stiffness and joint stiffness of the stance leg as the characteristics that are absolutely necessary for attaining jumping power.There are some studies on running using humanoid robots, but few robots can mimic this characteristic.Some small humanoids can run, but the dynamics of these robots are different from those of humans [21].Life-sized humanoid robots [17,18], such as ASIMO [19] and Toyota's humanoid robot [20], do not have human-like leg elasticity.One athletic robot has a human-like musculoskeletal system in the leg and elastic parts in the foot, as with an artificial leg, but this robot's ankle joint does not mimic human ankle joint stiffness [22].MABEL [23] also has variable-stiffness joints for running and succeeded in running with axial constraints on the Y-axis, but it cannot vary its joint stiffness within a range equivalent to that of human joint stiffness.Elastic joint mechanisms were developed for walking humanoids, such as COMAN [24][25][26], so that they can interact with the environment safely or mimic human mechanisms [27,28].However, the joint mechanisms cannot output enough torque for a running life-sized humanoid robot.Some artificial legs have been developed for achieving natural locomotion; however, they mimic only ankle stiffness, not knee joint stiffness, and the amount of joint stiffness is not equal to that of a human [29][30][31][32].
Our intention in this study was to develop a joint mechanism with variable joint stiffness that mimics human joint stiffness and that can be utilized for running.In addition, the knee joint should bend in the swing phase to avoid contact between the foot and the ground.The ankle joint should produce a torque that is the same as that of the knee; however, the joint stiffness of the ankle does not change according to the running speed, and the ankle joint does not bend like a knee joint during the flight phase.Thus, the knee joint mechanism should incorporate joint stiffness and should move actively during the flight phase.We developed a mechanism comprising two laminated leaf springs made of carbon fiber-reinforced plastic (CFRP) for adjusting joint stiffness and a worm gear that can change the angle between two laminated leaf springs with an actuator for achieving active movement.We evaluated the laminated leaf spring and then performed a hopping experiment to evaluate the effectiveness of the proposed joint mechanism.
This paper is organized as follows.In Section 2, we describe the design of the joint mechanism that mimics human joint stiffness.In Section 3, we present and discuss experimental results.Finally, in Section 4, we present conclusions and propose future work.
Requirements for a Joint Mechanism Based on Human Running
During the stance phase of human running, the knee and ankle joints alternately lengthen and shrink like a spring, whereas the hip joint, unlike a spring, only lengthens [7,8,14,15]. The leg stiffness is a result of knee and ankle joint stiffness, and joint stiffness is important for attaining jumping force. As mentioned above, leg stiffness and joint stiffness vary during the flight phase according to running speed [8,10]. We began by determining the requirements for the robotic joint based on human running data; we considered running speeds between 4 m/s and 10 m/s. The requirements for achieving the joint stiffness of a running human leg were obtained from the results of our preliminary analysis and published running data (see Table 1) [12,14,16]. From these data, we conclude that the knee joint should produce a torque of 177 Nm in the stance phase, bend 2.7 rad in the swing phase, and be able to vary its stiffness as needed within the range of human knee joint stiffness, from 300 Nm/rad at a running speed of 4 m/s to 600 Nm/rad at a running speed of 10 m/s, during the swing phase, which has a duration of 0.4 s. Based on knee bending, we calculated a required angular velocity of 6.7 rad/s.
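As a quick sanity check of these numbers (plain arithmetic on the values quoted above, not new data), the required knee angular velocity follows directly from the bend angle and the swing-phase duration:

```python
# Knee-joint requirements quoted in the text; the angular-velocity figure is
# simply the swing-phase bend divided by the swing-phase duration.
knee_bend_rad = 2.7           # required knee bending in the swing phase
swing_phase_s = 0.4           # swing-phase duration
required_torque_Nm = 177      # stance-phase knee torque
stiffness_range = (300, 600)  # Nm/rad, for running speeds of 4 m/s to 10 m/s

required_angular_velocity = knee_bend_rad / swing_phase_s
print(required_angular_velocity)  # ~6.75 rad/s, consistent with the 6.7 rad/s requirement
```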
Design of a Joint Mechanism Mimicking the Joint Stiffness of a Human Leg
There are several methods of mimicking joint stiffness, including using an actuator to control the joint like a spring, implementing a spring, and using a combination of these methods. In human running, each joint of the leg requires more than 1000 W [11,[14][15][16], whereas the output of the DC motors used in the legs of some humanoid robots is much lower: approximately 150 W [1,[17][18][19][20]. If motors with higher power were used, their size and weight would make it difficult to mimic a human leg. Although hydraulic actuators can be used to produce the required high output, they require large, heavy pumps. It is therefore very difficult to realize human running using existing actuators. Consequently, we considered the possibility of mimicking this characteristic by using elastic bodies, such as a compression coil spring, a torsion spring, or a leaf spring. To mimic the variation of joint stiffness with running speed, we used a leaf spring, with which we can easily adjust the stiffness by varying the distance between a supporting point and a load point [33]. Figure 1 is a schematic of the joint stiffness adjustment mechanism. The load point is fixed on Link A, and the leaf spring is fixed on Link B via the joint axis. When a force is applied to Link A, the force is transmitted to the leaf spring through the load point. Link A then rotates in accordance with the deformation of the leaf spring. In this way, we can adjust the joint stiffness by changing the position of the load point. This stiffness can be calculated as K_j = M/Δθ (Equation (1)), where K_j is the joint stiffness, M is the joint torque, and Δθ is the joint displacement. The joint displacement follows from the deflection δ of the leaf spring and its effective length L (Equation (2)); when the displacement is small, it can be approximated as Δθ ≈ δ/L (Equation (3)). The deflection follows the bending relation of the leaf spring (Equation (4)), where E is the Young's modulus of the leaf spring, I = bt³/12 is the area moment of inertia of the leaf spring, b is the width of the leaf spring, and t is the thickness of the leaf spring. According to these equations, the joint stiffness can be approximated in terms of E, b, t, and the effective length L (Equation (5)). The joint stiffness adjustment mechanism allows for changing the stiffness of the joint. However, when we attempted to mimic low joint stiffness, it was difficult to install the leaf spring in a manner that is consistent with human physical structure. We need to position the load point far from the supporting point to mimic low joint stiffness, and furthermore, the leaf spring cannot withstand the force in the stance phase if its thickness is reduced to decrease the stiffness. Thus, this leaf spring joint stiffness adjustment mechanism alone cannot achieve the required joint stiffness. To resolve this problem, we incorporated an additional leaf spring into the mechanism. The two leaf springs were implemented in series through the active actuator in the joint to realize low joint stiffness. This arrangement makes it possible to adjust the stiffness of the joint over a wide range. Moreover, the approximation of the displacement becomes more accurate because the deflection of each leaf spring becomes half of the displacement of the joint mechanism. Therefore, we used the above simplification (Equation (3)) for calculating joint stiffness. The theoretical formula for joint stiffness using two leaf springs is the series combination 1/K'_j = 1/K_adjustable + 1/K_fixed (Equation (6)), where K'_j is the joint stiffness, K_adjustable is the stiffness of the leaf spring whose effective length we can change, and K_fixed is the stiffness of the leaf spring whose
effective length is fixed.Furthermore, we devised a new joint mechanism in which the angle between two leaf springs can be changed by an actuator in order to achieve active movement.The two leaf springs transmit the joint torque to an upper link and a lower link through the load point, which can be moved to change the effective length (see Figure 2a).When the active joint moves, the joint rotates (see Figure 2b).When the external torque is applied, the leaf springs bend, and the joint angle also changes (see Figure 2c).If this mechanism is to act like a torsion spring, the angle between the two leaf springs should be fixed, and only the leaf springs should bend to produce the joint torque while the robot is standing.To accomplish this, we used a worm gear to which the torque from an input shaft to an output shaft is transmitted; not all of the torque from the output shaft to the input shaft is transmitted to the worm gear.The transmission efficiency was changed according to the lead angle γ of the worm gear.The theoretical formulas for transmission efficiency from the input shaft to the output shaft ( 1 η ) and from the output shaft to the input shaft ( 2 η ) are as follows [34]: The deflection is expressed as follows:
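To make the stiffness adjustment concrete, the following sketch evaluates a plausible form of Equations (4)-(6) for a cantilevered leaf spring: δ ≈ FL³/(3EI) with I = bt³/12, which gives K_j ≈ 3EI/L for a single spring and a series combination for the two-spring arrangement. The exact equations were lost in extraction, so the formulas, dimensions and material values below are illustrative assumptions, not the authors' numbers.

```python
# Illustrative sketch (assumed formulas and values, not the paper's exact equations).
# Single leaf spring treated as a cantilever: delta = F*L^3/(3*E*I), I = b*t^3/12.
# With M = F*L and dtheta ~ delta/L, the joint stiffness is K_j ~ 3*E*I/L,
# so shortening the effective length L raises the stiffness.

def leaf_spring_stiffness(E, b, t, L):
    """Approximate joint stiffness [Nm/rad] of one leaf spring with effective length L [m]."""
    I = b * t**3 / 12.0          # area moment of inertia [m^4]
    return 3.0 * E * I / L       # K_j = M/dtheta ~ 3EI/L

def two_spring_stiffness(K_adjustable, K_fixed):
    """Series combination of the adjustable and fixed leaf springs (Equation (6) analogue)."""
    return K_adjustable * K_fixed / (K_adjustable + K_fixed)

if __name__ == "__main__":
    E = 120e9            # hypothetical Young's modulus for a CFRP layup [Pa]
    b, t = 0.05, 0.006   # hypothetical width and thickness [m]
    K_fixed = leaf_spring_stiffness(E, b, t, L=0.20)
    for L in (0.10, 0.15, 0.20, 0.25):   # sweep the effective length of the adjustable spring
        K_adj = leaf_spring_stiffness(E, b, t, L)
        print(f"L = {L:.2f} m: K_adj = {K_adj:8.1f} Nm/rad, "
              f"series K'_j = {two_spring_stiffness(K_adj, K_fixed):8.1f} Nm/rad")
```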
Furthermore, we devised a new joint mechanism in which the angle between the two leaf springs can be changed by an actuator in order to achieve active movement. The two leaf springs transmit the joint torque to an upper link and a lower link through the load point, which can be moved to change the effective length (see Figure 2a). When the active joint moves, the joint rotates (see Figure 2b). When an external torque is applied, the leaf springs bend, and the joint angle also changes (see Figure 2c). If this mechanism is to act like a torsion spring, the angle between the two leaf springs should be fixed, and only the leaf springs should bend to produce the joint torque while the robot is standing. To accomplish this, we used a worm gear, in which the torque from the input shaft to the output shaft is transmitted, whereas not all of the torque from the output shaft to the input shaft is transmitted. The transmission efficiency changes according to the lead angle γ of the worm gear. The theoretical formulas for the transmission efficiency from the input shaft to the output shaft (η_1, Equation (7)) and from the output shaft to the input shaft (η_2, Equation (8)) are given in [34], where ρ is a parameter with a value of 0.14, determined by the material of which the worm gear is made and the angular velocity during running. These formulas are plotted in Figure 3. The lead angle was determined to be 8.73° considering the feasibility of manufacturing and the back-drivability. Using these formulas, we designed the worm gear so that it would be possible to fix the angle between the leaf springs and move actively by using the motor, which can output 150 W and is small and light enough to be implemented in the leg. The torque transmission efficiency from the input shaft to the output shaft of the developed mechanism is 50%, and that from the output shaft to the input shaft is 2.6%. This knee mechanism (see Figure 4) can fix the angle between the two leaf springs in the stance phase and actively control the joint angle.
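The transmission-efficiency relations themselves were not recovered from the source. The snippet below uses the textbook worm-gear approximations η₁ = tan γ / tan(γ + ρ′) and η₂ = tan(γ − ρ′) / tan γ, with ρ′ = arctan ρ the friction angle, purely as a hedged stand-in for Equations (7) and (8). With γ = 8.73° and ρ = 0.14 the forward value comes out near the 50% quoted above; the back-drive efficiency of the built mechanism (2.6%) will additionally reflect losses that these ideal formulas do not capture.

```python
import math

# Textbook worm-gear efficiency approximations (assumed here; the paper's exact
# Equations (7)-(8) from reference [34] were not recovered).
def worm_gear_efficiency(lead_angle_deg, friction_coeff):
    gamma = math.radians(lead_angle_deg)
    rho = math.atan(friction_coeff)                          # friction angle
    eta_forward = math.tan(gamma) / math.tan(gamma + rho)    # input shaft -> output shaft
    eta_backward = math.tan(gamma - rho) / math.tan(gamma)   # output shaft -> input shaft
    return eta_forward, eta_backward

eta1, eta2 = worm_gear_efficiency(8.73, 0.14)
print(f"forward efficiency   ~ {eta1:.2f}")   # roughly 0.5 for these values
print(f"back-drive efficiency ~ {eta2:.2f}")  # small: the worm gear is nearly self-locking
```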
To vary the joint stiffness within the range of a human leg joint, the load point must move 130 mm in 0.4 s. In addition, the mechanism should withstand a load of 10,000 N in the direction perpendicular to the leaf spring and a load of 750 N in the direction parallel to the leaf spring. There are several ways of moving the load point, such as using a ball screw or a rack-and-pinion system. When a rack-and-pinion system is used, the actuator needs more power because it moves together with the load point. Therefore, we decided to use a ball screw. When a ball screw is used, the large load in the direction parallel to the leaf spring produces a large moment that acts on the actuator. To make it possible for the actuator to withstand this large load and moment, we implemented an electrical brake to control the moment and a linear guide for the load in the direction perpendicular to the leaf spring (see Figure 5). Thanks to this mechanism, the load point can be adjusted during the flight phase like a human.
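As a quick feasibility check of the 130 mm in 0.4 s requirement, the sketch below converts it into the ball-screw speed that the drive would have to sustain. The screw lead is a made-up placeholder, since the actual component is not specified in the recovered text.

```python
# Back-of-the-envelope check of the load-point actuation requirement (illustrative only).
stroke_m = 0.130          # required travel of the load point [m]
move_time_s = 0.4         # available time (roughly one flight phase) [s]
screw_lead_m = 0.010      # hypothetical ball-screw lead [m/rev]; not from the paper

linear_speed = stroke_m / move_time_s                 # ~0.325 m/s average
screw_speed_rps = linear_speed / screw_lead_m         # revolutions per second
print(f"average linear speed : {linear_speed:.3f} m/s")
print(f"required screw speed : {screw_speed_rps:.1f} rev/s (~{screw_speed_rps*60:.0f} rpm)")
```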
Joint Stiffness Equation Considering the Fixed Point
The position of the fixed point of the leaf spring is different from the rotational center of the joint in the developed joint mechanism (see Figure 6).Because the difference between the rotational center and the fixed point of the leaf spring influences the moment of the leaf spring, we modified the equation for the stiffness of the mechanism.
When the position of the fixed point and that of the rotational center are different, the positions can be expressed in terms of their two-dimensional coordinates. The coordinates of the rotational center are set as the origin of the coordinate system. The coordinates of the fixed point are (a, h), and those of the load point are (a + l, h). When a moment M is applied to the rotational center, a force F_ls is applied to the load point of the leaf spring; this force is given by Equation (9). The moment M_ls applied to the leaf spring is M_ls = L F_ls (10). Based on these equations, the moment bending the leaf spring is given by Equation (11). According to Equation (11), when the positions of the rotational center and the fixed point of the leaf spring are the same, i.e., a equals zero, the moment acting on the leaf spring is the same as that acting on the joint. However, when a > 0, the moment acting on the leaf spring is smaller than that acting on the joint. This means that the deflection of the leaf spring δ_ls also becomes smaller (Equation (12)). For calculating the joint stiffness with Equations (1) and (3), the deflection δ perpendicular to the line that passes through the rotational center and the load point is expressed by Equation (13), where θ_load is the angle between the X direction and the line that passes through the rotational center and the load point. According to the modified deflection given in Equation (13), the modified theoretical formula for the joint stiffness is Equation (14), where K'_j is the modified joint stiffness. This indicates that a greater difference between the rotational center and the fixed point of the leaf spring leads to increased joint stiffness. We took this into consideration in the design of the leaf spring.
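Equations (9)-(14) were not fully recovered, so the following sketch only illustrates the stated trend with a simplified geometry: the spring height h and the small-angle correction through θ_load are ignored, the force on the load point is taken as F_ls ≈ M/(a + L), and the spring is treated as a cantilever of effective length L. Under those assumptions the modified stiffness reduces to K'_j ≈ 3EI(a + L)²/L³, which grows with the offset a, consistent with the conclusion above.

```python
# Simplified illustration of the fixed-point offset effect (assumed geometry; h and
# theta_load are neglected, so this is not the paper's exact Equations (9)-(14)).
def offset_joint_stiffness(E, b, t, L, a):
    I = b * t**3 / 12.0
    # F_ls ~ M/(a+L); spring tip deflection delta_ls = F_ls*L^3/(3*E*I);
    # joint rotation dtheta ~ delta_ls/(a+L)  =>  K'_j = M/dtheta ~ 3*E*I*(a+L)^2/L^3
    return 3.0 * E * I * (a + L) ** 2 / L ** 3

E, b, t, L = 120e9, 0.05, 0.006, 0.20   # same hypothetical spring as before
for a in (0.0, 0.02, 0.05):
    print(f"offset a = {a*1000:4.0f} mm -> K'_j = {offset_joint_stiffness(E, b, t, L, a):8.1f} Nm/rad")
```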
Laminated Leaf Spring Made of Carbon Fiber-Reinforced Plastic
In order to incorporate the leaf springs into the developed joint mechanism, the leaf springs must be able to withstand a large load while the robot is running. One option is to make the leaf spring out of iron. Such a leaf spring could withstand a large load, but it would be very heavy. If an iron leaf spring were implemented, the joint mechanism would not be able to mimic the mass of a human leg.
To resolve this problem, we used a leaf spring made of CFRP, which is extremely strong yet light. The specific strength of CFRP (2457 kNm/kg) is much higher than that of iron (254 kNm/kg), and the density of CFRP (1.5 g/cm³) is much lower than that of iron (7.8 g/cm³). On account of these characteristics, CFRP is used in some prosthetic legs [29]. However, when the CFRP leaf spring was made small enough to be installed into a robotic leg equivalent in size to a human leg, the stress on the leaf spring exceeded its strength. To improve the strength, the thickness or width of the leaf spring should be increased. However, when the width is increased, it becomes difficult to incorporate the leaf spring into the leg. On the other hand, when the thickness is increased, the deflection of the leaf spring becomes smaller and the joint stiffness higher than that of a human leg. To resolve this problem, we stacked two leaf springs, one upon another. The maximum stress σ is expressed by Equation (15). In contrast, the deflection of the laminated leaf spring and the maximum stress on one leaf spring in the laminated leaf spring are expressed by Equations (16) and (17), where t1 is the thickness of each leaf spring and n is the number of leaf springs. Thus, t1 is expressed by Equation (18). Considering Equation (18), Equations (16) and (17) can be rewritten as Equations (19) and (20). Based on Equations (19) and (20), when the number of leaf springs increases, the deflection and the stress also increase. However, the number of leaf springs influences the deflection more than the stress. Thus, we can adjust the total thickness to modify the strength and the number of leaf springs to modify the joint stiffness. To increase the strength of the leaf spring and decrease the joint stiffness, we used these formulas to design a laminated leaf spring made of two CFRP leaf springs. Compared to an iron leaf spring whose joint stiffness is the same as that of the laminated CFRP leaf spring, the laminated leaf spring is thicker and almost the same in length and width, and the mass of the laminated leaf spring (200 g) is much lower than that of the iron leaf spring (600 g) (see Table 2). Furthermore, the mass of the joint mechanism using the CFRP laminated leaf springs is 3000 g, compared to 3800 g for the joint with iron leaf springs. Thus, the use of CFRP laminated leaf springs reduces the mass of the joint by approximately 21%.
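Equations (15)-(20) did not survive extraction; the relations below are a plausible reconstruction for n identical, unbonded leaves of total thickness t = n t₁ sharing a tip load, and they reproduce the stated trend that lamination affects the deflection (∝ n²) more strongly than the stress (∝ n).

```latex
% Hedged reconstruction, assuming unbonded leaves of equal thickness sharing the tip load.
\begin{align}
  \sigma &= \frac{6\,F L}{b t^{2}}, &
  \delta &= \frac{4\,F L^{3}}{E b t^{3}}
  &&\text{(single leaf of thickness } t\text{)}\\[4pt]
  \sigma_{n} &= \frac{6\,(F/n) L}{b t_{1}^{2}} = n\,\frac{6 F L}{b t^{2}}, &
  \delta_{n} &= \frac{4\,(F/n) L^{3}}{E b t_{1}^{3}} = n^{2}\,\frac{4 F L^{3}}{E b t^{3}},
  &&\text{with } t_{1} = t/n .
\end{align}
```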
Implementation of the Joint Mechanism
We designed a robotic leg that incorporates the developed joint mechanism. The developed leg has a knee mechanism comprising two leaf springs, the worm gear and the joint stiffness adjustment mechanism, as well as an ankle comprising two leaf springs. The joint stiffness of the ankle does not vary as widely as that of the knee joint (see Table 1), and the ankle's range of motion during the swing phase, approximately 30°, is more restricted than that of the knee, approximately 60° [7]. Therefore, in order to keep the mass of the ankle more consistent with the mass of a human ankle, we did not implement the worm gear and the joint stiffness adjustment mechanism in the ankle joint. In the foot, we implemented a rubber hemisphere at the end of each toe for point grounding.
In addition, the developed leg was implemented with a pelvis mechanism (see Figure 7a).We used 150-W DC motors, timing belts and harmonic drives to actuate the pelvis roll joint and hip joints.To perform human-like motions, the robot, which weighs 60 kg, must be approximately the same size as a human [35].The weight of the robot's upper body is designed such that the location of the center of mass and the moment of inertia about the center of mass are consistent with those of the human body [36].The mass of the robot is close to that of a human, and the height is similar to that of a human's chest.Moreover, the mass of each part of the robot is similar to the mass of the corresponding part of the human body.This was possible because the variable stiffness actuator mechanism was made lighter by using the worm gear and the CFRP-laminated leaf springs.Table 3 describes the configuration of the robot.This robot has nine actuators, can move its pelvis in the same way a human does and can jump because it can store energy via leg elasticity and use it efficiently via resonance based on pelvic oscillation.Robot motion was restricted to the vertical and horizontal directions using a developed guide (see Figure 7b).The guide has two passive joints, and it was connected to the robot's body to ensure that the robot moves around the guide.
Verification of the CFRP-Laminated Leaf Spring
To verify the stiffness of the CFRP-laminated leaf spring, we subjected it to a load test.We developed a test fixture for the leaf spring and applied a load in the vertical direction using a load testing machine (see Figure 8).The leaf spring is attached to the lower part of the test fixture, and the upper part of the test fixture can move freely in the vertical direction.When the load testing machine applies a load, the leaf spring is loaded through the upper part of the test fixture.By changing the horizontal position of the lower part of the test fixture, we can change the effective length of the leaf spring.We measured the applied load and the deflection of the leaf spring.To evaluate the loading capacity of the leaf spring, we applied loads as high as 177 Nm, which is the torque applied to a human knee joint during running.
Figure 9 presents experimental results and theoretical values for the deflection of the laminated leaf spring, a single leaf spring from the laminated leaf spring and a single leaf spring that is as thick as the laminated leaf spring. The theoretical values are not included if the leaf spring was not able to withstand the load. As the results show, the laminated leaf spring was able to withstand 177 Nm, and the mean value of the measured stiffness of the laminated leaf spring, 650 Nm/rad, was close to the theoretical value of 610 Nm/rad. It is assumed that the difference between the measured value and the theoretical value is caused by approximating the joint displacement.
The stiffness of the laminated leaf spring was lower than that of the single leaf spring with the thickness equal to the thickness of the laminated leaf spring.With the developed joint mechanism, we can increase the joint stiffness by decreasing the effective length of the leaf spring or decrease the joint stiffness by increasing the effective length.However, we cannot implement a leaf spring that is longer than a human thigh or shank.Thus, the lower joint stiffness offered by the laminated leaf spring is beneficial for the developed joint mechanism.The above discussion confirms that the laminated leaf spring is advantageous in terms of loading capacity and size of the joint mechanism.
Verification of Joint Stiffness
We conducted an experiment to evaluate the effectiveness of the joint mechanism comprising two laminated leaf springs in mimicking human knee joint stiffness. In this experiment, we made the ankle joint passive in order to exclude any influence from the ankle. The robot was lifted and then lowered to apply to the leg a vertical downward force proportional to the mass of the robot. To determine the knee joint stiffness, we measured the knee joint angle using an encoder attached to the knee joint, and we measured the downward force using a force sensor located on the floor. The joint torque was calculated from the knee joint angle displacement, thigh length and downward force. In this experiment, the robotic motion was restricted to the vertical direction using linear guides.
The experimental results are summarized in Table 4.As the results show, the range of the joint stiffness of the developed joint is wider than that of a human knee joint.In addition, the theoretical value was almost the same as the measured value by taking into account the difference between the positions of the fixed point of the leaf spring and the rotational center of the joint mentioned in Section 2.3.
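The stiffness values in Table 4 were not recovered, but the identification step described above can be summarized in a few lines: given logged knee angles and the torques computed from the floor-reaction force and thigh length, the joint stiffness is the slope of the torque-angle relation. The torque model below (force times thigh length times the sine of the knee flexion) is a simplified lever-arm assumption, not the authors' exact formula, and the measurement arrays are placeholders.

```python
import numpy as np

def knee_stiffness(knee_angle_rad, vertical_force_N, thigh_length_m):
    """Estimate joint stiffness [Nm/rad] as the slope of torque vs. angle (least squares).

    The torque model tau = F * l_thigh * sin(theta) is a simplified assumption used
    here only for illustration.
    """
    torque = vertical_force_N * thigh_length_m * np.sin(knee_angle_rad)
    slope, _intercept = np.polyfit(knee_angle_rad, torque, 1)
    return slope

# Placeholder arrays standing in for encoder / force-sensor logs (not measured data).
theta = np.deg2rad(np.array([5.0, 8.0, 11.0, 14.0, 17.0]))
force = np.array([150.0, 260.0, 370.0, 480.0, 600.0])   # [N], illustrative only
print(f"estimated knee stiffness ~ {knee_stiffness(theta, force, thigh_length_m=0.40):.0f} Nm/rad")
```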
Hopping Experiment
To confirm that the developed robot can use its joint elasticity for running, we performed a hopping experiment and measured the mass displacement and the flight time, in order to verify that the developed joint can be used to attain jumping power. We previously developed a running control method using the resonance related to pelvic movement and leg elasticity [37], and we used that method in this experiment (Figure 10). The robot moved its pelvis in the stance phase to attain a jumping force with a jumping height controller. Moreover, the robot changed its leg joint angles for stabilization using a ground reaction force estimation and controlled the running speed with a running speed controller in the flight phase. The experimental conditions are listed in Table 5. The amplitude of the pelvic oscillation and the joint stiffness were selected based on human running data. In this experiment, the robot initially stood, then started to move its pelvis according to the pelvic oscillation control method. When the robot was able to jump, it moved its pelvis to the landing angle by the next landing and moved its hip pitch joint according to the running speed control method, with a reference running speed of 0.2 m/s. In the running speed control, we used the value of the leg length calculated according to the link length and the joint angles of the leg when the robot landed. The gain for the running speed control was determined experimentally.
Figure 11 presents photographs of the running experiment, and Figure 12 depicts the vertical displacement of the center of mass of the robot. In Figure 12, the orange area indicates that the robot was off the ground. The robot started its pelvic oscillation and started to hop and run after a few oscillations. This indicates that the joint mechanism can withstand the large torque that occurs during the stance phase of running. During the flight phase, the robot used the running speed control. Based on the experimental results, the time of the stance phase was approximately 270 ms, and the time of the flight phase was approximately 120 ms. The running robot can bend and stretch its knee within 100 ms during the flight phase to achieve alternate bipedal running. The time of the stance phase in human running is approximately 260 ms, and that of the flight phase is approximately 100 ms [7]. The robot's gait timing was similar to human gait timing. Moreover, the maximum power that the joint mechanism output in the hopping experiment was approximately 1000 W, which is similar to the human data [11,14-16]. It is much greater than that of ordinary joint mechanisms for humanoids, which can output approximately 150 W at most.
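As a sanity check on the reported gait timing, the flight time alone fixes the hop height and step frequency under a simple ballistic assumption (no aerodynamic effects); the short calculation below is an estimate derived from those reported times, not a measured result.

```python
import math

g = 9.81
t_flight = 0.120   # flight time reported above [s]
t_stance = 0.270   # stance time reported above [s]

hop_height = g * t_flight**2 / 8.0          # CoM rise during a symmetric ballistic flight
step_frequency = 1.0 / (t_flight + t_stance)
print(f"ballistic hop height ~ {hop_height*1000:.0f} mm")   # ~18 mm
print(f"step frequency       ~ {step_frequency:.2f} Hz")    # ~2.6 Hz
```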
The proposed mechanism would be advantageous if the robot could run at a higher speed.However, this is difficult because of a lack of power in the hip joint.For knee and ankle joints, we achieved high power output by mimicking human joint stiffness.However, it is difficult to achieve high power output in a hip joint by using the developed mechanism, because the hip joint of a human does not move like a spring.We plan to develop a hip joint that can achieve high power output; nevertheless, in this study, we confirmed that the performance of the developed joint fulfills the requirements based on human running data and is adequate for higher speed running in terms of energy storage capacity, maximum output torque, maximum output power, maximum deflection angle, movable range and angular velocity.The specifications of the developed joint mechanism are listed in Table 6.
In this experiment, we implemented several control methods, such as the foot placement control and the ground reaction force estimation, as in our previous study [37]. However, to achieve more stable running or running at a higher speed, we consider that upper body movement is needed for stabilization. This idea is derived from humans, who are considered to use their trunk and arms for stabilization during running [13].
Conclusions and Future Work
Our long-term goal is mimicking human running with a whole-body robot. However, this involves various challenging problems, such as power shortage, stabilization, and so on. Therefore, in this paper, we reported a solution to the power shortage by mimicking human joint stiffness. We described the development of a robotic joint that incorporates a joint stiffness adjustment mechanism that uses two laminated leaf springs made of CFRP and a worm gear to mimic the joint stiffness of a human leg, and we incorporated this joint mechanism into the leg of a bipedal robot. With the new mechanism, it is possible to adjust the joint stiffness by changing the effective length of the leaf springs, and the joint stiffness can be calculated using a proposed equation, which takes into account the difference between the positions of the fixed point of the leaf spring and the rotational center of the joint. The CFRP-laminated leaf spring was lighter than an equivalent iron leaf spring. We confirmed the effectiveness of the laminated leaf spring in terms of loading capacity and size, and we verified that the developed joint mechanism can mimic the joint stiffness of a human leg. We also confirmed the effectiveness of the proposed equation for calculating the joint stiffness according to the difference between the positions of the fixed point of the leaf spring and the rotational center of the joint. The developed robot achieved hopping via resonance, which confirms that the developed joint mechanism can be used for storing energy for jumping. This energy-storage mechanism is based on human motion analysis and improves the performance of the robot, and it could also be applied to other humanoid robots or new prosthetic legs. We intend to develop, in the near future, an upper body that will allow us to construct a new full-body running robot that can mimic various characteristics of human running, for example stabilization using the upper body.
Figure 1. Schematic of the joint stiffness adjustment mechanism.
Figure 2. Schematic of the knee mechanism comprising two leaf springs.
Figure 3. Influence of lead angle on torque transmission efficiency.
Figure 4. CAD model for the knee mechanism comprising two leaf springs and a worm gear.
Figure 5. CAD model for the joint stiffness adjustment mechanism.
Figure 6. Influence of the difference between the rotational center and the fixed point of the leaf spring.
Figure 7. Developed robot used in the experiments: (a) photograph of the robot; (b) degrees of freedom configuration.
Figure 8. Test fixture used to measure the deflection of the leaf springs under applied vertical loads.
Figure 9. Theoretical and measured deflection of leaf springs in relation to the magnitude of the applied vertical load.
Figure 10. Block diagram of the control system in the hopping experiment.
Figure 12. Mass vertical displacement in the hopping experiment.
Table 1 .
Joint stiffness for human running.
Table 6 .
Summary of the developed joint mechanism. | 2016-03-22T00:56:01.885Z | 2016-01-25T00:00:00.000 | {
"year": 2016,
"sha1": "8ad5e67bf49721f23c962a157d26044c16bc78db",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-1702/4/1/5/pdf?version=1453812929",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "8ad5e67bf49721f23c962a157d26044c16bc78db",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
233262450 | pes2o/s2orc | v3-fos-license | Simplified Predictive Stator Current Phase Angle Control of Induction Motor With a Reference Manipulation Technique
Finite control set model predictive control (FCS-MPC) is a simple method and has an appropriate dynamic response for drive applications. Applying additional control objectives, e.g., the maximum torque per ampere (MTPA) criterion, is easy in FCS-MPC because of its characteristics. A direct application of FCS-MPC to MTPA is the predictive direct angle control method. Though this method eased the MTPA process, a good result is highly sensitive to the proper selection of the weighting factor. Furthermore, finding the best phase angle needs a complicated optimization process. In this paper, the application of simplified predictive control is proposed for angle control. With the proposed method, not only is the weighting factor eliminated, but the constraints of the motor are also considered in the control strategy. In this way, the phase angle is automatically controlled at the proper value according to the torque while tedious computation is avoided. Therefore, the proposed method is valid in a wide range of operating points while no optimization process is performed due to changes in speed and torque. This proposed method is evaluated by simulations and experiments.
I. INTRODUCTION
Finite control set model predictive control has made a lot of progress in different fields, and because of its simplicity and good dynamic response, there is a lot of interest in studying it [1]-[3]. The performance of this method is based on minimizing the cost function and considering the discrete nature of the inverter to select the most appropriate voltage vector [4]-[8]. Good dynamic response and controllable switching losses are the advantages of this method over vector control [9]. One of the features of this method is the capability of considering multi-objective cost functions. One of the objectives that can be considered in the cost function is the maximum torque per ampere (MTPA) criterion [10].
The MTPA is a method to reduce the copper losses. It can increase efficiency, especially when the copper loss predominates [11]. In this method, the torque is obtained with the current vector of minimum magnitude [12].
In [10], [13]-[15], the FCS-MPC method is applied to the MTPA in different ways. The main differences between these methods are the type and the number of weighting factors.
In [13], the MTPA procedure is considered for the calculation of the references. The references are calculated based on the operating region. The operating region is divided into the flux-increased and flux-limited regions. The MTPA is performed by considering these regions. The transition between those regions is also covered. In this method, the weighting factor in the cost function is a weighting matrix. This weighting matrix is calculated from optimization techniques and linear matrix inequalities. The complexity of the state reference calculation and weighting factor tuning are the drawbacks of this method.
In [14]-[16], the cost function is multi-objective. One of these objectives is dedicated to MTPA control. This method avoids the problem of lack of coordination in the cascaded form of MTPA and predictive control. In [14], [16], three objectives form the cost function. The first part is the torque control. Instead of flux control, the second objective is the MTPA criterion. The third part is a limitation objective which is applied to avoid wrong convergence. In [15], the first objective is current control and the second one is the current magnitude. This is a direct application of MTPA which needs less computation. The drawback of this method is the coordination of several weighting factors in the cost function.
To overcome the problem of several weighting factors, a cost function with fewer objectives was introduced as predictive direct angle control in [10]. Compared to the methods in [13]-[15], predictive direct angle control is the most compact combination of MTPA and FCS-MPC. In this method, the cost function has two objectives. The first is the tracking error of the torque and the second is the error of the current angle. In each period, to minimize the cost function, the torque and the current angle are predicted for all voltage vectors that can be applied to the inverter. The reference angle of the current in the cost function is equal to the MTPA angle. The MTPA angle for the induction motor is 45° [17]. Though this method is much easier compared to the several-objective methods, the disadvantage of this method is that a weighting factor is still needed. The presence of even one weighting factor can affect the quality of the results because it may be dependent on the operating point and motor parameters. Furthermore, the phase angle equal to 45° is not the proper reference when the electromagnetic torque reaches the flux limitation borders of the motor. These disadvantages are due to the inherent problems of the multi-objective FCS-MPC method.
Lately, the simplified FCS-MPC method has been introduced, which is a new form of the FCS-MPC method and reduces the problems of the multi-objective FCS-MPC method. The cost function in this method is single-objective, and the weighting factor is eliminated. This advantage is achieved at the cost of the lack of other constraints in the cost function. Two different types of this method have been proposed to date, i.e., the flux-vector-based cost function and the voltage-vector-based cost function.
In [18]-[20], the torque is controlled through the flux angle. Therefore, the tracking error in the cost function is just the flux vector. In these methods, the flux vector must be predicted for all applicable voltage vectors. These values are compared to the calculated flux vector reference via a single-objective cost function. The flux vector reference controls the torque automatically. The amount of computation is still high in these methods.
In [1], [21], [22], the methods are based on dead-beat control. In these methods, one reference voltage vector is calculated to control the torque and the flux magnitude in each interval. The voltage vector reference is directly examined in the cost function. This feature reduces the computation significantly.
Therefore, the simplified FCS-MPC based on dead-beat control has received more attention due to its simplicity and lower computation. On the other hand, to increase the features of the simplified FCS-MPC method, criteria such as the MTPA method can be added. In [23], the simplified FCS-MPC with the MTPA method is proposed. In this method, the MTPA trajectory is used to calculate the optimum flux magnitude reference. Although the weighting factor has been eliminated, the computation is still complicated because of the MTPA process for the optimum flux calculation and load angle prediction.
Therefore, there is a research gap for applying the MTPA technique in predictive control. The simplified predictive control needs flux optimization, and the predictive angle control needs weighting factor optimization. Furthermore, the proper phase angle is not considered for all operating points. In this research, the voltage reference of the simplified predictive control is calculated to automatically control the optimum phase angle. The prominent feature of the proposed method is the elimination of the weighting factor from the cost function, which eliminates the need for tedious pre-simulations for tuning the weighting factor. Also, automatic manipulation of the phase angle reference away from 45° is achieved by considering the flux limitation based on the electromagnetic torque. Thus, there is no need to calculate the optimal flux or optimal phase angle.
II. PREDICTIVE DIRECT ANGLE CONTROL
In this method, the MTPA is applied to the FCS-MPC with the property that the flux is controlled indirectly. The cost function of this method consists of the errors of the torque and the angle of the current.
In this method, the flux is controlled automatically through the control of the current angle. The current angle in rotor flux oriented frame is controlled at the reference which will minimize the current magnitude [10]. Thus, the MTPA goal is fulfilled.
In the induction motor, the minimum current magnitude is obtained when the angle between the rotor flux vector and the stator current vector is equal to 45° [17]. In this way, this angle is selected as the reference of the current angle in the cost function, which causes the current magnitude to be minimized in each period. To perform this idea, the cost function is given by (1), where T*_em is the reference of the torque, T_em,i,k+1 and α_r,i,k+1 are the predicted torque and the phase angle of the current, respectively, i is the index of the seven applicable voltage vectors of the inverter, and Q is the weighting factor that is used to coordinate the current angle error and the torque error.
In [10], T_em,i,k+1 and α_r,i,k+1 are expressed by (2) and (3), where p is the number of pole pairs, and L_m and L_r are the mutual and rotor inductances, respectively. λ_r,k+1 and I_s,k+1 are the predicted rotor flux and stator current vectors at instant k+1, respectively. Note that (2) and (3) are valid in any reference frame and any time instant. They are used at the k+1 time instant in order to form the prediction model. In (2) and (3), I_s,k+1 and λ_r,k+1 are calculated by (4) and (5), where σ = 1 − L_m²/(L_s L_r), I_s,k and λ_s,k are the estimated stator current and flux, respectively, L_s is the stator inductance, T_s is the sampling interval, τ_r is the rotor time constant, and ω_r is the rotor angular frequency. Note that the proof is presented in the appendix.
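Equations (1)-(5) did not survive extraction. The sketch below therefore uses the standard rotor-flux-based torque and forward-Euler prediction expressions that are commonly used for FCS-MPC of induction machines, together with a two-term cost whose weighting factor multiplies the torque error (that placement is inferred from the numerical example in Section III); treat it as a hedged illustration of the structure rather than the paper's verbatim equations.

```python
import numpy as np

def predict_and_cost(i_s, psi_s, v_candidates, T_ref, Q, par, Ts):
    """Evaluate the two-term predictive direct angle control cost for each candidate voltage.

    Standard complex-vector IM relations are assumed (not the paper's exact (4)-(5)):
      psi_s[k+1] = psi_s + Ts*(v - Rs*i_s)
      i_s[k+1]   = (psi_s[k+1] - (Lm/Lr)*psi_r) / (sigma*Ls)   # psi_r taken ~constant over Ts
    """
    Rs, Ls, Lr, Lm, p = par["Rs"], par["Ls"], par["Lr"], par["Lm"], par["p"]
    sigma = 1.0 - Lm**2 / (Ls * Lr)
    psi_r = (Lr / Lm) * (psi_s - sigma * Ls * i_s)      # rotor flux estimate (complex)
    costs = []
    for v in v_candidates:
        psi_s1 = psi_s + Ts * (v - Rs * i_s)                      # stator flux prediction
        i_s1 = (psi_s1 - (Lm / Lr) * psi_r) / (sigma * Ls)        # current prediction
        T1 = 1.5 * p * (Lm / Lr) * np.imag(np.conj(psi_r) * i_s1)  # electromagnetic torque
        alpha1 = np.angle(i_s1) - np.angle(psi_r)                  # current angle w.r.t. rotor flux
        costs.append(Q * abs(T_ref - T1) + abs(np.pi / 4 - alpha1))  # Q on the torque error (inferred)
    return int(np.argmin(costs)), costs
```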
Though the cost function is simpler than that of the previous methods with several objectives due to the MTPA, tuning even one weighting factor needs pre-simulations. Furthermore, tuning the weighting factor to coordinate these two particular objectives is harder because both the angle and the torque are fast variables, while the flux is slower in conventional FCS-MPC. On the other hand, calculating the phase angle difference for seven vectors increases the needed computational power. Also, in some cases the phase angle equal to 45° is not the best solution; e.g., in the very low torque condition the magnetizing component of the current will be smaller than the needed current, and in the rated torque condition the core will be saturated. In this research, these drawbacks are improved by using the simplified predictive technique.
III. PROPOSED SIMPLIFIED PREDICTIVE DIRECT ANGLE CONTROL A. CONCEPTION
The concept of controlling the torque and the current phase angle is applied similarly to the basic direct angle control, but how the idea is applied is different for the proposed method. In the proposed method, the reference voltage vector that will fulfill the torque and phase angle control is predicted in each interval. Then, a voltage-based single-objective cost function is used to choose the switching state. Therefore, the weighting factor is eliminated by this method. Furthermore, the prediction stage does not need to be performed seven times, which reduces the needed computational power. The needed prediction stage is the calculation of the voltage vector that brings the torque to the reference value and keeps the current phase angle at 45° in the rotor flux oriented frame. By this method, the upper and lower limitations of the flux can be considered before the voltage calculation. Based on (2), the torque equation in the d-q frame can be rewritten as (6) [10]. To consider the MTPA method in the induction motor, condition (7) should be fulfilled. Note that the automatic flux limitations are not considered at this stage but they will be noticed in the next section.
To satisfy the above equation and express it based on the stator current and the current angle, the current angle must be equal to 45°. So, I_sd = I_s cos 45° (8) and I_sq = I_s cos 45° (9). By putting (8) and (9) in (6), the torque equation becomes (10). On the other hand, according to the flux-current equations of the induction machine, the stator flux can be calculated in terms of the stator current and the rotor flux as in (11). Then, by transferring (11) to the rotating frame, it can be expressed in a separated (d-q) form.
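The equations referenced as (6)-(11) were lost in extraction. Under standard rotor-flux orientation (λ_rq = 0 and λ_rd = L_m I_sd in steady state), a plausible reconstruction that is consistent with the surrounding text is:

```latex
% Hedged reconstruction; standard rotor-flux-oriented induction machine relations are assumed.
\begin{align}
  T_{em} &= \tfrac{3}{2}\,p\,\frac{L_m}{L_r}\,\lambda_{rd}\,I_{sq}
          = \tfrac{3}{2}\,p\,\frac{L_m^{2}}{L_r}\,I_{sd}\,I_{sq}, &&\text{(torque in the d--q frame, cf. (6))}\\
  I_{sd} &= I_{sq} \;\Longleftrightarrow\; \alpha_r = 45^\circ, &&\text{(MTPA condition, cf. (7))}\\
  I_{sd} &= I_{sq} = I_s\cos 45^\circ, &&\text{(cf. (8), (9))}\\
  T_{em} &= \tfrac{3}{4}\,p\,\frac{L_m^{2}}{L_r}\,I_s^{2}, &&\text{(cf. (10))}\\
  \boldsymbol{\lambda}_s &= \sigma L_s\,\mathbf{I}_s + \frac{L_m}{L_r}\,\boldsymbol{\lambda}_r .
          &&\text{(stator flux, cf. (11))}
\end{align}
```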
C. CONTROL LOOP
By expressing (10) at the k+2 instant and setting the torque reference equal to T_em,k+2, the predicted stator current at this instant can be calculated as in (14).
where T*_em is the torque reference, which is calculated by the proportional-integral speed controller. Note that a current limiter should be used after this prediction: if the predicted current is higher than the maximum current of the motor, it will be set to the maximum value.
If the current conditions (8) and (9) are considered, and the predicted current phase angle is set to 45°, the predicted stator flux components are given by (17) and (18).
In this equation, λ s,k+1 is calculated by (19) which is in the stationary frame, and λ s,k+2 is calculated by (17) and (18) which should be transformed into the stationary frame as below: where λ r,k+1 is the phase angle of the rotor flux. The rotor flux is predicted by (5). Eventually, the decomposed form (21) is applied to predict the proper voltage references in the stationary frame. * Finally, in each period, the optimal voltage vector is selected by calculating the following cost function for seven possible switching states and selecting the minimum.
To further understand how the proposed method works, its block diagram is shown in Fig. 2. In this figure, the MTPA block is the application of (14) and the flux prediction block uses (17) and (18). As shown in the block diagram, there is no need for flux optimization, and no direct flux control is needed: the flux is regulated automatically by the voltage reference prediction. The general computational process is shown as a flowchart in Fig. 3. To show the difference between the voltage selection of the proposed method and that of the previous angle control method [10], a numerical example is provided. Fig. 4 shows the numerical example for one voltage-selection situation. In this figure, the stator and rotor flux vectors, as well as the current vector, are depicted in the αβ frame. In this frame, I_s,k+1 = 0.2289 + j0.9056, λ_s,k+1 = 0.7371 + j0.1539, λ_r,k+1 = 0.7269 + j0.0488. Note that normalized variables are used. The current vector trajectory that satisfies the torque reference generation is shown in this figure.
In the previous method, all possible voltage vectors are used to predict the next current phase angle and the torque. The phase angle should be close to π/4 and the torque should be close to the reference. Considering Fig. 4, it is clear that the errors are approximately acceptable for two candidates, i.e., j = 1 and j = 6. The phase angle error for j = 1 is 0.0526 and the torque error is 0.2672. Also, for j = 6, the phase angle error is 0.1445 and the torque error is 0.2026. The errors are only slightly different for these two options, so the weighting factor selection is essential in this case. For example, if the equally weighted cost function is used in (1), C_1 = 0.3198 and C_6 = 0.3472, so j = 1 is selected. However, if the weighting factor Q = 2 is selected, C_1 = 0.587 and C_6 = 0.5498 and j = 6 is selected. A small numerical check of this sensitivity is given below.
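The sensitivity to the weighting factor in this example can be reproduced directly from the errors quoted above. The cost form C_j = Q·|torque error| + |angle error| is assumed here only because it reproduces the quoted numbers; the exact normalization used in (1) may differ.

# (torque error, phase-angle error) for the two candidate vectors of the example
errors = {1: (0.2672, 0.0526),
          6: (0.2026, 0.1445)}

def cost(j, Q):
    t_err, a_err = errors[j]
    return Q * t_err + a_err

for Q in (1.0, 2.0):
    c1, c6 = cost(1, Q), cost(6, Q)
    print(f"Q={Q}: C1={c1:.4f}, C6={c6:.4f} -> j={1 if c1 < c6 else 6}")
# Q=1 reproduces C1=0.3198, C6=0.3471 (vector 1 selected);
# Q=2 reproduces C1=0.5870, C6=0.5497 (vector 6 selected).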
On the other hand, in the proposed method, the next flux reference is predicted, λ_s,k+2 = 0.7862 + j0.1028. The only voltage vector that can produce the predicted flux reference is j = 1; j = 6 is not a close solution, and no weighting factor has to be tuned.
D. REFERENCE MANIPULATION
In the proposed method, the reference value for the phase angle of the stator current is set to 45°. There are some cases in which this angle can result in improper performance. In the following, the cases in which reference manipulation is performed are explained.
1) VERY LIGHT TORQUE CONDITION
The 45° reference is not effective in very light torque conditions because both current components will be close to zero and the machine would not be magnetized. To avoid this condition, the minimum value of the direct component of the flux in (17) is limited [24].
where T_nom is the rated torque of the motor. Based on (22-a), there is a minimum value of the stator flux for every torque value. If the value calculated by (17) is lower than this, the minimum value is selected. Also, the minimum torque accepted in (24-a) is 5% of the rated torque, so even in the no-load condition the minimum value of the flux would be 0.4σL_rL_s²T_nom/(3pL_m²). Note that the 5% rated-torque value can be tuned experimentally as a tuning knob for the final regulation of the responses. Therefore, in light-load conditions, the phase angle would be lower than π/4.
2) NEAR RATED TORQUE CONDITION
When the torque is close to the rated value, the 45° phase angle results in a large value of the direct component of the flux, the outcome of which is core saturation. To avoid this, the maximum value of the d-axis flux is limited.
where I_m,nom is the rated value of the magnetizing current, which is considered the maximum allowed value of the d-axis stator current. Furthermore, the magnitude of the stator flux vector should also be limited in order to avoid saturation [24].
where I_max and v_max are the maximum values of the stator current and voltage. These values are also tunable based on the hardware limits and the required response. If the limit in (26) is reached, the q-component of the stator flux is kept at the value calculated by (18) and its d-component is set to the following value.
In this situation, the phase angle is set to a value higher than 45° because the q-component is larger than the d-component. Fig. 5 summarizes the reference manipulation algorithm, and a compact sketch of the same logic is given below. In this algorithm, first the minimum torque is limited. Afterward, the predicted d-axis flux is checked: if it is lower than the minimum value, it is set to the minimum value; if it is higher than the maximum value, it is limited to the maximum value. At the end, the magnitude of the stator flux is checked and limited in order to avoid core saturation.
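The reference-manipulation sequence of Fig. 5 reduces to a short chain of clamping operations. This is an illustrative sketch only: the helper arguments stand in for (24)-(27), the 5% rated-torque floor is the tuning knob mentioned above, and all numeric values are placeholders rather than values from the paper.

import math

def manipulate_flux_reference(T_ref, lam_d_pred, lam_q_pred, T_nom,
                              lam_d_min_of, lam_d_max, lam_mag_max):
    T_used = max(abs(T_ref), 0.05 * T_nom)            # 5% rated-torque floor
    lam_d = max(lam_d_pred, lam_d_min_of(T_used))     # minimum d-axis flux, cf. (24)
    lam_d = min(lam_d, lam_d_max)                     # maximum d-axis flux, cf. (25)
    if math.hypot(lam_d, lam_q_pred) > lam_mag_max:   # flux-magnitude ceiling, cf. (26)
        # keep the q-component, recompute the d-component, cf. (27)
        lam_d = math.sqrt(max(lam_mag_max**2 - lam_q_pred**2, 0.0))
    return lam_d, lam_q_pred

print(manipulate_flux_reference(T_ref=8.0, lam_d_pred=0.9, lam_q_pred=0.6,
                                T_nom=9.0, lam_d_min_of=lambda T: 0.2,
                                lam_d_max=0.8, lam_mag_max=0.95))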
IV. RESULTS AND DISCUSSION
The proposed method is evaluated by simulations and experiments. Table 1 shows the specifications of the motor used in the laboratory tests; the model of the same motor is used in the simulations. A two-level inverter based on the PM25RSB120 package is used in the laboratory setup, and the inverter is controlled by a TMS320F28335 digital signal processor. To obtain the results, the following conditions are considered for the simulations and laboratory tests: 1) The switching interval and sampling time are set to 100 µs. 2) The switching operation is performed in the middle of the sampling interval to reduce noise and delay time.
3) The saturation is considered in the simulated model.
A. SIMULATION RESULTS
Simulations are performed to compare the proposed method with the previous direct angle control method [10]. The importance of eliminating the weighting factor in the proposed method is studied; in other words, the effect of the weighting factor on the result is investigated. For this purpose, the direct angle method was simulated for four values of the weighting factor. The study is repeated for three load torque conditions, i.e., 5%, 50%, and 100% of rated torque. The responses are shown in Fig. 6, Fig. 7, and Fig. 8, respectively, and the result of the proposed method is also shown for the same condition in each figure. In all simulations, the speed reference is 50% of the rated value. These figures show clearly that the ripples are sensitive to the weighting factor for all torque conditions. Note that the cost function is normalized to ease the comparison, meaning that Q = 1 gives equal control weight to the torque and the phase angle. When the weighting factor is set to a small value, i.e., Q = 0.01, which means the phase angle control is less important than the torque control, the torque ripple is also very high because the machine cannot be controlled by the torque control alone. The best result for 50% rated torque was achieved with Q = 0.71; for 5% rated torque it occurred at Q = 0.39, and for 100% rated torque at Q = 0.89. These results are better than those of the equally weighted form (Q = 1). If the weighting factor is Q = 1.5, all of the ripples increase again. On the other hand, the results of the proposed method show that the ripples are similar to the best case of the previous direct angle control method, but without the need to tune the weighting factor.
Another conclusion drawn from Fig. 8 concerns the performance at the near-rated torque condition. This is the case in which the manipulated boundaries of the proposed method act to avoid saturation in the motor core. It is seen that the phase angle for the proposed method is increased to 51° because the d-axis component of the flux was limited based on (25) and (26). The previous method, however, tried to keep the phase angle at 45°, which results in a larger d-axis flux component and, consequently, motor saturation. Moreover, a phase angle of 45° is not the minimum-current solution when the core is saturated, and finding the optimum phase angle requires tedious offline and online calculations for the basic method. This problem is avoided by the automatic increase of the phase angle in the proposed method. Thus, the stator current amplitude is slightly smaller than in the previous method.
B. EXPERIMENTAL RESULTS
Experiments are also performed to verify the performance of the proposed method. Fig. 9 shows the experimental setup. Fig. 10 shows the responses of the proposed method and the previous predictive direct angle control method [10] at 50% rated speed and 50% nominal load torque. In Fig. 10-a, it can be seen that the torque rises within a few steps, which indicates a fast dynamic torque response. The manipulated reference of the phase angle shows that it is increased to a value of more than 45° during the torque transient in order to provide the needed torque without saturating the core. After the rise of the torque, no reference manipulation occurred because the torque was below the rated value. The limit of the magnetization current is 1.44λ_n/L_s = 1.063 A based on (25). Therefore, the torque response contains an overshoot, unlike the conventional predictive torque control; the result of this behavior is dynamic current minimization. The same test was repeated for the predictive direct angle control method [10] and the results are depicted in Fig. 10-b. Based on the simulation results, the weighting factor is set to 0.71 for this operating point. The results show a smaller ripple for the phase angle because it was directly included in the cost function; however, the torque ripple is higher for the previous method. The quantified results are summarized in Table 2. Fig. 11 shows the steady-state responses of the proposed and the previous method [10] at 60% rated speed and 90% nominal load torque. In Fig. 11-a, which is the result of the proposed method, the phase angle is automatically increased to 50° because the flux limitation based on (25) and (26) occurred when the torque was close to the rated value. The flux magnitude shows that a flux increase is avoided, similar to the simulation result. The same scenario is checked for the previous direct angle control method [10] and the results are shown in Fig. 11-b, with the weighting factor set to 0.87 based on the simulation results. The results show that the resulting flux and current are higher than those of the proposed method.
Also, the current shape showed that the saturation was more probable in this test.
The low-speed performance is also studied. Fig. 12 shows the responses of the proposed method and the method of [10] at 20% rated speed in a light-load condition. The comparison at this low-speed operating point shows that both the torque ripple and the phase angle ripple are slightly lower for the proposed method, although there is no large difference between the results of this test. Note that the weighting factor in the method of [10] is set to 0.5 by trial and error in the simulations.
The quantitative measures for Figs. 10, 11, and 12, comparing the proposed method and the previous method [10], are given in Table 3. The results show that the tracking error is improved by the proposed method in all three tests. The torque-to-current ratio is also improved in two cases, and is equal for both methods in the low-speed test. The most impressive improvement is in the computational time, because the sevenfold prediction is eliminated by the proposed method.
The no-load condition was tested at 80% nominal speed and the results are reported in Fig. 13. After the startup, the ripple of the phase angle increases because the variation of the estimated torque changes the minimum value of λ_sd based on (24). The torque ripple thus alters the minimum value of λ_sd and, consequently, the phase angle. However, this phase-angle variation, which is part of the control algorithm, preserves the stability of the current, torque, and flux. To illustrate this effect clearly, the load torque is slightly increased and the result is reported in Fig. 14, where the phase angle ripple is reduced.
The effect of parameter mismatch is studied in Fig. 15 and Fig. 16. In Fig. 15, the effect of stator resistance uncertainty on the proposed method is studied; it was tested at 50% rated speed and 30% nominal load torque. The results with accurate parameters are reported in Fig. 15-a. The stator resistance was then increased by approximately 100% and the test was repeated; Fig. 15-b reports the results. Comparing these two figures shows that the torque dynamic response deteriorates slightly, but in general the proposed method retains its stability despite the change of the stator resistance. The uncertainty of the stator inductance is studied in Fig. 16, for steady-state operation at 50% rated torque and 80% rated speed. In Fig. 16-a, the accurate value of the inductance is used in the prediction model; the results with a 50% error in the inductance are shown in Fig. 16-b. The torque ripple, flux ripple, and current distortion increased, but the current minimization was still accomplished. Fig. 17 shows the performance of the proposed method and the method of [10] when a sudden load equal to 60% rated torque is applied. In these tests, the speed was 60% of nominal. As can be seen in Fig. 17-a, the ripple of the angle was high before the load was applied because of the issue discussed for Fig. 13. Immediately after the load was applied, the phase angle of the current was controlled at 45°. This test clearly shows that the proposed method maintains its stability after the load disturbance. Fig. 17-b shows the result of the same test for the previous method. The comparison shows that the torque ripple was smaller for the proposed method because of the phase angle reference manipulation, while the phase angle ripple was smaller for the previous method because the reference was kept at 45°. The result of that fixed reference was a larger current amplitude before the load was applied: the current amplitude was 1.81 A before load exertion for the proposed method, but 2.15 A for the previous method. Table 2 shows the average ripple of the proposed method for the torque, the current angle, and the flux in steady state, relative to the nominal value of each, at different operating points. It shows that the proposed method provides similar control of the torque and the flux over a wide range of operating points without a need for weighting factor tuning. The phase angle ripple depends on the operating point, which is part of the angle manipulation scenario that maintains the torque and flux control. The sensitivity to parameter mismatch is also reported in this table: the proposed method is robust against a 100% mismatch of the stator and rotor resistance and a 30% mismatch of the inductance.
V. CONCLUSION
A simplified predictive direct angle control was proposed in this paper. With this method, the features of the direct angle control are improved while no weighting factor calculation is needed. Due to the use of the MTPA method, the predicted angle is set to 45° and the required torque is obtained with the minimum current vector.
In this method, the flux is automatically optimized by controlling the angle between the stator current and the rotor flux. In addition, the phase angle reference is automatically manipulated by limiting the minimum and maximum values of the direct component of the stator flux. With this technique, the phase angle is automatically decreased in very low torque conditions and increased in near-rated torque conditions, an effect that was not possible in the previous version of the direct angle control.
The experimental tests validated the effectiveness of the method at different operating points. It was also shown that the method remains stable under load disturbances and variations of the stator resistance.
To sum up, the proposed method has two advantages over the previous version of the direct angle control: 1) there is no need for weighting factor tuning; 2) the phase angle is not fixed at 45°, which is not optimal for either the light-load or the rated-torque condition.
APPENDIX PROOF OF (5)
Equation (5) originates from the second difference equation of the induction motor, given below: On the other hand, the rotor current can be expressed by the following equation, based on the relationship between the fluxes and the currents.
If (29) is applied to eliminate the rotor current from (28) and the discrete form of that is considered (5) will be attained. | 2021-04-17T13:31:40.991Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "e89cf4e9bf8ab0f16125559962763b8179ef6d92",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/9312710/09393903.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "e89cf4e9bf8ab0f16125559962763b8179ef6d92",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
89992433 | pes2o/s2orc | v3-fos-license | Co-occurrence of birds and bats in natural nest-holes.
The number of natural nest-holes is considered a crucial element for cavity-nesting bats and birds (Newton 1994, Berthinussen et al. 2014), especially in human-modified landscapes where modern strategies of forest management have reduced the numbers of trees containing holes (Newton 1994, Remm et al. 2006, Myczko et al. 2014). Critically important are natural holes excavated by woodpeckers (Picidae), and they are preferred by hole-nesting mammals and birds (Czeszczewik et al. 2008, Cockle et al. 2011). Moreover, even where holes occur, not all are of sufficient quality to be inhabited by mammals and birds (Czeszczewik et al. 2008), and hence organisms compete for the best holes (Flux & Flux 1992, Juskaitis 2006, Remm et al. 2006). Important factors include security, availability and access to foraging places, for example to open habitats (Mazgajski 2000, Smith 2006, Czeszczewik et al. 2008).
Among cavity-nesting species, selection promotes aggression and fighting for holes, both at an intra- and inter-species level. A particular example of a species with marked aggression directed mainly at other species is the Starling Sturnus vulgaris, even for freshly made holes of woodpeckers (Mazgajski 2000, Smith 2006) or small mammals (Flux & Flux 1992, Juskaitis 2006, Czeszczewik et al. 2008). Interactions between cavity roosting and nesting bats and birds are especially interesting. Although natural holes made by birds and nest-boxes are both occupied by bats (e.g. Berthinussen et al. 2014), to date the simultaneous occurrence of bats and birds has not been reported. However, we suggest that this is a frequent occurrence with potentially profound implications. We describe four cases of spring reproductive co-occurrence of Noctule Bats Nyctalus noctula and Starlings in western Poland, and discuss the potential mechanisms explaining this phenomenon.
The observations were made in forests of the suburban area of the city of Poznań and in the Zielonka forest complex (52°27′–52°37′N, 16°49′–17°12′E) in western Poland in spring 2016. Woodlands cover 21% of the area and are mostly patchily distributed within the agricultural matrix and built-up areas. Most woodlands are of Scots Pine Pinus sylvestris forests, but mixed and pure deciduous stands, mostly dominated by oaks (Quercus spp.), also occur. The major land cover types surrounding the woodlands are cereal crops and grasslands (Myczko et al. 2014), but lakes and ponds also occur.
Holes in the chosen forest stands were located before deciduous leafing and we also noted which were recently excavated woodpecker holes. Later, during the Starlings' egg incubation phase, we inspected all these holes using a digital camera. A second inspection took place before fledging. During this second phase we also monitored holes in other forest stands. In total 672 holes were checked for the presence of bats and/or birds, and all holes were of woodpecker origin (mainly Great Spotted Woodpecker Dendrocopos major).
To monitor nest-holes we used a modified internet camera Creative Live! Cam (Creative Labs Ltd, Dublin, Ireland) with additional LED lighting. We connected the camera to a laptop using a USB 2.0 Repeater Cable, 15 m DIGITUS (Assmann Electronic GmbH, Lüdenscheid, Germany). The camera was attached to a telescopic stick to reach higher holes. For videos and still pictures we used Creative Live! Central 3 software, version 3.01.26 (Creative Technology Ltd, Singapore).
Visits with the camera started on 21 April 2016, and were repeated within 1 month. Starling chick age was determined according to the key by Kania (1983).
Among the 672 natural holes, 271 were occupied by birds, six by bats, and four simultaneously by bats and birds. The same combination of species occurred in all cases of simultaneous occupation, i.e. Noctule Bats and Starlings. The first coexistence was recorded on 12 May 2016, when Starling chicks were already 14-20 days old. Noctule Bats (with Starling chick numbers in parentheses) occurred in the four holes in the following numbers: 4 (3), 6 (4), 6 (4) and 7 (2) individuals, and the bats sat on top of and moved among the Starling chicks (Fig. 1, Movie S1). All four Starling broods were successful with young fledged from the nest. The other holes occupied solely by Noctules contained 1, 2, 5, 6 and 14 individuals. In addition we found one hole occupied by 11 Daubenton's Bats Myotis daubentonii.
The phenomenon of co-occurrence of Noctules and Starling chicks is probably quite common, especially for Noctules, for which coexistence occurred in four out of nine holes occupied by bats. This is very surprising because previous publications on the interactions between bats and birds (reviewed in Kowalski & Lesiński 1994, Czeszczewik et al. 2008, Mikula et al. 2016) do not mention coexistence. However, the older papers were based mainly on data from artificial nest-boxes, because detailed monitoring of natural holes is very difficult (Czeszczewik et al. 2008, Zawadzka et al. 2016), and representative data for natural holes are not available. Here, we solved the monitoring problem by using modern technology which allows for fast collection of large amounts of high-quality data from natural holes. Noctules started co-occupying holes with breeding Starlings in early May, which is a typical time for their arrival from wintering hibernation sites (Van Heerdt & Sluiter 1965, Ruczyński & Bogdanowicz 2008). This raises the question of why Noctules would choose holes already used by Starlings. A likely explanation is the thermal benefit: holes occupied by Starlings are likely to be warmer, because incubation and then growing chicks transfer heat to the cavity (Biebach 1986, Ward et al. 1999). Additionally, a study in forests in eastern Poland showed that maternity roosts chosen by Noctules during late pregnancy and lactation were warmer than unoccupied cavities (Ruczyński 2006, Ruczyński & Bogdanowicz 2008). We also cannot exclude the possibility that social information is obtained by bats from the presence of Starlings in the same natural holes. Occupying holes with successful Starlings could provide Noctules with information on the lack of predation during the breeding season, and hence the safety of the nest-hole (Seppänen et al. 2007).
A second possibility which might explain this coexistence results from a shortage of safe and suitable nest-holes because of intensive forest management. Currently almost all European forests are managed as intensive forestry, which significantly reduces the availability of natural holes (Newton 1994, Remm et al. 2006). Therefore, we can expect an increase in the phenomenon of coexistence of these two species. However, in the forest stands where we conducted our research, alternative nest-holes were available; our data show that more than half of the holes in the vicinity of where bats and Starlings coexisted were unoccupied. Our data do not yet allow us to draw conclusions about the influence of coexisting species on reproductive success and fitness. However, these coexistences could have wider consequences. Coexistence could increase the transmission of parasites and diseases between species. Birds and bats are natural reservoirs of coronaviruses and influenza viruses (Tong et al. 2012, Chan et al. 2013). Therefore, coexistence could permit the mixing of different bird and mammal viruses and the generation of novel mutant, recombinant or reassortant RNA viruses. Such a situation seems more likely given that both species are known to be hosts of influenza viruses - Noctules with H3N2 (L'Vov et al. 1979) and Starlings with H5N1 (Boon et al. 2007). Bats can also carry many other diseases (Petney et al. 2010, Smith & Wang 2013, Hall et al. 2016).
We would like to thank D. Czeszczewik | 2019-04-02T13:02:56.884Z | 2016-12-09T00:00:00.000 | {
"year": 2017,
"sha1": "92cbf283677975189c1d7a78e70a9759b9aa2bba",
"oa_license": null,
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/ibi.12434",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "e0616d15bafe63464de07c38f268cbb71244b0a9",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
1885120 | pes2o/s2orc | v3-fos-license | Nonsurgical Management of Amlodipine Induced Gingival Enlargement – A Case Report
Antihypertensive drugs of the calcium channel blocker group are extensively used in elderly patients. Gingival enlargement associated with Nifedipine was first reported in the 1980s and is very rarely reported to be associated with Amlodipine and Felodipine. The mechanisms through which these medications trigger a connective tissue response are still poorly understood. The most effective treatment of drug-induced gingival overgrowth is withdrawal or substitution of the medication, combined with meticulous oral hygiene, plaque control, and removal of local irritants. When these measures fail to resolve the enlargement, surgical intervention is recommended. This paper reports a rare case of Amlodipine-induced gingival enlargement. The patient was successfully managed by drug substitution and nonsurgical periodontal therapy.
Introduction
"Gingival enlargement" or "gingival overgrowth" is the preferred term for all medication-related gingival lesions previously termed "gingival hyperplasia" or "gingival hypertrophy." These earlier terms did not accurately reflect the histologic composition of the pharmacologically modified gingiva.
An increasing number of medications are associated with gingival enlargement. Currently, more than 20 prescription medications are associated with gingival enlargement [1].
Drugs associated with gingival enlargement can be broadly divided into three categories: anticonvulsants, calcium channel blockers, immunosuppressants. Although pharmacologic effect of each of these drugs is different and directed toward various primary target tissues, all of them seem to act similarly on secondary target tissue, i.e., the gingival connective tissue, causing common clinical histo-pathological findings.
Calcium channel blockers are widely used in medical practice for the management of cardiovascular disorders. Gingival overgrowth is now a recognized unwanted effect associated with many calcium channel blockers. Of this large group of drugs, the dihydropyridines are the agents most frequently implicated. [2] Amlodipine, a newer dihydropyridine agent used for the treatment of hypertension and angina, was first reported to cause gingival overgrowth as a side effect by Seymour et al. in 1994 [3].
Clinical & Histological Features
The clinical manifestation of gingival enlargement frequently appears within 1 to 3 months after initiation of treatment with the associated medication. [5] Gingival overgrowth normally begins at the interdental papillae and is more frequently found in the anterior segment of the labial surfaces. [6] Gradually, gingival lobulations are formed that may appear inflamed or more fibrotic in nature, depending on the degree of local factor-induced inflammation. The fibrotic enlargement is normally confined to the attached gingiva but may extend coronally and interfere with esthetics, mastication, or speech. [7] Disfiguring gingival overgrowth triggered by this medication is not only aesthetically displeasing but often impairs nutrition and access for oral hygiene, resulting in an increased susceptibility to oral infection, caries, and periodontal diseases [8].
Histologically, slight to moderate hyperkeratosis, thickening of the spinous layer, fibrosis of the underlying connective tissue with fibroblastic proliferation, and an increase in the number of capillaries with slight chronic perivascular inflammation are seen.
Pathogenesis
The pathogenesis of gingival overgrowth is uncertain and treatment is still largely limited to the maintenance of an improved level of oral hygiene and surgical removal of the overgrown tissues. A number of factors affect the relationship between drug and gingival overgrowth.
Role of Fibroblasts
Because only a subset of patients treated with this medication will develop gingival overgrowth, it has been hypothesized that these individuals have fibroblasts with an abnormal susceptibility to the drug. It has been shown that fibroblasts from overgrown gingiva in these patients are characterized by elevated levels of protein synthesis, most of which is collagen. It has also been proposed that susceptibility or resistance to pharmacologically induced gingival enlargement may be governed by the existence of differential proportions of fibroblast subsets in each individual which exhibit a fibrogenic response to this medication. [9,10]
Role of Inflammatory Cytokines
A synergistic enhancement of collagenous protein synthesis by human gingival fibroblasts was found when these cells were simultaneously exposed to nifedipine and interleukin-1β (IL-1β), a proinflammatory cytokine that is elevated in inflamed gingival tissues. [11] In addition to IL-1β, IL-6 may play a role in the fibrogenic responses of the gingiva to these medications [12].
Synthesis and Function
Because most types of pharmacological agents implicated in gingival enlargement have negative effects on calcium ion influx across cell membranes, it was postulated that such agents may interfere with the synthesis and function of collagenases [13].
Prevention and Treatment of Gingival Enlargement
Prevention
In the susceptible patient, drug-associated gingival enlargement may be ameliorated, but not prevented, by elimination of local factors, meticulous plaque control, and regular periodontal maintenance therapy. A 3-month interval for periodontal maintenance therapy has been recommended for patients taking drugs associated with gingival enlargement. [14] Each recall appointment should include detailed oral hygiene instruction and complete periodontal prophylaxis, with supra- and subgingival calculus removal as needed. In some instances orthodontic bands and/or appliances should be removed [15].
Treatment
Drug Substitution/Withdrawal: The most effective treatment of drug-related gingival enlargement is withdrawal or substitution of the medication. When this treatment approach is taken, as suggested by another case report, it may take from 1 to 8 weeks for resolution of the gingival lesions. [16] Unfortunately, not all patients respond to this mode of treatment, especially those with long-standing gingival lesions [7].
Non-Surgical Treatment: Professional debridement with scaling and root planing as needed has been shown to offer some relief in gingival overgrowth patients [17].
Surgical Periodontal Treatment: Because the anterior labial gingiva is frequently involved, surgery is commonly performed for esthetic reasons before any functional consequences are present. The classical surgical approach has been the external bevel gingivectomy. However, a total or partial internal gingivectomy approach has been suggested as an alternative. [7] This more technically demanding approach has the benefit of limiting the large denuded connective tissue wound that results from the external gingivectomy, thereby minimizing postoperative pain and bleeding.
The use of carbon dioxide lasers has shown some utility for reducing gingival enlargement, an approach which provides rapid postoperative hemostasis. In the immunosuppressed patient, consultation with the patient's physician regarding antibiotic and steroid coverage should take place prior to surgical treatment [7].
Case Report
A 60-year-old female patient visited the department of periodontics with the chief complaint of bead-like gingival enlargement, bleeding, and painful gums of one month's duration.
The bead-like enlargement first appeared in the interdental papillae of the maxillary and mandibular anterior teeth and gradually involved the facial and lingual aspects. The enlargement slowly increased in size and spread to the posterior areas. The patient also complained of bleeding from the gingiva while brushing, soreness, and a deep gnawing pain.
Her medical history revealed that she was hypertensive and had been on Amlodipine (5 mg twice daily) therapy for a year. The patient was not suffering from any other illness or drug allergy and was not taking any other medication.
On intraoral examination, generalized gingival enlargement with increased severity in the maxillary arch and mandibular anterior region was noted. Oral hygiene maintenance was poor. The enlarged gingiva was erythematous, soft, and edematous, and showed a lobulated surface with absence of stippling. There was generalized bleeding on probing, and heavy calculus deposits were also noted. Periodontal examination revealed generalized moderately deep pockets. 31 and 41 were grade 3 mobile with severe bone loss and were indicated for extraction. No significant radiographic changes were observed except for moderate generalized bone loss.
The case was diagnosed as generalized chronic periodontitis with drug-induced gingival enlargement (combined enlargement: inflammatory and Amlodipine-induced). A request was sent to the physician for drug substitution and consent was taken for the planned periodontal treatment. Amlodipine was substituted with a Losartan potassium and chlorothiazide combination (50 mg and 12.5 mg once daily). Since the patient was not willing to have 31 and 41 extracted, phase I therapy was initiated. Scaling, root planing, and curettage were performed under local anesthesia. Oral hygiene instructions were reinforced, 0.2% chlorhexidine mouthwash was prescribed twice daily, and the patient was recalled after 15 days.
15 Days after Phase I Therapy
On examination at the first follow-up after nonsurgical periodontal therapy, the patient had relief from soreness and painful gums. Intraoral examination revealed a slight improvement in the condition of the gingiva. The intensity of erythema and bleeding on probing had subsided marginally, and the degree of gingival enlargement was slightly reduced. Gingival curettage was repeated and oral hygiene instructions were reinforced. Probing depth was more than 6 mm irt 11, 12, 13, 14, 15, 21, 22, 23, 24, 25, 32, 33, 43, and 44. The patient was recalled after 21 days but failed to keep the appointment, and she returned after a gap of 4 months.
4 Months after Phase I Therapy
On examination, oral hygiene maintenance was good. Complete resolution of the gingival enlargement was noted. Pocket depth irt 11, 12, 13, 14, 21, 22, 23, 24 had reduced. 13 showed persistent mild gingival enlargement. Full-mouth scaling and root planing was performed and curettage was repeated in 13.
Summary & Conclusions
The reported case is an example of slowly progressive periodontitis. This was superimposed by a combined type of gingival enlargement; basically a drug-induced one, complicated by inflammatory changes due to plaque accumulation. Moreover, hormonal changes due to menopause appear to contribute further to the enlargement of gingival tissues. The use of medications with the potential to contribute to the development of gingival overgrowth is likely to increase in the years to come. Among the old and relatively new pharmacologic agents involved in gingival enlargement, phenytoin still has the highest overall prevalence rate (approximately 50%), with calcium channel blocker- and Cyclosporine-associated enlargements about half as prevalent. Current studies on the pathogenetic mechanism of drug-associated enlargement are focusing on the direct and indirect effects of these drugs on gingival fibroblast metabolism. If possible, treatment is generally targeted on drug substitution and effective control of local inflammatory factors such as plaque and calculus. When these measures fail to cause resolution of the enlargement, surgical intervention is recommended. These treatment modalities, although effective, do not necessarily prevent recurrence of the lesions. Newer molecular approaches are needed to clearly establish the pathogenesis of gingival overgrowth and to provide novel information for the design of future preventative and therapeutic modalities.
"year": 2014,
"sha1": "8a8a68bfeeec114c2744754746e6f856af53c3d9",
"oa_license": null,
"oa_url": "http://pubs.sciepub.com/ijdsr/2/6/4/ijdsr-2-6-4.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "db51d24dd14cfd6c7764a251d7ce3e8411b04ad9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1543297 | pes2o/s2orc | v3-fos-license | Monitoring the T-Cell Receptor Repertoire at Single-Clone Resolution
The adaptive immune system recognizes billions of unique antigens using highly variable T-cell receptors. The αβ T-cell receptor repertoire includes an estimated 10^6 different rearranged β chains per individual. This paper describes a novel microarray-based method that monitors the β chain repertoire with a resolution of a single T-cell clone. These T-arrays are quantitative and detect T-cell clones at a frequency of less than one T cell in a million, which is 2 logs more sensitive than spectratyping (immunoscope), the current standard in repertoire analysis. Using T-arrays we detected CMV-specific CD4+ and CD8+ T-cell clones that expanded early after viral antigen stimulation in vitro and in vivo. This approach will be useful in monitoring individual T-cell clones in diverse experimental settings, and in identification of T-cell clones associated with infectious disease, autoimmune disease and cancer.
INTRODUCTION
T cells are key players in antigen-specific immune responses. Antigen specificity is provided by the T-cell receptor (TCR), which is unique for each T-cell clone. Upon antigen recognition, individual T-cell clones generally expand and acquire differential effector properties. Although the number of potential TCRs has been estimated at 10^15 different α/β combinations [1], the actual αβ TCR repertoire per individual is estimated to include 10^6 different β chains [2], each pairing with a limited number of α chains [3]. There is no rapid technology available that can sensitively and quantitatively monitor this highly diverse T-cell receptor repertoire.
Current technology for screening the TCR repertoire for expanded T-cell clones relies on 'spectratyping' [4], often referred to as immunoscope, and/or individual cloning and sequencing of a sample of the T-cell population [2,5-9]. In spectratyping analysis, PCR-amplified TCR DNA is separated on the size of the CDR3 region. This approach separates the TCRβ repertoire into approximately 230 fractions, resulting from the use of ~23 primers for all functional Vβ families and about 8 different CDR3 lengths per Vβ [10]. A higher resolution can be attained when V- and J-region primers are used; however, this requires 23 × 13 individual PCR reactions, and results in a resolution of approximately 23 × 13 × 8 peaks (Table S2).
Spectratyping itself generally does not identify individual T-cell clones, and is therefore often followed by repetitive cloning and sequencing. Clonal peaks identified in the spectratype patterns are sequenced, typically 10^2 clones and maximally 10^4 clones per sample [2,5-9] in previous publications. The sensitivity of this combined approach depends on the sensitivity of spectratyping for identification of clonal peaks, and on the number of T-cell receptor rearrangements cloned and sequenced. Thus, although the combination of spectratyping with sequencing can attain sufficient resolution to analyze TCR diversity, the approach is laborious and time consuming as it requires PCR amplification, isolation of individual bands based on DNA size, and purification, followed by repeated cloning and sequencing.
Here, we explore a novel approach which exploits the high capacity of DNA microarrays to monitor the expression of many T-cell receptor rearrangements in parallel. At present, it can be used to follow T-cell responses in cases where the type of Vβ/Jβ and the length of the Jβ-gene segment are available, e.g. from prior immunoscope (spectratyping) experiments. The feasibility of this approach is shown, and validated both in vitro and in vivo. We show that T-arrays quantitatively monitor the expansion of T-cell clones after viral infection with high sensitivity (1 in 10^6 cells), and with sufficient resolution to identify single clones in a background of polyclonal peripheral blood T-cells. While at present it allows monitoring a Vβ/Jβ-specific fraction of 0.03% of the T-cell receptor repertoire on a single 4000-spot slide, the microarray-based method can be scaled up to monitor and screen a large pool of the T-cell repertoire for dominant clonotypes. We envision that this sensitive and rapid technology will be useful for monitoring and screening of clonal T-cell expansions for many applications in medical research.
Creating single-clone resolution
To create adequate resolution between different potential TCRs we focussed on the highly variable complementarity determining region 3 (CDR3) of the TCR beta chain. This region consists of one out of 40-48 functional Vb and one out of 13 functional Jb segments, joined by Db gene segments [11][12][13]. The CDR3 is generated during VDJ-recombination by random deletion and addition of nucleotides at the V-, D-, and J-junctions [1] and produces the hypervariable NDN region, which can be used as a signature for each TCR (Fig. 1A). We developed a T-array protocol (Fig. 1C) to interrogate the first six nucleotides of the NDN region and the length of the Jb-gene segment. Resolution is created in three subsequent steps by: i) Vb-specific PCR amplification of the CDR3b (Fig. 1C1 and C2), ii) hybridization of a labelled oligonucleotide (''annealer'') specific for the Jb-family and for the number of Jb-nucleotides deleted (Fig. 1C4), and (iii) a ligation reaction specific for the first six nucleotides of the hypervariable NDN region on a universal hexamer microarray [14], encoding all permutations of a hexamer nucleotide (Fig. 1C5). In this way, the hexamer sequences on the array complementary to the first six nucleotides of the NDN region of a T-cell clone are ligated to the fluorescent annealer probes ( Fig. 1c6-7). The fluorescent signal of each hexamer sequence on a single microarray chip, quantitatively reflects the expansion of a certain T-cell clone. It should be noted that an annealer designed for a Jb gene with n nucleotides deleted from the germline sequence will also give a signal for TCRs with less than n nucleotides deleted. The latter TCRs will reveal part of the germline sequence of Jb in their hexamer sequences (1C4B).
The resolution of this T-array protocol depends on the number of Vβ and Jβ segments, the size of the microarrays, and the number of Jβ-nucleotides deleted. To predict the potential resolution of the assay we analyzed the distribution of N-deletion in a random selection of 192 published CDR3β mRNA sequences (Table S1). For 99% of the sequences a maximum of 10 nucleotides is deleted from Vβ genes, and a maximum of 11 nucleotides from Jβ genes (Fig. 1B). Within these limits, an almost uniform distribution of the TCRs was observed over the number of nucleotides deleted. This enabled us to predict the potential resolution of the assay. Although the theoretical size of the TCR repertoire is estimated at 10^15, extensive cloning experiments have shown that within one individual the beta-chain repertoire contains approximately 10^6 unique sequences [2], each of which pairs with a limited number of α chains [2,3]. Based on these numbers we estimate that after Vβ/Jβ-specific amplification on average 10^6/(1.4 × 10^7) ≈ 0.07 CDR3β sequences from the complete repertoire of a human individual will ligate to a single sequence on the universal hexamer microarray (see also Table S2C); a sketch of how such an estimate can be assembled is given below. In theory, the assay should therefore have sufficient resolution to detect single CDR3β sequences.
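The 1.4 × 10^7 figure can be assembled from the combinatorics of the protocol. The decomposition below (≈23 Vβ families × 13 Jβ genes × ~11 tolerated Jβ-deletion counts × 4^6 hexamers) is our reading of Table S2 and is shown only as an illustration, not as the authors' exact calculation; only the final ratio of ~0.07 clones per spot is quoted in the text.

# Assumed decomposition of the tag space; the individual factors are illustrative.
n_vb, n_jb, n_del = 23, 13, 11        # Vβ primers, Jβ genes, tolerated Jβ-deletion counts
n_hexamers = 4 ** 6                   # 4096 hexamer spots on the universal array
tags = n_vb * n_jb * n_del * n_hexamers
print(tags)                           # ~1.35e7, i.e. on the order of 1.4e7
print(1e6 / tags)                     # ~0.07 clones expected per spot, as in the text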
Testing sequence specificity and validity
The specificity of the protocol was tested using the T-cell clone Jurkat E6-1, for which the CDR3 is known. After PCR amplification of the Jurkat CDR3β region, we isolated the antisense strand and hybridized it to a fluorescently labelled oligonucleotide encoding the NDN-oriented end of the Jβ1-2 sequence. Specificity of the ligation reaction for the Jurkat NDN sequence was then tested with hexamers either complementary or not complementary to this NDN sequence (Fig. 2B-C). Only in the presence of the complementary hexamer sequence (5′-GTTCGG-3′) was the annealer oligonucleotide elongated, indicating that the ligation is sequence specific for the Jurkat CDR3β. Similarly, when the sense strand was used as a template, the annealer was elongated only with the hexamer sequence complementary to the 5′ end of NDNβ (Fig. 2D-E). When tested on a universal hexamer array, out of all 4096 possible sequences the Jurkat NDN sequence GTTCGG gave the strongest signal (Fig. 2F-H). This shows that the protocol is sequence specific for the T-cell clone analyzed. However, some other spots did produce positive signals, albeit at much lower signal intensities, notably if the encoded sequence was identical in the 3′-end nucleotides (NNTCGG). This suggests that, apart from the strongest signal at hexamer GTTCGG, the Jurkat NDN sequence ligated to hexamers with a 5′-end mismatch.
Determination of sensitivity T-array
Having shown that the T-array is specific for the NDN sequence of the analyzed T-cell clone, we then compared the sensitivity of the assay to that of spectratyping by diluting decreasing proportions of Jurkat cells in a background of peripheral blood CD4+ T-cells. Semi-quantitative PCR showed that TCR transcripts in Jurkat cells were not more abundant than in CD4+ cells obtained from a healthy blood donor (Fig. S1). For immunoscope, Jurkat/CD4+ mixtures were then PCR amplified with a Vβ12-sense primer and a fluorescamine-labelled Cβ reverse primer and size separated by capillary electrophoresis (Fig. 3A). As expected, the size difference between DNA peaks was 3 bp and the peak signals were normally (Gaussian) distributed [5]. Only at a dilution of 1 Jurkat cell per 10^4 CD4+ blood cells was the peak associated with the Jurkat CDR3β length (14 amino acids) 47% of the total peak area. At dilutions of 1 in 10^5 to 1 in 10^7 the peak associated with 14 amino acids is 7-8% of the total peak area, and the peak distribution remains normally distributed, indicating no dominance of any CDR3 length in the Vβ12 compartment. These results show that the sensitivity of spectratyping is approximately 1 T-cell in 10^4 cells, which is in agreement with other reports [15].
For T-arrays, the antisense strands were isolated and hybridized to the Jβ1-2-specific, Cy5-labeled oligonucleotide mentioned earlier and ligated on a universal hexamer microarray (Fig. 3B-C). In the case of a 1 in 10^5 to 1 in 10^7 Jurkat/CD4+ ratio, the hexamer sequence GTTCGG, which is complementary to the 3′ end of the Jurkat NDN region, was quantitatively picked up (Fig. 3C). In the case of a 1 in 10^7 dilution, the GTTCGG signal was as intense as the hexamer spots TGTCGG and CTTCGG. These sequences only differ in the nucleotides at the terminal end of the ligation product, suggesting that these are 5′ mismatch ligations of the Jurkat sequence. These results show that in this format of the T-array protocol, individual T-cell clones are picked up with a sensitivity of 1 clone in 10^6 T-cells.
Detection of expanding T-cell clones after viral antigen stimulation
To test whether the T-array protocol would allow identification of T-cell clones that expand upon antigen activation, an in vitro stimulation experiment was performed. Human peripheral blood cells from a healthy HLA-A2+ donor latently infected with the β-herpesvirus CMV were isolated and stimulated with the CMV peptide NLVPMVATV. This 9-amino acid motif from the viral structural protein pp65 dominates the cytotoxic T-lymphocyte response against CMV [15]. In HLA-A2+ individuals, the CD8+ response to NLVPMVATV is Vβ-restricted, in particular for but not limited to Vβ13+ T-cells [16], which was in agreement with spectratyping analysis of our donor (data not shown). FACS analysis, using HLA-A2-NLV tetramer staining, showed that the fraction of antigen-specific T cells in the cytotoxic T-cell pool increased after stimulation with NLV peptide (Fig. 4A). Before stimulation (Day 0) a fraction of ~5% of the CD8+ cells was tetramer positive, confirming CMV latency. Three days after peptide stimulation CMV-reactive T-cells were not detectable by FACS using tetramers, which can be attributed to TCR internalization after MHC/peptide recognition [17]. During the next 10 days, the fraction of tetramer-positive cells slowly increased to ~60%. From day 6, spectratyping analysis revealed that the Vβ13+ compartment became restricted to a CDR3 length correlating to 14 amino acids (Fig. 4B), suggesting that either a single T-cell clone or only a limited number of clones in the Vβ13 compartment had expanded.
While Vb-13/Cb-spectratyping (Fig. 4B) or tetramer analysis (Fig. 4A) only detected antigen-specific clones for CMV at day 6, this clone was detected at day 3 using a more specific Vb-13/Jb-1-2 spectratyping approach. The T-array, which was performed on non-sorted T-cells, identified the CMV-specific T-cell clone characterised by the NDN sequence CCTTTT already at day 0 (Fig. 4C). To exclude aspecific effects of the primer and annealer sequences used a T-array experiment was performed using the same primers and annealer oligonucleotide on a different sample; in this case no signal above background was observed at the CCTTTT hexamer (data not shown). Further validation of the CCTTTT hexamer sequence was acquired using extensive cloning and sequencing (See below).
Hexamer sequences other than CCTTTT gave signals above background intensity (Table S3, Fig. 4C). Of the top 100 signals, 19 had a 3′ CTA end, identical to the terminal germline sequence of the Jβ1-2 gene segment. These sequences derive from TCRs that have no nucleotides deleted from the germline Jβ1-2 gene (Fig. S2), and therefore give 5′-(NNN)CTA-3′ signals. Similarly, sequences that have a (NNNN)TA end (34 out of 100) or a (NNNNN)A end (62 out of 100) derive from TCRs encoding the germline Jβ1-2 gene with only one or two terminal nucleotides deleted, respectively. Indeed, these signals derive from such TCR sequences, as shown by complete sequencing of these TCRβs (see also below). They can all be identified based on the terminal germline nucleotide (here "A") in the hexamer sequence.
Validating T-array data by sequencing of multiple T-cell clones
To test whether the T-array signal matches the frequency of these T-cell clones as estimated by repetitive cloning and sequencing, we sequenced Vb13 + /Jb1-2 + TCRs from three samples of the experiment shown in Figure 4 ( Table S4). Out of 52 clones sequenced from the Day 0 sample, 27 were found to have 3 or less nucleotides deleted from the Jb-1-2 gene, and can therefore be detected in the T-array shown in Figure 4. Fourteen of these sequences were unique. The two clones that were found at high frequency (7/52) gave the strongest signals on the T-array (hexamer sequences CCTTTT and GGACCG). One clone which was detected at lower frequency (CAGCTA, frequency 2/52) also gave a T-array signal well above background. Eleven clones were detected with a frequency of only 1 out of 52. Three of these gave T array signals above background (Table S5). Eight clones gave signals similar to background, suggesting that the concentration of these clones in the blood sample is below the detection limit of the T-array. The clonal frequencies measured at day 3 were in agreement with the expansion measured by the T-array, showing that the T-array protocol quantitatively detects clonal expansion.
Application of T-array protocol for in vivo detection
To test the applicability of the T-array protocol for detection of clones in vivo, we analyzed a well-characterized sample of FACS-sorted, CMV-specific, IFNγ-secreting CD4+ T-cells from a renal transplant recipient 9 weeks after primary CMV infection, at the peak of viral load [18]. This sample of 11,600 sorted CMV-specific T-cells was pre-amplified by anchored PCR [19,20], which was used here as a pre-amplification step to generate sufficient cDNA from a relatively small amount of RNA (Fig. 5A-B). Spectratyping indicated a relatively broad repertoire [18]. Within the repertoire, 11 Vβ families were extensively analyzed by cloning and sequencing [18]. In the Vβ6.1 pool, 60 clones were sequenced, revealing 12 unique sequences of which 4 were Jβ2.7+. A T-array was performed to screen the Vβ6.1-Jβ2.7 subpopulation with an annealer oligonucleotide that detects Jβ2.7 sequences with 3 or fewer nucleotides deleted from the Jβ2.7 gene (Fig. 5F). All 3 clones that meet these criteria were picked up by the T-array (Fig. 5E, Table S5). In addition, the T-array signal matched the clonal frequency of the T-cell clones identified. The clone with hexamer CGGCTC, which was picked up in 5 out of 60 sequences, gave the strongest signal, followed by clone GAGGAA (3 out of 60) and clone CCAGTC (1 out of 60), respectively. These data show that the T-array can detect in vivo expanded T-cell clones in a quantitative way.
DISCUSSION
The diverse repertoire of TCR rearrangements can potentially be analyzed using microarrays, which have a high capacity to differentiate and monitor many unique DNA rearrangements in parallel. However, the size of the TCRβ repertoire at the DNA level is too large for full TCR repertoire analysis at single-clone resolution on a single microarray. The αβ receptor diversity is estimated at 10^15 to 10^18 rearrangements [1,21], which is formed for a relatively large part by the β chain. Within one individual, however, the size of the β chain repertoire is much more limited. Here, we use universal microarrays for this concept and show that this is feasible. In the design presented, T-arrays tag individual clones based on the sequence information in the NDN-J or NDN-V junction. The tag for each clone consists of the J- or V-family used, the number of terminal nucleotides deleted from this J- or V-gene segment, and the first six nucleotides of the NDN region. This design creates tags that are specific for one in more than a million clones, which in theory allows single-clone analysis of the complete TCRβ repertoire on high-density microarrays.
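As a concrete illustration of how such a clone tag could be assembled from the three pieces of junction information named above, consider the short sketch below; the tag string format is our own choice, and the 4^6 figure is simply the number of possible hexamers per (segment, deletion) combination, not a diversity claim from the paper.

```python
def clone_tag(segment: str, n_deleted: int, ndn_hexamer: str) -> str:
    """Compose a clone-specific tag from the J- (or V-) gene segment used, the
    number of terminal germline nucleotides deleted from it, and the first six
    nucleotides of the NDN junction, as described in the text."""
    ndn_hexamer = ndn_hexamer.upper()
    if len(ndn_hexamer) != 6 or set(ndn_hexamer) - set("ACGT"):
        raise ValueError("the NDN part of the tag must be a 6-nt DNA sequence")
    return f"{segment}|del{n_deleted}|{ndn_hexamer}"

# The CMV-reactive clone from the in vitro experiment (hexamer CCTTTT on the
# Jbeta1-2 annealer; 3 deletions inferred from the absence of a germline suffix):
print(clone_tag("TRBJ1-2", 3, "CCTTTT"))   # -> TRBJ1-2|del3|CCTTTT
print(4 ** 6)                               # 4096 hexamers per segment/deletion combination
```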
The validity of the T-array protocol was shown in several experiments. First, the PCR fragments derived from the TCR of Jurkat cells were selectively ligated to hexamer oligonucleotides complementary to its NDN sequence, both in solution and on hexamer arrays (Fig. 2). Second, the protocol allowed early, highly specific identification of an expanding T-cell clone after in vitro stimulation with CMV peptide (Fig. 4). Third, T-cell clones from blood taken from CMV-infected individuals that were identified using T-arrays were also detected by multiple cloning and sequencing (Table S4). Lastly, in FACS-sorted, CMV-specific, IFNγ-secreting CD4+ T cells from a renal transplant patient 9 weeks after CMV infection, T-arrays detected the dominant Vβ6.1+/Jβ2.7+ clones identified earlier by extensive cloning and sequencing (Fig. 5).
The sensitivity of the protocol was determined after mixing a Jurkat T-cell clone into a background of peripheral blood CD4+ T-cells over a range of dilutions. The data show that the Jurkat TCR rearrangement was detected at a ratio of at least 1 in 10^6 (Fig. 3). This is 2 logs more sensitive than Vβ-Cβ spectratyping [4,15], which can detect a T-cell clone at 1 in 10^4. The sensitivity of Vβ-Jβ spectratyping, an alternative approach which is not widely used, is theoretically 12-fold higher than that of Vβ-Cβ spectratyping and therefore still roughly 10-fold lower than that of the T-array approach. The superior sensitivity of the T-array was confirmed by detection of a CMV-specific T-cell clone which was identified in the unstimulated population of circulating T cells obtained directly from a CMV-infected donor (Fig. 3). This clone was only detected after 3 days of antigen stimulation by Vβ-Jβ spectratyping and after 6 days by Vβ-Cβ spectratyping (Fig. 4). Thus, T-arrays make highly sensitive detection and tracking of T cells possible. Figure 6 illustrates the sensitivity of various methods for the analysis of the TCRβ repertoire.
In addition, the protocol allowed quantitative monitoring of T-cell clones. With decreasing numbers of Jurkat cells in a CD4 background, the signal clearly decreased (Fig. 3). The increasing frequency of the CMV-specific clone in the in vitro experiment, as evidenced by tetramer staining and by spectratyping, was also reflected in the signal intensities on the arrays (Fig. 4). Likewise, the observed clonal frequencies of the CMV-specific clones in the in vivo experiment (Table S4) were quantitatively reflected in the T-array data.
Although the ligation reaction is highly specific for the correct hexamer sequence, ligation mismatches did occur. However, in every instance true positives gave the strongest signal, even in complex mixtures. Figure 2G supports previous data [14,22] which show that mismatches occur mainly in the two nucleotide positions opposite of the site of ligation. Algorithms, based on known ligation patterns [14,22], have been developed that identify false positives and reduce the loss of resolution when complex mixtures such as full-genome transcripts are analyzed on hexamer arrays [14]. Such algorithms may help to minimize the effect of cross ligations on the resolution of T-arrays and help to detect less frequent clones.
The technology described here can be applied to monitor a small selection of the TCRb repertoire quantitatively, and to track a subset of T-cell clones sensitively and quantitatively. While the combination of spectratyping, cloning and sequencing may take several weeks, the T-array method takes only a single day including scanning and quantification. Furthermore it is sensitive, and allows monitoring of growth kinetics at the clonal level. This rapid and sensitive method may find applications in the study of the relation between clonal expansion of T cells and autoimmune phenomena, e.g. responses to immunotherapy, retrospectively and prospectively. Recurrence of autoimmune disease could be predicted in the case of previously identified clones [23], or the fate of T-cells in adoptive therapy against cancer [24] could be monitored at single-clone level.
One of the prospects of this technology is that it could possibly be developed into a tool that screens the complete TCRβ repertoire on a single array. The format presented here screens only 1/23 × 1/144 ≈ 0.03% of the repertoire (Table S2C). Recently, we successfully explored the feasibility of a protocol in which T-array analysis is preceded by simultaneous amplification of all Vβ families in one PCR reaction using anchored PCR as described earlier [19,20] (data not shown). The resulting 144 arrays can then be housed in a high-density matrix of multiple arrays that can be individually loaded. Such matrices have recently become available [25]. Rapid, quantitative and sensitive full repertoire screening would have significant impact in immunological research and on the development of immunotherapeutics. Identical arrays might be built for the analysis of the TCRα, -γ and -δ repertoires and of the B-cell receptor repertoire in humans and other species.
In conclusion, here we show proof of concept of an approach to sensitively monitor changes in the frequency of unique TCR rearrangements using microarrays. The protocol is rapid and universal for the detection of all T-and B-cell receptor rearrangements. We propose that this technology will be useful for monitoring of clonal T-and B-cell expansions for many applications in medical research.
MATERIALS AND METHODS
Analysis of CDR3 sequences from public database
TCRβ-CDR3 mRNA sequences of human T-cell clones were collected from the public database of NCBI at NIH. Vβ-, Jβ-, and Dβ-segments were identified using the V-QUEST algorithm from the international ImMunoGeneTics information system [13]. Fifty sequences were validated manually, and assignment errors were identified only for N-deletions larger than 8 nucleotides. To exclude other assignment errors, all CDR3β sequences with N-deletions larger than 7 nucleotides were therefore assigned manually.
Cells and flow cytometry
Jurkat cell line clone E6-1 (ATCC, Manassas, VA) was grown in DMEM culture medium (Sigma-Aldrich, St. Louis, MO) supplemented with 5% FCS. Human peripheral blood mononuclear cells (PBMC) (Figure 3) were isolated from buffy coats of healthy blood donors by density centrifugation with Ficoll-Isopaque (Pharmacia Biotech, Uppsala, S). Informed consent was obtained from blood donors. CD4 + T cells were isolated by using anti-CD4 microbeads (Miltenyi Biotec, Bergisch Gladbach, D), followed by positive selection with the VarioMACS (Miltenyi Biotec) according to the manufacturer's protocol. The purity of the CD4+ cells isolated was measured using anti-CD4 PerCP-conjugated antibodies (BD Biosciences, San Jose, CA).
Thawed PBMCs (Figure 4) were resuspended in IMDM (BioWhittaker, Verviers, Belgium) containing 10% FCS and antibiotics (100 U/ml sodium penicillin G and 100 µg/ml streptomycin sulfate). Cells were washed in PBS containing 0.01% (w/v) NaN3 and 0.5% (w/v) BSA (PBA). A total of 250,000 PBMCs were incubated with an appropriate concentration of tetrameric complexes in a small volume for 10 min at 4 °C. Subsequently, fluorescently labelled conjugated mAbs (concentrations according to the manufacturer's instructions) were added and incubated for 30 min at 4 °C. For analysis of expression of surface markers, the following reagents were used: the allophycocyanin-conjugated HLA-A2 tetramer loaded with the CMV pp65-derived NLVPMVATV peptide [15], and anti-CD8 PerCP-conjugated antibodies (BD Biosciences, San Jose, CA).
CMV-specific IFNγ-producing CD4+ cells from a renal transplant recipient were isolated using the IFNγ Secretion Assay Detection Kit (PE) (Miltenyi Biotec, Amsterdam, The Netherlands) according to the manufacturer's conditions. At the moment of peak viral load, 9 weeks after transplantation, PBMCs were isolated, stimulated for 16 hours with CMV antigen (10 µl/ml), incubated with IFNγ Catch Reagent for 5 minutes at 4 °C, incubated with IFNγ Detection Antibody (PE) and CD4 APC (BD Pharmingen, San Diego, USA), and sorted using a FACSAria (BD). The patient had given written informed consent, and the local medical ethics committee had approved the study.
Expansion of virus-specific autologous cytotoxic T-lymphocytes
PBMCs from a CMV-seropositive, HLA-A2+ healthy volunteer donor were used for expansion of CMV-specific CD8+ cells. Informed consent was obtained from the blood donor. PBMCs were stimulated in IMDM supplemented with 10% human pool serum, antibiotics, and 2-ME with the CMV-pp65-A2 peptide NLVPMVATV (1.25 µg/ml) and IL-2 (50 U/ml; Biotest, Dreieich, D) in 24-well plates. After one week, cells were restimulated on a weekly basis with irradiated (30 Gy), CMV-pp65-A2 peptide-loaded, EBV-transformed cell lines expressing HLA-A2 (5×10^4 cells/ml) in the presence of IL-2.
Cloning and sequencing
Vβ PCR products were purified, ligated into the pGEM-T Easy Vector (Promega, Madison, WI), and cloned by transformation of competent DH5α E. coli. Selected colonies were amplified by PCR using M13 primers (Invitrogen - Life Technologies, Breda, NL) and then sequenced on the ABI Prism 3730 DNA automatic sequencer (Applied Biosystems, Foster City, CA) using dye terminator cycle sequencing chemistry (v1.1) (Perkin Elmer, Foster City, CA). Clones that did not yield a PCR product using direct colony PCR were cultured in LB medium, plasmid DNA was purified using the Wizard Plus Minipreps DNA purification system (Promega, Madison, WI), and plasmids were sequenced as described above.
PCR and Spectratyping analysis
RNA was isolated using the GenElute Mammalian Total RNA Kit (Sigma-Aldrich, Zwijndrecht, NL). For the experiments shown in Figures 2, 3 and 4, cDNA was synthesized using Superscript RT II and oligo-dT primers (Sigma-Aldrich) according to the manufacturer's protocol (Invitrogen - Life Technologies, Breda, NL). For the experiments shown in Figure 5, cDNA was synthesized using the Super SMART and SMART cDNA synthesis kits (Clontech, Mountain View, CA), respectively. PCR was performed with TCR Vβ primers [26] in combination with a TCR Cβ primer labelled with the fluorescent dye 6-carboxyfluorescein (FAM). Each amplification reaction was performed with 4 µl cDNA in the presence of 25 pmol 5′ sense TCR Vβ primer, 25 pmol 3′ antisense TCR Cβ primer, 0.5 mM MgCl2, 0.5 mM dNTPs, 10 mM Tris-HCl (pH 8.4), 50 mM KCl, 4 mM KCl, and 2.5 units AmpliTaq DNA polymerase (Perkin Elmer/Roche Molecular Systems Inc., Branchburg, NJ) in a total volume of 40 µl. PCR cycles were performed in a T1 Thermocycler (Biometra, Goettingen, D). The FAM-labelled PCR products were run on the ABI Prism 3100 Genetic Analyzer capillary system (Applied Biosystems, Foster City, CA) using POP6 as separation matrix, filter set D for the detection of fluorescent signals, and ROX500 as internal size standard. Genescan software (Applied Biosystems, Foster City, CA) was used for size determination and quantification.
PCR amplification (Fig. 1C2)
Biotinylated PCR products were obtained using sense biotinylated Vβ primers against reverse, antisense Cβ or Jβ primers (PCR conditions as described above). For the analysis of CMV-specific cells in vivo (Fig. 5), cDNA was synthesized using the SMART PCR cDNA synthesis kit (Clontech, Mountain View, CA).
Isolation of single strands (Fig. 1C3)
1.0 mg streptavidin-coated magnetic beads (M-280 Dynabeads, Dynal Biotech, Oslo, N) were washed twice in B&W buffer (Dynal Biotech, Oslo, N), and biotinylated PCR products were linked to the magnetic beads according to the supplier's protocol. Non-bound DNA and nucleotides were removed by washing in 1× and subsequently 0.4× B&W buffer. The non-biotinylated single strands were released by a 10-minute incubation in 0.15 N NaOH. After magnetic separation, the supernatant containing the non-biotinylated single strands was pH-neutralized using neutralization buffer (0.75 M HCl, 0.125 M Tris, 16.7 mM MgCl2, 1.67 mM DTT).
Ligation in solution
For experiments shown in Fig. 2B-E, ligation was performed in solution with single hexamer oligonucleotides. 1 pmol of hexamers, 4 units of T4 DNA ligase, 2 µl of 5× DNA ligase buffer (Invitrogen - Life Technologies, Breda, NL), and the template/annealer complex were combined in a total volume of 10 µl and incubated for 45 minutes at 16 °C, followed by a 10-minute denaturation step at 65 °C. Ligation products were analyzed on the ABI Prism 3100 Genetic Analyzer capillary system and Genescan software as described above.
"year": 2006,
"sha1": "0ccfd6761bb07006c4d2ac1418ba73cfc559ea3f",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0000055&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0ccfd6761bb07006c4d2ac1418ba73cfc559ea3f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Regulation of 3-Hydroxy-3-methylglutaryl Coenzyme A Reductase Promoter by Nuclear Receptors Liver Receptor Homologue-1 and Small Heterodimer Partner
Cholesterol homeostasis in mammals involves pathways for biosynthesis, cellular uptake, and hepatic conversion to bile acids. Key genes for all three pathways are regulated by negative feedback control. Uptake and biosynthesis are directly regulated by cholesterol through its inhibition of the proteolytic activation of the sterol regulatory element binding proteins. The conversion of cholesterol into bile acids in the liver is regulated through the bile acid-dependent induction of the negatively acting small heterodimer partner nuclear receptor. In this report, we have shown that the small heterodimer partner also directly regulates cholesterol biosynthesis through inhibition of 3-hydroxy-3-methylglutaryl coenzyme A reductase but has no effect on low density lipoprotein receptor expression. This is of considerable metabolic significance, as it provides both a mechanism to independently regulate cholesterol synthesis from uptake (an essential regulatory feature known to occur in vivo) and a pathway for direct regulation of cholesterol biosynthesis by bile acids. This latter feature ensures that the early phase of bile acid synthesis (pre-cholesterol) is in metabolic communication with the later stages of the pathway to properly regulate whole pathway flux. This highlights an important regulatory feature that is shared with other key branched, multienzyme pathways, such as glycolysis, where pathway outflow through pyruvate kinase is regulated by the concentration of a key early intermediate, fructose 1,6-bisphosphate.
In the mammalian liver, cholesterol serves as a precursor in the synthesis of bile acids, and as metabolite flow increases through the cholesterol pathway, bile acid production is increased. Bile acids act as feedback regulators of their biosynthesis by inhibiting the nuclear receptor-dependent activation of key bile acid biosynthetic target genes (1).
The nuclear receptor FXR binds bile acids and induces the expression of genes involved in bile acid export and the gene encoding another nuclear receptor, SHP (2,3). SHP lacks a DNA binding domain but interacts with the carboxyl-terminal activation domain of other DNA-bound nuclear receptors and inhibits their activity. The best documented target of SHP repression is the LRH-1/FTF nuclear receptor, which binds DNA as a monomer and activates expression of the bile acid biosynthetic genes CYP7A1 (2,3) and CYP8B1 (4). Thus, as SHP levels rise in response to FXR-dependent activation, bile acid production is repressed through the negative effect of SHP. Cholesterol is an essential component of mammalian membranes, and its production is tightly controlled through the negative effects of cholesterol directly on the endoplasmic reticulum membrane proteins SREBPs and HMG-CoA reductase (5). As cholesterol accumulates, SREBP trafficking to and proteolytic activation in the Golgi are repressed, and the proteolytic release and subsequent degradation of HMG-CoA reductase from the endoplasmic reticular membrane is enhanced. The net effect is a decrease in both enzyme levels and metabolite flow through the pathway. This regulatory process is common to all mammalian cells.
However, bile acid production from cholesterol is unique to the mammalian liver, and it has been known for decades that bile acid feeding results in a similar down-regulation of cholesterol production in this organ (6). Traditionally, this has been attributed to the indirect effect that bile acids have through inhibiting CYP7A1, which would result in an increase in cholesterol followed by the predictable decline in mature SREBPs and HMG-CoA reductase activity.
Because the pathway leading to cholesterol in the liver corresponds to the "early" steps of bile acid synthesis, it would be more efficacious if a mechanism had evolved for the direct regulation of these early steps by bile acids. In the current report, we present evidence supporting a direct mechanism for regulating cholesterol production by bile acids through the LRH-1/FTF and SHP nuclear receptors. We show that LRH-1/FTF activates and SHP represses HMG-CoA reductase transcription specifically, with no effect on LDL receptor expression. Overall, these studies define a mechanism to independently regulate cholesterol synthesis from uptake, a key regulatory feature that has been documented in prior whole animal studies by Dietschy and co-workers (7). Additionally, these results also reveal a pathway for direct regulation of an early step in cholesterol biosynthesis by bile acids. This latter feature ensures that the early phase of bile acid synthesis (pre-cholesterol) is in metabolic communication with the later stages of the pathway to properly regulate whole pathway flux.
FLAG-tagged LRH-1/FTF and AF-2 Domain Deletion (ΔAF-2)-PCR
oligos containing EcoRI and XbaI restriction enzyme sites were used to amplify human FTF (amino acids 1-500) from pCI FTF (a gift from Dr. Gregorio Gil, Virginia Medical College). A PCR oligo extending to amino acid 466 of FTF, along with the wild-type amino-terminal oligo, was used to construct the AF-2 domain mutant. The digested PCR product was cloned into the EcoRI- and XbaI-digested 2×FLAG pcDNA3.1 vector described previously (8). All sequences were confirmed by DNA sequencing. The construction of the FLAG-tagged SREBPs and documentation that the predicted fusion proteins are efficiently expressed in transfected cells was also reported previously (8). CMV-SHP was from Dr. David Mangelsdorf (University of Texas Southwestern Medical Center), and CMV-HNF-4 was from Dr. Frances Sladek (University of California Riverside).
FTF DNA Binding Domain Mutant-The Stratagene QuikChange site-directed mutagenesis kit was used to introduce two point mutations, substituting an alanine for cysteine 1 in the "P-box" (9) of the DNA binding domain of FTF using the 2xFLAG FTF construct as template and oligos containing the relevant base pair change. The incorporation of the point mutation was confirmed by DNA sequencing. For glutathione S-transferase-FTF, PCR oligos containing EcoRI sites were used to amplify human FTF from pCI FTF, and the digested product was cloned into EcoRI-digested pGEX-2T. HMG-CoA reductase and LDL receptor promoter luciferase constructs have been described previously (10,11). The CYP8B1 promoter luciferase reporter construct (4) was a gift from Dr. Gregorio Gil (Virginia Medical College).
Mouse Studies and RNA Analyses-Wild-type and SHP−/− mice were housed, fed, and used in experiments as previously described (12), except as indicated in the figure legends. RNase protection assays and Northern blotting were performed with the indicated probes as previously described (12).
Immunoblotting Analysis of Protein Expression-These experiments were performed essentially as described previously (13). 293T cells were plated in normal medium (Dulbecco's modified Eagle's medium plus 10% (v/v) fetal bovine serum plus penicillin/streptomycin and glutamine) on day 0 at 450,000 cells/60-mm dish. On day 1, the cells were transfected with the appropriate plasmid construct along with a constant amount of CMV-β-galactosidase expression plasmid. The cells were washed twice with 1× phosphate-buffered saline on day 2 and refed with normal medium. On day 4, the cells were harvested into nuclear and cytoplasmic extracts as described previously (13). Equal amounts of total protein, normalized for transfection efficiency using the co-transfected CMV-β-galactosidase expression, were analyzed on an SDS-polyacrylamide gel and transferred to nitrocellulose. Expression of specific proteins was detected by an antibody to the epitope tag present.
Promoter Activation Studies-293T cells were plated on day 0 in normal medium at 350,000 cells/well of a 6-well plate. On day 1, the cells were transfected with luciferase reporter and protein expression plasmids by calcium phosphate co-precipitation. A CMV-β-galactosidase expression construct was included in every transfection as a normalization control. 5 h post-transfection, the cells were washed twice with 1× phosphate-buffered saline and new medium was added. For transfections using exogenously expressed SREBPs, the cells were refed with normal medium as described above. For transfections using endogenously expressed SREBPs, cells were refed with either induced medium (defined serum-free medium from Invitrogen) or suppressed medium (induced medium containing 12 µg/ml cholesterol and 1 µg/ml 25-hydroxycholesterol) to suppress SREBP activity. 12 h after refeeding, the cells were harvested using cell lysis buffer (13), and cell extracts were used to measure luciferase and β-galactosidase activities.
Recombinant FTF Protein Purification-Escherichia coli cells expressing glutathione S-transferase-FTF (4) were grown at 37 °C to an OD of 0.6 and then induced with 1 mM isopropyl 1-thio-β-D-galactopyranoside at 37 °C for 3 h. The cells were lysed by sonication in NETN buffer (100 mM NaCl, 1 mM EDTA, 20 mM Tris, pH 8.0, 0.5% Nonidet P-40), and the soluble lysate was fractionated over a glutathione-agarose column and eluted with 10 mM glutathione. FTF protein fractions were identified by SDS-PAGE and Coomassie Blue staining, pooled, and dialyzed against 20 mM Tris, pH 8.0, 0.2 mM EDTA, and 50 mM KCl.
Electrophoretic Mobility Shift Assay-Single-stranded DNA oligos containing the potential FTF site at −300 were annealed for 1 h at 65 °C and then end-labeled with 32P for 1 h at 37 °C. Purified FTF protein (25 ng) was incubated with 0.2 pmol of labeled HMG-CoA reductase probe on ice for 20 min, loaded onto a 5% acrylamide gel, and run in 1× Tris borate-EDTA for 1-2 h. The gel was fixed for half an hour in 10% (v/v) acetic acid and 10% (v/v) methanol, dried, and then exposed to film. For competition experiments, purified FTF protein (25 ng) was incubated on ice for 15 min with cold probe containing the indicated DNA sequence in 50- or 200-fold molar excess over the 32P-labeled HMG-CoA reductase probe. Labeled HMG-CoA reductase probe (0.2 pmol) was then added to the reactions, followed by a further incubation on ice for an additional 20 min. The reactions were loaded onto gels as described above.
Chromatin Immunoprecipitation Analysis-Chromatin immunoprecipitation was performed essentially as previously described (14) with the following minor modifications. 293T cells were transfected with an expression plasmid for Myc-tagged LRH-1 by Lipofectamine (Invitrogen). 5 h post-transfection, the cells were refed with defined serum-free medium as described above (minus sterols) to induce SREBP expression for 24 h. Formaldehyde cross-linking (1% (v/v)) was done for 9 min. After processing, the sonicated chromatin was obtained as described previously (14), and samples were preincubated with protein A-agarose beads and purified mouse IgG (50 µg) for 1 h at 4 °C on a rotator. Nonspecifically bound material was removed by pelleting the agarose beads, and supernatant fractions were incubated overnight at 4 °C with 50 µg of anti-LRH-1 antibody (Santa Cruz Biotechnology catalog number SC-25389x) for the +Ab sample or 50 µg of purified mouse IgG for the −Ab sample, followed by incubation with blocked protein A beads for 2 h at 4 °C on a rotator. After washing and reversing the cross-linking (14), the samples were analyzed by PCR. For the PCRs, 5 µl of DNA from the LRH-1 precipitation was used, with PCR oligonucleotides that amplify a 250-bp fragment from the human HMG-CoA reductase promoter or a 120-bp fragment from the human LDL receptor promoter, respectively. Amplification reactions were performed in triplicate at 30 cycles and monitored to ensure that the signals were in the linear range of the PCR. To analyze specific immunoprecipitation of Myc-tagged LRH-1, an immunoblot using an anti-Myc antibody (Santa Cruz Biotechnology catalog number SC-40) was performed on the material recovered after each immunoprecipitation.
RESULTS
Previous studies showed that bile acid-dependent inhibition of CYP7A1 is compromised in SHP−/− mice (12,15,16). Additionally, one of these studies also provided evidence that bile acid-dependent regulation of cholesterol metabolic genes, such as HMG-CoA reductase and the LDL receptor, might also be altered in SHP−/− mice (12). To more directly evaluate the effect of SHP on HMG-CoA reductase and LDL receptor gene expression, we compared their mRNA levels in wild-type and SHP−/− mice fed diets with and without cholic acid (CA) supplementation. The results in Fig. 1A demonstrate that treatment of wild-type mice with CA reduced the expression of mRNAs for both HMG-CoA reductase and the LDL receptor. Interestingly, in animals fed a normal chow diet, there was an increase in HMG-CoA reductase mRNA in SHP−/− compared with wild-type mice, and the suppression by CA feeding was blunted in the SHP−/− animals. In contrast, expression and CA suppression of LDL receptor mRNA were indistinguishable in wild-type and SHP−/− mice. These data suggest that SHP specifically inhibits expression of HMG-CoA reductase but not the LDL receptor.
Bile acid feeding induces SHP through the bile acid-activated nuclear receptor FXR, but bile acids also have pleiotropic effects. Therefore, to more directly evaluate SHP and FXR in the regulation of HMG-CoA reductase, we analyzed the effects of a synthetic FXR agonist on HMG-CoA reductase and LDL receptor expression in wild-type and SHP−/− mice (Fig. 1B). In wild-type animals, the synthetic FXR agonist GW4064 decreased HMG-CoA reductase expression, but the effect was lost in the SHP−/− animals (Fig. 1B, compare lanes 1, 2 and 5, 6 with lanes 3, 4 and 7, 8). In contrast, the FXR agonist had no effect on the expression of LDL receptor mRNA in either wild-type or SHP−/− animals. As an additional control, similar administration of a synthetic agonist for the retinoid X receptor, LG00268, had no effect on mRNA levels for either HMG-CoA reductase or the LDL receptor (lanes 9-12).
These results suggest that SHP specifically inhibits HMG-CoA reductase expression. For known SHP target genes, such as CYP7A1 and CYP8B1, inhibition occurs by interfering with the activation by the nuclear receptor LRH-1/FTF. To determine whether a similar mechanism was functioning for HMG-CoA reductase, we first evaluated whether LRH-1/FTF could bind to the endogenous HMG-CoA reductase promoter in cellular chromatin using a chromatin immunoprecipitation assay. An LRH-1 expression vector was transfected into 293 cells, and formaldehyde cross-linked chromatin was treated with control IgG or with an antibody to LRH-1. The LRH-1 antibody did precipitate the LRH-1 protein specifically (immunoblot in Fig. 2), and the DNA associated with the immunoprecipitation pellets was analyzed by PCR for the presence of the promoters for either HMG-CoA reductase or the LDL receptor as a control. The PCR results demonstrated that LRH-1 protein bound specifically to the endogenous HMG-CoA reductase promoter, and it was not associated with the LDL receptor promoter chromatin (Fig. 2).
[Figure 2 legend: human embryonic kidney 293T cells transfected with an LRH-1/FTF expression vector were processed for chromatin immunoprecipitation as described under "Materials and Methods." A, binding to the human HMG-CoA reductase (HMGR) promoter (256-bp PCR product); B, binding to the human LDL receptor (LDLR) promoter. Input titrations, control IgG versus LRH-1 antibody precipitations, quantification of duplicate PCRs, and an immunoblot of the recovered Myc-tagged LRH-1 are shown.]
[Figure 1 legend: A, wild-type (+/+) and SHP null (−/−) mice (5 animals/group) were fed a control diet (Con) or a diet supplemented with 0.5% cholic acid (CA) for 12 weeks; pooled liver RNA (20 µg) was analyzed by RNase protection for HMG-CoA reductase (Red) and LDL receptor (LDLR) mRNA, normalized to actin, with the wild-type control-fed group set at 1.0. B, Northern blot of liver RNA (20 µg/animal) from wild-type (+/+) and SHP−/− mice fed a control diet or given the FXR or retinoid X receptor (RXR) agonist by oral gavage for 1 day (12), probed for HMG-CoA reductase (Red) and LDLR.]
In transient transfection assays, the activation of HMG-CoA reductase by SREBPs was always significantly lower compared with other target genes analyzed in parallel, suggesting there was some additional protein required that was missing (17). Based on the results presented above, we reasoned that this missing protein might be LRH-1/FTF. To test this idea, we performed a transient transfection assay in 293 cells using a culture protocol that activates the processing of endogenous SREBPs (14). Here, companion dishes of transfected cells are cultured with medium containing or lacking regulatory sterols, and endogenous SREBPs are cleaved from their membrane location and accumulate in the nucleus in the sterol-depleted samples (18,19).
Consistent with our earlier studies, the HMG-CoA reductase promoter was activated ~2-fold by this sterol depletion protocol (Fig. 3, compare lane 5 with 6). When an expression plasmid for LRH-1/FTF was co-transfected under sterol-depleted conditions, there was a significant increase in expression of the HMG-CoA reductase promoter (Fig. 3, lane 7), and this stimulation was specifically inhibited when an SHP expression plasmid was added on top of the LRH-1/FTF construct (lanes 8-9). However, transfection of the SHP expression plasmid alone had no effect on the modest activation by endogenous SREBPs (Fig. 3, compare lane 6 with 10 and 11). This result suggests that SHP does not inhibit SREBP-mediated activation but only affects the promoter stimulation mediated by LRH-1/FTF.
For controls, we also analyzed the LDL receptor and CYP8B1 promoters (Fig. 3, lanes 1-4 and 12-17). Similar to our previous studies (17), the sterol depletion protocol resulted in a higher degree of activation of the LDL receptor promoter, and consistent with the studies in the SHP−/− mice, there was no effect of LRH-1/FTF or SHP on LDL receptor promoter activity. However, as a positive control, LRH-1/FTF addition stimulated the CYP8B1 promoter, and this was inhibited by the addition of SHP.
[Figure 4 legend: A, 293T cells were transfected with the HMG-CoA reductase luciferase reporter and expression vectors (1 µg) for the individual SREBP isoforms (BP-2, BP-1a, BP-1c) alone or in combination with expression vectors for LRH-1/FTF or SHP as indicated; luciferase expression was normalized to an internal control β-galactosidase plasmid, and fold activation was calculated as in Fig. 3. B, as in A, except an expression construct for HNF-4 (0.1 µg) was included in place of LRH-1/FTF.]
[Figure 5 legend: A, 293T cells were transfected with the HMG-CoA reductase luciferase reporter and expression constructs for SREBP-2 and wild-type LRH-1/FTF or ΔAF-2, along with the internal control CMV-β-galactosidase; an anti-FLAG immunoblot of wild-type and ΔAF-2 expression (25 µg total protein) is shown in the inset. B, wild-type (wt) and DNA binding domain mutant (DBDm) LRH-1/FTF were transfected with luciferase reporters for HMG-CoA reductase (RED) or CYP8B1 and cultured in the presence (+) or absence (−) of regulatory sterols; an anti-FLAG immunoblot (25 µg whole cell protein) is shown at the top; N, mock-transfected sample.]
In the above experiments using sterol depletion, all three SREBPs were released from the membrane and accumulated in the nucleus; therefore, it was unclear whether LRH-1/FTF functions to enhance the activation of all three SREBPs or whether there is a preference for one of the three SREBP isoforms. To address this, we performed transient promoter activation experiments in cells cultured in the presence of exogenous sterols to suppress the activation of endogenous SREBPs. Additionally, we transfected expression plasmids for each of the mature SREBP isoforms, alone or together with the LRH-1/FTF expression construct (Fig. 4A). Transfection of either the SREBP-1a or -2 expression vector resulted in an ~2-fold activation of the HMG-CoA reductase promoter luciferase reporter, and in each case, the addition of the LRH-1/FTF construct further stimulated promoter activity significantly. The addition of the SREBP-1c expression plasmid alone had no effect on the promoter by itself, consistent with previous reports where SREBP-1c is a weak activator compared with SREBP-1a or -2 (8). However, the addition of the LRH-1/FTF expression construct resulted in a 3-fold stimulation. This was consistently above the small stimulation that resulted from the addition of the LRH-1/FTF expression plasmid alone (Fig. 4A, compare lanes 8 and 9 with 11).
Additionally, regardless of which SREBP was analyzed, the addition of the SHP expression construct inhibited only the LRH-1/FTF-mediated effect, because the magnitude of activity after repression by SHP was equal to that stimulation by each SREBP alone. These results indicate that LRH-1/FTF can function with all three SREBPs to activate the HMG-CoA reductase promoter and that SHP inhibition only affects the LRH-1/FTF stimulatory effect.
Because SHP is known to inhibit activation by other nuclear receptors, such as HNF-4 (20), and because overexpression of transcription factors in transient assays may exaggerate normal physiological effects, as a control, we analyzed whether HNF-4 could activate the HMG-CoA reductase promoter along with SREBPs (Fig. 4B). Transfection of an HNF-4 expression construct in place of LRH-1/FTF had no effect on SREBP-dependent activation of the HMG-CoA reductase promoter, whereas it efficiently activated the HNF-4 target gene CYP8B1 (Fig. 4B).
LRH-1/FTF, similar to other nuclear receptors, requires its carboxyl-terminal AF-2 activation domain and a zinc finger DNA binding motif to activate target genes. To determine whether these critical functions are required for activation of HMG-CoA reductase, we deleted the AF-2 domain or introduced a point mutation changing a critical cysteine residue of the DNA binding domain P-box (9) to alanine to inhibit DNA binding. Despite the fact that both of these mutant proteins were expressed efficiently in the transfected cells, neither one was able to activate the HMG-CoA reductase promoter like the wild-type protein (Fig. 5). Thus, both of the crucial nuclear receptor functional domains are required, suggesting that LRH-1/FTF likely binds directly to the HMG-CoA reductase promoter.
In scanning the DNA sequence of the HMG-CoA reductase promoter used in these studies, we noted two putative recognition sites that are conserved between the hamster, mouse, and human promoters (Fig. 6A). To test whether these sites are important for LRH-1/FTF-mediated activation, we deleted them from the luciferase reporter construct. The activation studies in Fig. 6B show that deletion of the two sites resulted in a severe blunting of LRH-1/FTF-mediated activation, but these truncations had little effect on overall promoter activity or on stimulation by SREBPs alone (Fig. 6B) (21). Next, the DNA site at −300 was tested directly for DNA binding using recombinant LRH-1/FTF protein in an electrophoretic mobility shift assay (Fig. 6C). Recombinant LRH-1/FTF bound to this HMG-CoA reductase promoter site specifically, and a mutation that changed the sequence away from the predicted consensus recognition site failed to compete efficiently for binding. Additionally, oligonucleotides containing the LRH-1/FTF site from the CYP8B1 promoter, like the cold wild-type HMG-CoA reductase oligos used for the electrophoretic mobility shift assay, competed efficiently for binding. Thus, direct DNA binding to the HMG-CoA reductase promoter is required for the LRH-1/FTF stimulatory effect.
DISCUSSION
In our previous studies, we noted that the magnitude of stimulation by sterol depletion or the addition of exogenous SREBP expression constructs was very modest for the HMG-CoA reductase promoter compared with the activation achieved with other SREBP target genes analyzed in parallel (17). In contrast, HMG-CoA reductase gene expression was activated very robustly when SREBPs were overexpressed in mice (22). Although there might be additional differential post-initiation regulatory actions on the mRNAs that may partially account for these differences, the results are also consistent with a model in which an additional protein was missing from our transient transfection assays for the HMG-CoA reductase promoter.
[Figure 6 legend: A, schematic of the HMG-CoA reductase promoter showing the positions of the two putative LRH-1 sites relative to the characterized SREBP, nuclear factor-Y (NF-Y), and cAMP-response-element binding protein (CREB) sites (10), with an alignment of the −300 region from the hamster, human, and mouse promoters; the human site was also shown to bind the LRH-1/FTF protein directly (S. Datta and T. F. Osborne, unpublished data). B, wild-type and deletion reporter constructs (A, B, and C in panel A) analyzed for activation by SREBP-2 and LRH-1/FTF. C, electrophoretic mobility shift assay with purified recombinant FTF (25 ng) and a 32P-labeled probe containing the putative LRH-1/FTF response element at −300, with 50- or 200-fold molar excess of unlabeled competitor (Comp.) DNAs: wt, wild-type hamster HMG-CoA reductase LRH-1/FTF site; mt, single-base mutant; 8B, known LRH-1/FTF site from the mouse CYP8B1 promoter.]
The current studies were initiated when we noted that regulation of HMG-CoA reductase was aberrant in SHP−/− mice. Additional studies presented here further support this idea and suggest that LRH-1/FTF is the missing protein. Additionally, the SHP effect exhibits specificity for HMG-CoA reductase, because neither the FXR agonist nor SHP itself had any effect on LDL receptor expression in any of the assays we utilized. Thus, the direct regulation through bile acids and SHP (Fig. 7, left) is specific to the cholesterol synthetic pathway. Our studies have focused on HMG-CoA reductase, because it is considered the classic rate-controlling enzyme of the pathway and because our earlier studies suggest there was a missing component in our transient expression assays. Whether additional enzymes of the pathway are similarly regulated remains to be determined.
When bile acid levels rise, CYP7A1 is inhibited and pathway flux is repressed. Without any alteration to earlier steps in the pathway, cholesterol levels would rise, which would inhibit SREBP maturation (Fig. 7, right). However, because the LRH-1/FTF activation of HMG-CoA reductase is inhibited by SHP, the studies presented here provide the first evidence that, in addition to the indirect effect of bile acids through cholesterol, they also have a direct inhibitory action on the expression of HMG-CoA reductase (Fig. 7, left). This indicates that the early and late sectors of the pathway are in metabolic communication with each other to more quickly adapt to changes in pathway influx and outflow. There is a similar regulatory mechanism in glycolysis, where the product of phosphofructokinase, fructose 1,6-bisphosphate, which reflects early flux into the pathway, is a positive regulator of pyruvate kinase, which controls pathway outflow (23).
Spady et al. (7) report that endogenous cholesterol biosynthesis and cholesterol uptake through the LDL receptor pathway are independently regulated in the livers of mice. In these studies, the addition of the bile acid sequestrant cholestyramine to the diet significantly increased hepatic sterol synthetic rates, whereas LDL clearance rates were not altered relative to control fed mice. Cholestyramine reduces bile acid reabsorption and would effectively deplete the endogenous hepatic FXR agonist pool, which would be predicted to decrease SHP levels. In fact, we have documented that SHP expression is repressed by feeding mice a similar bile acid sequestrant. 4 In another study (24), Sheng et al. mentioned that feeding mice a bile acid sequestrant alone was ineffective at increasing nuclear levels of SREBPs in mice.
Taken together, these two studies indicate that the mechanism by which bile acid sequestrants increase sterol biosynthesis cannot solely be explained by an increase in nuclear SREBP levels. The current studies demonstrating that expression of HMG-CoA reductase is activated by LRH-1/FTF and repressed by SHP, without any change in LDL receptor expression, provides a reasonable molecular explanation that connects both of these important earlier studies together.
The LRH-1/FTF DNA sites are conserved in the human HMG-CoA reductase promoter. Therefore, our results also suggest that FXR agonists might be effective when combined with statins to treat hypercholesterolemia in humans. Statin therapy results in a compensatory upregulation of HMG-CoA reductase gene expression as the liver attempts to compensate for the decreased sterol production. The addition of an FXR ligand may work synergistically with statins to prevent this response through inhibiting activation by LRH-1/FTF.
"year": 2006,
"sha1": "57b629d12519fdb1093c6035c27beb66d6131e76",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/281/2/807.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "95942ebaafdd71e45971ef23886b02f079debc20",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Subcarrier and Interleaver Assisted Burst Impulsive Noise Mitigation in Power Line Communication
SUMMARY Impulsive noise (IN) is the most dominant factor degrading the performance of communication systems over power lines. In order to improve the performance of high-speed power line communication (PLC), this work focuses on mitigating burst IN effects based on compressive sensing (CS), and an adaptive burst IN mitigation method, namely a combination of an adaptive interleaver and permutation of null carriers, is designed. First, the long burst IN is dispersed by an interleaver at the receiver and the characteristics of the noise are estimated by the method of moments; finally, the resulting sparse noise is reconstructed while the number of null carriers (NNC) is adapted to the noise environment. In our simulations, the results show that the proposed IN mitigation technique is simple and effective for mitigating burst IN in a PLC system; it reduces the burst IN and improves the overall system throughput. In addition, the proposed technique outperforms other known nonlinear noise mitigation methods and CS methods.
munication link. In order to improve the reliability of PLC, it is essential to overcome a number of inherent challenges, such as frequency-selective fading and impulsive noise (IN). IN is generated by the switching transients of electrical appliances and can be classified into three main types [3]: periodic IN asynchronous to the mains frequency, periodic IN synchronous to the mains frequency, and asynchronous IN. Among these, asynchronous IN is the dominant factor degrading communication signals; its duration varies from a few microseconds to milliseconds. In order to overcome the effect of IN in the PLC channel, several techniques with different degrees of complexity, such as noise blanking, clipping [4], and filtering [5], have been proposed in the early literature. However, the threshold values and filter parameters of these techniques are determined from practical experience and are not suitable for time-varying channel conditions [6]. On the other hand, channel coding is used to cope with various noises and thereby improve data detection performance [7]; however, the complexity of such approaches increases exponentially with the code length [8]. Some sophisticated signal processing techniques that use null carriers (NC) to estimate the noise [9] have been developed; they consider the IN as a signal that is sparse in time and use recently developed algorithms for sparse signal reconstruction [10]. Unfortunately, the disturbance ratio (number of impulses) of the IN changes considerably in practical systems [11], so using a fixed set of null carriers wastes carrier resources and results in low throughput. Moreover, in some cases the duration of the IN may become substantially longer than the OFDM symbol duration, which causes many bit errors and may severely degrade system performance [12].
In this paper, we exploit the fact that IN projected onto a signal-free subspace is sparse in order to estimate the locations and amplitudes of the IN at the receiver [13]. To make the long-duration burst IN sparse, a special interleaver is used to distribute the IN across several OFDM blocks. The method of moments is used to estimate the characteristic parameters of the sparse IN. In order to improve the BER and throughput of the system, we propose to increase or decrease the number of null carriers (NNC) and the length of the interleaver (LI) according to the state information of the IN. The contributions of this paper are as follows. First, an adaptive IN-mitigation system is proposed, which achieves a performance balance in weak and heavy IN environments by changing the NNC and LI adaptively.
Second, a prior auxiliary threshold for selecting the prior support set of the IN is designed. In this mechanism, the threshold is calculated from the noise parameters obtained by moment estimation. The results show that the proposed method not only improves the BER performance of the communication, but also achieves a higher throughput between the transmitter and the receiver according to the characteristics of the IN.
The rest of this paper is organized as follows. In Sect. 2, we measure and analyze the IN generated by indoor household appliances. In Sect. 3, the IN-mitigation technique based on CS and an interleaver is proposed. A detailed discussion of the IN-mitigation technique is presented in Sect. 4. The simulation results are presented in Sect. 5. Finally, conclusions are drawn in Sect. 6.
Analysis of Power Line IN
In order to analyze the characteristics of IN on the power line, it is necessary to measure the actual IN [14]. In this paper, the IN was measured in the 7th-floor laboratory of the electrical institute of Hunan University. The main instruments used were a Pico 5243B oscilloscope, a power line carrier communication coupler, and a filtered power supply. Figure 1 shows the measured time-domain and frequency-domain results. Figure 1(a) shows the time-domain pulse waveforms obtained from the indoor measurements; the left and right sides are the measured pulse waveforms when the fluorescent lamp and the electric oven were switched on, respectively. The amplitude of both kinds of IN is far above the background noise, and their duration can reach hundreds of microseconds, exhibiting a bursty character. Figure 1(b) shows the power spectral density of the various IN sources. It can be seen from the figure that the power spectral density of the IN is generally about 10-25 dB higher than that of the background noise, so all of these sources have a considerable impact on power line communication performance. Therefore, it is necessary to design a noise suppression algorithm to suppress the IN.
Proposed IN Mitigation Scheme
In order to suppress the IN on the power line, a complete transmitter and receiver are designed; Figure 2 shows the structure of the proposed system. Let d denote the vector of data symbols, with d_i the symbol carried on the ith data subcarrier. In addition, φ and its complement denote the index sets of the subcarriers that are and are not used to send data, respectively. For the sake of simplicity, an N × M selection matrix S_x is constructed; it contains exactly one element equal to 1 per column, with all other elements equal to 0, so that the K = N − M rows corresponding to the null carriers are all zero. The frequency-domain symbol vector with null carriers inserted can then be represented as d′ = S_x d. The symbols d′ are input to the IFFT processing module, and the transmitted time-domain OFDM vector x is obtained as x = F^H d′, where F is an orthonormal DFT matrix satisfying F^H F = F F^H = I_N. After OFDM modulation, to reduce inter-symbol interference (ISI) and maintain the orthogonality of the transmitted signal over the multipath PLC channel, a cyclic prefix (CP) whose length equals the number of channel taps is incorporated into each OFDM block [15]. To overcome burst IN, a special interleaver is designed to distribute the effect of burst IN in the power line channel; the length of the interleaver (LI) is adjusted according to the characteristics of the burst IN. After CP insertion, LI consecutive OFDM blocks are written into the interleaver, and the columns of the interleaver are then permuted randomly. Finally, the data stream passes through digital-to-analog conversion (DAC) and amplification and is sent over the PLC channel.
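A minimal NumPy sketch of this transmit chain (null-carrier insertion through the selection matrix S_x, orthonormal IFFT, CP insertion) is given below. The FFT size, number and placement of null carriers, CP length, and QPSK mapping are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 64, 8                   # FFT size and number of null carriers (illustrative)
M = N - K                      # number of data subcarriers
cp_len = 16                    # CP length, assumed >= number of channel taps

null_idx = rng.choice(N, size=K, replace=False)      # null-carrier positions
data_idx = np.setdiff1d(np.arange(N), null_idx)      # data-carrier positions

# Selection matrix S_x: N x M with exactly one 1 per column; the K rows that
# correspond to the null carriers are all zero.
S = np.zeros((N, M))
S[data_idx, np.arange(M)] = 1.0

d = (rng.choice([-1.0, 1.0], M) + 1j * rng.choice([-1.0, 1.0], M)) / np.sqrt(2)  # QPSK
d_full = S @ d                                        # d' = S_x d (nulls stay zero)

x = np.fft.ifft(d_full) * np.sqrt(N)                  # x = F^H d' with orthonormal F
x_cp = np.concatenate([x[-cp_len:], x])               # cyclic prefix (conventional placement)

print(np.allclose(np.fft.fft(x) / np.sqrt(N), d_full))   # True: IFFT/FFT pair is consistent
print(np.allclose(d_full[null_idx], 0))                   # True: null carriers carry no data
```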
Receiver Design
We assume that perfect time synchronization has been achieved at the receiver and that the CP is longer than the channel's maximum delay spread. In matrix form, the channel model is given by r = Hs + e + z, where r ∈ C^N and s ∈ C^N are the time-domain received and transmitted OFDM signal blocks [16], respectively, z is the background noise, which follows a complex Gaussian distribution z ~ CN(0, σ_0^2), and e is the N × 1 vector of dispersed IN. Because of the cyclic prefix, H is an N × N cyclic convolution (circulant) matrix. First, the received signal enters the deinterleaving module. After removing the cyclic prefix, an IN estimation module estimates the IN characteristic parameters p and σ, where p is the probability of IN occurrence and σ is the power of the received signal.
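The channel model above can be simulated directly; a sketch under assumed parameters is shown below. Because of the CP, multiplying by the circulant matrix H is equivalent to circular convolution with the channel taps, and the impulsive term e is drawn here from a simple Bernoulli-Gaussian model (the tap values, noise powers, and impulse probability are illustrative only).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64

h = np.array([1.0, 0.5, 0.2], dtype=complex)          # assumed channel taps
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # one OFDM block

# H s: with the CP removed, the linear channel acts as a circular convolution,
# i.e. multiplication by the N x N circulant matrix H.
Hs = np.fft.ifft(np.fft.fft(s) * np.fft.fft(h, N))

sigma0 = 0.05                                          # background-noise standard deviation
z = sigma0 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

p, sigma_i = 0.1, 2.0                                  # impulse probability and strength (assumed)
hits = rng.random(N) < p                               # Bernoulli impulse locations
e = hits * sigma_i * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

r = Hs + e + z                                         # r = H s + e + z
print(int(hits.sum()), "impulsive samples in this block")
```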
In addition, an IN mitigation method based on CS is used to reconstruct the burst IN; an optimal reconstruction threshold is designed from the estimated parameters for accurate recovery of the IN. Furthermore, the NNC and LI are adjusted according to the IN characteristic parameters to improve the utilization of the null subcarriers and to enhance the sparsity of the burst IN, respectively.
IN Dispersion Using Interleaver
In the early literature, the noise model used for simulation is the classical model in which the IN impulses are sparse in time. However, the measurement results in Sect. 2 show that the average duration of an impulse burst spans one or two OFDM symbols. In such cases it is not practical to consider the IN as a sparse signal, and the reconstruction of the IN will be inaccurate [17]. To mitigate IN with a long duration, it is necessary to change the structure of the burst IN at the transmitter and receiver so as to make the IN more sparse. To meet this requirement, a random block interleaver with N rows and LI columns is adopted, and its columns are permuted randomly. When long burst IN occurs, the interleaver disperses the IN as evenly as possible. In addition, the interleaver is designed to guarantee that data symbols transmitted on the same subchannel are not permuted into the same data block.
As shown in Fig. 3 (illustration of the random interleaving process), the left part is an interleaver block contaminated by IN in the time domain; some of the OFDM symbols are interfered with entirely by burst IN, and it is difficult for CS to reconstruct the impulses from such signals. Therefore, an interleaving operation is applied on each subcarrier according to the following expression, where l represents the lth position of each row, m represents the mth row of the interleaver, and l′ is the new position of the corresponding element. After interleaving, the burst IN is spread out to different locations of the interleaver. For example, the IN in the 1st and 2nd positions of the first row is moved randomly to the 5th and 8th positions, respectively. Similarly, the 1st, 3rd and 5th positions of the 4th row are distributed randomly to the 7th, 1st and 9th positions, respectively. With the spreading effect of the permutation, each de-interleaved OFDM symbol contains fewer successive impulses, so it is easier for the CS algorithm to reconstruct the IN. In addition, when experiencing time-varying background noise and IN over indoor PLC channels, it is feasible to adjust the size of the interleaver by selecting a corresponding LI from a look-up table.
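A minimal sketch of such a random block interleaver is given below. It assumes one independent random permutation of the LI column positions per row and shows that de-interleaving with the same permutations restores the original order, which is how a contiguous burst ends up scattered after de-interleaving at the receiver; the exact permutation design of the paper may differ.

```python
import numpy as np

def make_interleaver(n_rows, li, seed=0):
    """One random permutation of the LI column positions per row (assumed design)."""
    rng = np.random.default_rng(seed)
    return np.stack([rng.permutation(li) for _ in range(n_rows)])

def interleave(block, perms):
    """block: (n_rows, LI) array; the sample at position l of row m moves to l' = perms[m, l]."""
    out = np.empty_like(block)
    rows = np.arange(block.shape[0])[:, None]
    out[rows, perms] = block
    return out

def deinterleave(block, perms):
    """Inverse mapping: recovers the original order, spreading any burst hit in transit."""
    rows = np.arange(block.shape[0])[:, None]
    return block[rows, perms]
```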
Characteristics Parameters Estimation of IN
As shown in Fig. 2, after receiving the signal, the receiver first deinterleaves the received signal. To analyze the time-varying IN, the moment estimation method is used to estimate the interference rate and power parameters of the IN. σ_s^2 is the power of the transmitted signal, free of impulsive noise and background noise, σ_i^2 denotes the power of the IN, and σ_w^2 is the power of the background noise. In addition, σ_1^2 = σ_s^2 + σ_w^2 equals the power of the received signal when no IN occurs, and σ_2^2 = σ_s^2 + σ_w^2 + σ_i^2 equals the power of the received signal when IN occurs. The probability of IN that needs to be estimated is p.
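The two-state model above can be mimicked with a Bernoulli–Gaussian generator, sketched below: every sample carries background noise of power σ_w², and with probability p an additional impulsive component of power σ_i² = μσ_w² is added. This is only a stand-in for the measured noise; the parameter names follow the text, while the specific values are illustrative.

```python
import numpy as np

def bernoulli_gaussian_in(n, p, sigma_w2, mu, rng):
    """Background noise plus impulsive noise.

    p        : probability that a sample is hit by an impulse
    sigma_w2 : background (Gaussian) noise power sigma_w^2
    mu       : impulsive-to-background power ratio, so sigma_i^2 = mu * sigma_w^2
    """
    w = np.sqrt(sigma_w2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    b = rng.random(n) < p                      # impulse occurrence indicator
    i = np.sqrt(mu * sigma_w2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return w + b * i

rng = np.random.default_rng(1)
noise = bernoulli_gaussian_in(10_000, p=0.1, sigma_w2=1.0, mu=100, rng=rng)
```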
The expected value of the received signal r_k is estimated by three statistics A, B and C using multi-order moments [18]; the corresponding expressions for the received-signal moments are given in (9)–(11), where E(·) denotes the expectation. Letting a = A√π/2, b = B and c = A√2π/16 · C, the expressions above can be rewritten, and combining (9)–(11) yields (12) and (13), which satisfy the stated conditions. In practice, the simplest way to obtain a, b and c is to compute the corresponding sample averages over M observations. Finally, combining (9)–(11) with (12)–(13) yields the estimates. It can be seen from these expressions that the characteristics (p̂ and μ̂) of the IN can be easily derived from a number of OFDM symbols, so it is easy to determine whether the channel is heavily disturbed or not. Using the estimated parameters p̂ and μ̂, the system can adjust the NNC and LI to suppress the noise introduced by the power line channel. Figure 4 depicts the difference between the real p and estimated p̂ values when the number of IN events in an alternating current (AC) cycle varies and LI is fixed at 20. It is observed that the estimated p̂ value approaches the actual p value when the number of IN events changes from 1 to 11. Therefore, it is reasonable to use the estimated p̂ value instead of the real p value when there are only a few burst IN events in an AC cycle.
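The exact moment statistics A, B and C of [18] are not reproduced above, so the sketch below uses a simplified alternative: matching the second and fourth moments of a two-state circular complex-Gaussian mixture, under the extra assumption that the impulse-free power σ₁² is known. It only illustrates the idea of moment-based estimation of p̂ and the impulsive power; it is not the paper's estimator.

```python
import numpy as np

def estimate_p_sigma2(r, sigma1_sq):
    """Simplified moment-matching for a two-state complex-Gaussian mixture.

    State 1 (no impulse): power sigma1^2, assumed known here.
    State 2 (impulse)   : power sigma2^2, occurring with probability p.
    Uses E|r|^2 and E|r|^4 of a circular complex Gaussian (E|r|^4 = 2*sigma^4 per state).
    """
    m2 = np.mean(np.abs(r) ** 2)
    m4 = np.mean(np.abs(r) ** 4)
    excess = m2 - sigma1_sq
    if excess <= 1e-12:                 # essentially no impulsive contribution observed
        return 0.0, sigma1_sq
    sigma2_sq = (m4 - 2 * sigma1_sq ** 2) / (2 * excess) - sigma1_sq
    p_hat = excess / (sigma2_sq - sigma1_sq)
    return float(np.clip(p_hat, 0.0, 1.0)), float(sigma2_sq)
```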
According to previous studies [19], the state of the IN can be divided into three types based on the parameters p and µ: 1) heavily disturbed (p = 0.25 and µ = 1000), 2) moderately disturbed (p = 0.1 and µ = 100), and 3) weakly disturbed (p = 0.01 and µ = 10). In the following sections, this classification is used to analyse and simulate the performance of the proposed algorithm.
Reconstruction of IN Using CS
As shown in Fig. 2, after de-interleaving at the receiver, the IN contained in the received signal r becomes sparse, so it is possible to reconstruct the IN using a compressed sensing method. To reconstruct the IN, the received signal r is transformed by the Fourier transform, in which z′ can be considered Gaussian noise and i′ is the IN after the DFT. If there were no IN and no background noise, the transmitted data could easily be recovered using the element-by-element relationship between received and transmitted subcarriers. On the contrary, when IN occurs during transmission, the whole OFDM block is affected and recovery of the OFDM block becomes difficult. Therefore, the null carriers are used to estimate the IN, and an IN-mitigation method based on CS is adopted to cancel the IN from the received signal. For the sake of simplicity, we use S to denote a K × N matrix with a single element equal to 1 per row, where the location of each 1 is the index of a null subcarrier; S and S_x form a pair of orthogonal selection matrices. The signal projected onto the null-carrier subspace [20] gives the observation vector v̂ of dimension K. To apply the principle of compressive sensing, the expression in Eq. (22) can be rewritten as in Eq. (24), where the vector û is a K × 1 vector obtained from the received data, e is the N × 1 sparse IN signal, and ẑ is a stochastic noise term. Taking the CS measurement model into account, the problem in (24) is transformed into estimating an optimal sparse vector e contaminated by ẑ. With these parameters, a priori information on the partial support of the IN can be identified; this partial support helps improve the performance of the CS algorithm for IN recovery, especially in bad conditions where the INR is relatively low or the sparsity level is large.
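A sketch of the null-carrier reconstruction step is given below. Orthogonal Matching Pursuit is used here only as a generic sparse-recovery routine standing in for the paper's CS algorithm and optimal-threshold design; the null-carrier positions, the assumed sparsity level and the unitary DFT construction are illustrative choices.

```python
import numpy as np

def omp(Phi, u, n_iter):
    """Orthogonal Matching Pursuit for u ≈ Phi @ e with e sparse (generic stand-in
    for the CS reconstruction step)."""
    e_hat = np.zeros(Phi.shape[1], dtype=complex)
    support, residual = [], u.copy()
    coeffs = np.zeros(0, dtype=complex)
    for _ in range(n_iter):
        corr = np.abs(Phi.conj().T @ residual)
        corr[support] = 0.0
        support.append(int(np.argmax(corr)))
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], u, rcond=None)
        residual = u - Phi[:, support] @ coeffs
    e_hat[support] = coeffs
    return e_hat

# Measurement on the K null carriers: the data part vanishes there, so
# u = S F r ≈ S F e + noise, and Phi = S F acts as the sensing matrix.
N = 512
null_idx = np.arange(128, 512)              # assumed null-carrier positions
F = np.fft.fft(np.eye(N)) / np.sqrt(N)      # unitary DFT matrix
Phi = F[null_idx, :]
# r: de-interleaved time-domain block after CP removal (complex, length N)
# e_hat = omp(Phi, Phi @ r, n_iter=20); r_clean = r - e_hat
```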
Adjustment of NNC and LI
In practical power line communication systems, the sparsity of the IN changes considerably over time; there are more IN disturbances during some hours of the day than others. Therefore, a flexible IN mitigation method is proposed to suppress the IN by changing the NNC and LI according to the current characteristic parameters of the IN. When the disturbance ratio of the IN is high, more null subcarriers and a longer LI are needed for CS to estimate the IN. On the other hand, when the disturbance ratio is low, the IN can be estimated with fewer null subcarriers and a shorter LI, which saves system bandwidth and improves the overall throughput. Figure 5 shows the BER performance of the proposed compressed sensing algorithm as affected by varying LI and NNC. The modulation mode is QPSK, and two IN samples measured in Sect. 2 were used: one sample is weak IN (p = 0.01 and µ = 10), and the other is heavy IN (p = 0.20 and µ = 1000).
In Fig. 5(a), when weak burst IN occurred, there is a general trend of the BER performance improving with the growth of NNC and LI. This trend is more pronounced when NNC is below 130; when NNC is greater than 130 and LI is greater than 10, the BER performance becomes very flat. This observation can be explained by the fact that the higher the value of LI, the higher the dispersion capability of the interleaver becomes; in addition, a higher NNC improves the performance of IN reconstruction. On the other hand, when heavy burst IN occurred, small NNC and LI are no longer sufficient, so the NNC and LI should be increased to maintain a low BER; appropriate values of NNC and LI are 170 and 20, respectively. In a practical communication system, the most effective method is to change LI and NNC adaptively according to the actual noise conditions.
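A possible realization of this adaptive selection is a small look-up table keyed by the estimated parameters, as sketched below. The heavy-IN entry (NNC = 170, LI = 20) follows the discussion above, while the other breakpoints and values are placeholders that a real system would fill in from offline BER simulations such as those in Fig. 5.

```python
def select_nnc_li(p_hat, mu_hat):
    """Pick (NNC, LI) from the estimated IN parameters.

    Only the heavy-IN entry is grounded in the text; the remaining
    breakpoints and values are assumptions for illustration.
    """
    if p_hat >= 0.2 or mu_hat >= 1000:      # heavily disturbed
        return 170, 20
    if p_hat >= 0.1 or mu_hat >= 100:       # moderately disturbed (placeholder values)
        return 150, 15
    return 130, 10                          # weakly disturbed
```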
System Simulation
In our simulations, the DFT size N is 512, the number of data subcarriers is 128, the total NNC is 384, and the CP length is 32. To analyze and simulate the performance of the proposed algorithm, two types of IN with different p and µ are used in the simulation, generated by switching on a fluorescent lamp and an electric oven, respectively. Since the focus of the paper is IN mitigation, a frequency-flat channel response is assumed and the impulse response of the channel is known in advance for all subcarriers. In the following sections, we first design a time-varying noise model based on the measured indoor IN; then the performance of the proposed algorithm is compared with other IN mitigation methods based on CS; finally, the proposed algorithm is compared with nonlinear noise suppression methods.
Time-Varying Burst IN
In practical power line communication system, due to the
Performance Comparison of IN Mitigation Algorithms Based on CS
Figure 7 shows that the scheme without the interleaver gave the worst BER performance, as poor as when no IN-mitigation algorithm is applied at all. This can be explained as follows: when the interleaver is not used, the received signal is not sparse enough for the CS algorithm to reconstruct the IN, so it is important to interleave the burst IN before it is processed by the CS algorithms and to select the NNC and LI adaptively according to the practical situation.
Performance Comparison between Different IN Mitigation Methods
In this section, we give the results obtained by comparing the proposed algorithm with the conventional clipping, blanking and blanking-clipping algorithms. The BER performance of the proposed algorithm and of the blanking, clipping and combined blanking-clipping algorithms is depicted in Fig. 8. It is clearly seen that the proposed algorithm outperforms the other nonlinear algorithms when Eb/N0 is greater than 2 dB. In addition, as NNC and LI increase, the performance of the proposed algorithm gradually improves. For example, when E_b/N_0 is fixed at 6 dB, the proposed algorithm with (NNC = 140, LI = 12), (NNC = 300, LI = 20) and (NNC = 380, LI = 25) results in BERs of 5.9 × 10^−4, 5.0 × 10^−5 and 8.2 × 10^−6, respectively. The BER gaps between the proposed algorithm and the other IN mitigation algorithms widen significantly as E_b/N_0 increases. This observation can be explained by the fact that the received signal is distorted when the conventional nonlinear IN mitigation algorithms are used.
On the contrary, when the proposed algorithm is used, the IN is reconstructed accurately by using a large NNC and can be well suppressed by subtracting it from the received data. For the sake of comparison, the output BER of the typical OFDM receiver, labelled 'no IN', is also included in this figure; it represents a system contaminated by WGN only. It is evident that the BER performance of the proposed algorithm is very close to the 'no IN' case when a large NNC and LI are used.
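For reference, the conventional nonlinear preprocessors used as baselines here can be summarized as in the sketch below; the thresholds T are design parameters and are not specified by this sketch.

```python
import numpy as np

def blanking(r, T):
    """Zero out samples whose magnitude exceeds the threshold T."""
    out = r.copy()
    out[np.abs(r) > T] = 0.0
    return out

def clipping(r, T):
    """Limit the magnitude to T while keeping the phase."""
    out = r.copy()
    hit = np.abs(r) > T
    out[hit] = T * out[hit] / np.abs(out[hit])
    return out

def blanking_clipping(r, T_clip, T_blank):
    """Keep small samples, clip moderate peaks to T_clip, blank samples above T_blank."""
    out = r.copy()
    mag = np.abs(r)
    mid = (mag > T_clip) & (mag <= T_blank)
    out[mid] = T_clip * out[mid] / mag[mid]
    out[mag > T_blank] = 0.0
    return out
```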
Conclusions
IN can cause serious problems in OFDM-based PLC systems and has become one of the major challenges in power line communications. | 2021-05-10T00:04:01.555Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "feb92de046fd47dae5259a9426b9d266c55a9fa3",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/transinf/E104.D/2/E104.D_2020EDP7157/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "594664e43291759de86d63edb9b0b450e430143d",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
21682361 | pes2o/s2orc | v3-fos-license | The rise and fall of an ancient Adélie penguin ‘supercolony’ at Cape Adare, Antarctica
We report new discoveries and radiocarbon dates on active and abandoned Adélie penguin (Pygoscelis adeliae) colonies at Cape Adare, Antarctica. This colony, first established at approximately 2000 BP (calendar years before present, i.e. 1950), is currently the largest for this species with approximately 338 000 breeding pairs, most located on low-lying Ridley Beach. We hypothesize that this colony first formed after fast ice began blocking open-water access by breeding penguins to the Scott Coast in the southern Ross Sea during a cooling period also at approximately 2000 BP. Our results suggest that the new colony at Cape Adare continued to grow, expanding to a large upper terrace above Ridley Beach, until it exceeded approximately 500 000 breeding pairs (a ‘supercolony’) by approximately 1200 BP. The high marine productivity associated with the Ross Sea polynya and continental shelf break supported this growth, but the colony collapsed to its present size for unknown reasons after approximately 1200 BP. Ridley Beach will probably be abandoned in the near future due to rising sea level in this region. We predict that penguins will retreat to higher elevations at Cape Adare and that the Scott Coast will be reoccupied by breeding penguins as fast ice continues to dissipate earlier each summer, restoring open-water access to beaches there.
Introduction
The Adélie penguin (Pygoscelis adeliae) is one of only two endemic species of penguin in Antarctica; it is circum-Antarctic in distribution and numbers in the millions [1]. This species also is distributed in a manner with several very large colonies (greater than 100 000 breeding pairs) at key locations, with smaller colonies distributed nearby [2]. Most colonies are located near coastal polynyas (areas of persistent open-water surrounded by sea ice), which provide open water access to breeding sites as well as highly productive marine food webs in the marginal ice zones [1,3]. The consistently largest colony in Antarctica is located at the entrance to the Ross Sea at Cape Adare (figure 1), where population estimates have ranged from 220 900 to 282 307 breeding pairs in the 1980s [4]. The population declined to 169 200 breeding pairs in the 1990s followed by a large increase with the current population estimated at 227 000 in 2012 (from colony counts) [5] and 338 231 nesting pairs in 2014 based on satellite imagery [6]. Most nesting penguins at Cape Adare are located on long, parallel ridges or mounds that cross Ridley Beach, a large triangular beach with an area of approximately 0.8 km 2 (figure 2). Previous excavations of these ridges have revealed that they are composed entirely of ornithogenic soils, indicating that they are all penguin-formed with natural beach sands at the base [7]. With the beach completely covered with nesting penguins, additional penguins have nesting sites that extend 300 m up a steep slope to a large upper terrace that extends to the south and southwest. A few small active subcolonies are located at the edge of this terrace overlooking Ridley Beach. We first investigated Ridley Beach and this upper terrace in January 2005 and completed excavations at active and abandoned penguin mounds to determine the age and extent of this large colony. At that time, our surveys were limited to a central area of the upper terrace, where numerous abandoned sites were located. Three of these sites (sites S1-S3; figure 2) were sampled with excavations. We also sampled four of the mounds associated with active subcolonies on Ridley Beach (mounds M1-M4; figure 2) and determined from radiocarbon analyses that this beach has been continuously occupied by breeding Adélie penguins for at least the past approximately 2000 years [7]. The upper terrace was colonized slightly later than the beach at approximately 1700 BP, indicating that this terrace was not occupied by breeding penguins until after all potential nest sites on the beach were taken.
In January 2016, we revisited the upper terrace at Cape Adare to conduct additional surveys and sampling. Though ground time was limited owing to rapidly changing weather conditions, we were able to locate numerous other abandoned penguin sites that extend to the south edge of this terrace. Further, these sites all had dry, ancient ornithogenic soils indicating none had been occupied recently. Two such sites were sampled with excavations, the most distant being approximately 1 km south of the terrace edge overlooking Ridley Beach. Here, we report new radiocarbon dates from these excavations, as well as additional dates on samples collected during the 2005 excavations, and present evidence that the entire Cape Adare colony was once nearly twice the size that it is today with occupation of most of the upper terrace by approximately 1200 BP. We also use stable isotope analysis of modern penguin egg membrane to determine if dietary differences among penguin colonies in the Ross Sea help explain the large occupation at Cape Adare, past and present.
Excavations and sampling
Abandoned mounds and subcolonies were mapped using a handheld Garmin GPSMAP 78s. Locations of sites (lat./long.) were imported into Google Earth Pro (v. 7.3.0). Area measurements in square kilometres were obtained by using the polygon tool in this software. Nine sites were excavated and/or sampled following previously published methods in [8] during both visits to Cape Adare in January 2005 and 2016. At each sampling site, a 1 × 1 or 0.5 × 0.5 m test pit (size of pit varied based on surface conditions and time available in the field) was placed at the centre of the abandoned subcolony, or the sites were probed and sampled with a trowel. Surface pebbles were removed and placed on a tarpaulin. Excavations proceeded in 5 cm levels with all excavated sediments dry-screened through two nested screens with mesh sizes of 0.64 and 0.32 cm 2 , respectively. Organic remains were separated from the larger mesh screen in the field and sediment from the 0.32 cm 2 mesh screen was placed into a large sediment bag by level and saved for additional analysis and sorting in the laboratory. Excavations continued until the bottom of ornithogenic soils was reached as recognized by a change in colour and texture of the soil. The pits were then backfilled and all surface pebbles were replaced. These methods were used on three sites (S1-S3) on the upper terrace in 2005 with two additional sites (S4 and S5) excavated in 2016 (figure 2).
Penguin mounds on Ridley Beach in 2005 were too deep and extensive to sample using these methods. Instead, one mound (M3) was found eroded through its centre, exposing a greater than 1 m deep profile of the entire mound. It was sampled by cleaning the profile and obtaining organic remains (penguin bones, feathers) from the upper, middle and lower layers, including the bottom interface where volcanic beach gravels and sand indicated the base of the ornithogenic deposits. Three other mounds (M1-M2, M4; figure 2) were sampled by excavating small holes with a hand trowel to probe to the base of the mounds and obtain additional organic remains for analysis.
In 2005, five recently hatched penguin eggshell samples were collected by active subcolonies located at the edge of the upper terrace overlooking Ridley Beach. Sampling activities in 2016 were restricted to areas away from these active subcolonies and no additional modern eggshell samples were collected. However, we collected recently hatched eggshell at three other active Adélie penguin colonies farther south in the Ross Sea in 2016 at Cape Hallett, Adélie Cove and Inexpressible Island (figure 1).
Radiocarbon analysis
Eight radiocarbon dates were completed on samples collected in 2005 by Beta Analytic, Inc., and are reported in [7]. Seven additional radiocarbon dates were completed on eggshell from M2 sampled in 2005 (two dates), and on feather, eggshell and egg membrane from S4 and S5 sampled in 2016. These seven samples were submitted to the Woods Hole radiocarbon laboratory (NOSAMS) for accelerator mass spectrometry dating and are reported with NOSAMS sample numbers. Each of the 15 radiocarbon dates (in radiocarbon years before present, BP) was corrected and calibrated for the marine carbon reservoir effect using CALIB 7.1 and the Marine13 calibration curve [9,10] with a ΔR = 750 years, and is reported here in calendar years BP. This calibration provided a 2-σ range and median age for estimating the true age of each sample.
Stable isotope analysis
Table 1. Radiocarbon dates on Adélie penguin tissues from ornithogenic soils at Cape Adare, Antarctica. (Uncorrected dates are in radiocarbon years before present (BP); dates were corrected for the marine carbon reservoir effect and calibrated with the Marine13 calibration curve using CALIB 7.1 [9,10] and a ΔR = 750 ± 50 years to provide 2-sigma ranges and median dates in calendar years BP. Absence of 2-sigma values indicates dates that were too young for calibration and essentially modern in age. All dates with OS laboratory numbers are from samples collected in 2016 and were analysed at the Woods Hole National Ocean Sciences Accelerator Mass Spectrometry (NOSAMS) facility; dates with Beta laboratory numbers are from samples collected in 2005, analysed at Beta Analytic, Inc., Coral Gables, Florida, previously reported in [7] and recalibrated using the newer version of CALIB (CALIB 7.1).) (Table body not reproduced in this extract; columns include laboratory no., location, material and uncorrected 14C age.)
Stable isotope analysis of carbon and nitrogen from modern penguin egg membrane was completed at the Stable Isotope Laboratory, University of Saskatchewan, Saskatoon, Canada. Stable isotope values were obtained using a Thermo Finnigan Flash 1112 EA coupled to a Thermo Finnigan Delta Plus XL via a Conflo III. Carbon isotope ratios are reported in per mil notation relative to the VPDB scale. Nitrogen isotope ratios are reported in per mil notation relative to AIR. Carbon isotope data are calibrated against the international standard L-SVEC (δ13C) (n = 18, 2σ). %C and %N measurements have a precision of ±10% of the reported percentage. We used one-way ANOVA and a Shapiro-Wilk's normality test using SIGMAPLOT 13 (Systat Software, Inc.) to test for differences in stable isotope values in egg membrane among the four penguin colonies.
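For readers without SigmaPlot, equivalent tests can be run in Python as sketched below; the δ15N values shown are purely illustrative placeholders, not the measurements reported in this study.

```python
from scipy import stats

# Hypothetical delta-15N values (per mil) for egg membrane from four colonies;
# these numbers are illustrative only.
cape_adare    = [8.1, 8.4, 7.9, 8.2, 8.0]
cape_hallett  = [8.3, 8.0, 8.5, 8.1]
adelie_cove   = [10.2, 10.6, 10.1, 10.4]
inexpressible = [10.0, 10.3, 10.5, 9.9]

groups = [cape_adare, cape_hallett, adelie_cove, inexpressible]
print([stats.shapiro(g)[1] for g in groups])   # Shapiro-Wilk normality p-value per colony
print(stats.f_oneway(*groups))                 # one-way ANOVA across the four colonies
```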
Radiocarbon dates and occupation at Cape Adare
The seven new radiocarbon dates on penguin tissues collected in 2005 and 2016 produced calibrated calendar ages ranging from 785 to 1386 BP (median ages, table 1), except for one date on a feather from S4 which was too young for calibration and essentially modern. Progressively younger dates occurred at sites on the upper terrace above Ridley Beach that lie more distant to the south and southeast. The oldest date from Cape Adare, first reported in [7] from the base of ornithogenic soils exposed at M3 on Ridley Beach, has a median age of 1962 BP (table 1). The youngest median dates reported here are from M4 middle sediment (780 BP) and S4 level 1 (785 BP). S4 is located approximately 1 km southeast of the edge of the upper terrace where active penguin colonies are currently located and is the most distant site from the water so far discovered at Cape Adare (figure 1).
Discussion
Our results suggest that Cape Adare, first occupied by breeding penguins at approximately 2000 BP [7], follows an expected colonization pattern, with Ridley Beach completely occupied by approximately 1200 BP and remaining so today. Colonization of the upper terrace began by approximately 1700 BP, with the most distant colonies to the south on this terrace occupied by approximately 1200 BP (based on median ages, table 1) and with at least some remaining active until approximately 800 BP. The relatively brief occupation of these sites also is suggested by their relatively shallow (one level, or 5-8 cm) ornithogenic soils. Only the northern edge of this terrace overlooking Ridley Beach remains occupied today. Numerous other abandoned pebble mounds were located on the upper terrace at Cape Adare, but ground time was too limited in 2016 for additional sampling. We believe all of these sites, owing to their similar appearance on the surface, were colonized during the same sequence of occupation as S1-S5. If so, the large area of occupation on this terrace (approx. 0.78 km², or similar to the area on Ridley Beach at 0.80 km²; figure 2) is conservatively estimated to have supported an additional approximately 200 000 breeding pairs of Adélie penguins at peak occupation. Thus Cape Adare, though currently one of the largest Adélie penguin colonies in Antarctica, was possibly up to twice as large at approximately 1200 BP as it is today, consisting of approximately 500 000 breeding pairs at that time. We hypothesize that this 'supercolony' (a penguin colony with greater than 500 000 nests) underwent continuous growth after initial colonization at approximately 2000 BP until approximately 800 BP, when it began declining to its present size. What factors were driving the increase in this penguin 'supercolony' during this period? Other events in the Ross Sea at that time may help explain this hypothesized growth at Cape Adare. From 4000 to 2000 BP, the Scott Coast as well as other locations on Beaufort and Franklin Island were occupied by breeding penguins during a warm period known as the penguin 'optimum' [7,11,12]. The Scott Coast was completely abandoned after 3000-2000 BP, with the youngest site at Marble Point (figure 1), probably owing to a cooling period that caused increased fast ice that blocked access to beaches along this coastline, preventing penguins from breeding there [12]. This fast ice in western McMurdo Sound persists well into the summer months today and the Scott Coast has remained abandoned by breeding penguins to the present. We hypothesize that colonization at Cape Adare began as the Scott Coast was being abandoned by approximately 2000 BP, signifying a large-scale movement of breeding penguins from the southern to the northern Ross Sea. Further, upwelling of Circumpolar Deep Water in the northern Ross Sea from the continental shelf break has maintained open water in the Ross Sea polynya near Cape Adare, along with the high marine productivity at the marginal ice zone [13,14]. The northern Ross Sea and associated polynya have provided and continue to support large swarms of krill that in turn support the large populations of breeding penguins that currently occur at Cape Adare, Cape Hallett (61 160 nests) and Coulman Island (19 437 nests) [6].
Though Cape Adare has been ice-free for thousands of years [15], Ridley Beach may not have been accessible to breeding penguins prior to 2000 BP owing to its low elevation, causing it to be either submerged at slightly higher sea level or too exposed to storm surges in the warming phase of the penguin optimum. Once the beach did become accessible, it was able to support increasing numbers of breeding penguins, especially with development of the higher-elevation ornithogenic mounds that currently transect the beach. Though the upper terrace could have been occupied prior to 2000 BP, older ornithogenic deposits have yet to be discovered and additional investigation is warranted.
Stable isotope data also indicate that penguins at Cape Hallett and Cape Adare presently feed significantly more on krill than fishes, as indicated by lower δ 15 N values in egg membrane from these sites compared to similar samples from active colonies in the Terra Nova Bay and southern Ross Sea regions. Although the sample size of modern egg membrane from Cape Adare is small (n = 5), these results support an earlier study on modern penguin eggshell δ 13 C and δ 15 N from colonies in the southern Ross Sea compared with Cape Hallett to the north that indicated a more krill-based diet at this latter site [16]. While these studies on eggshell and membrane only represent diet of female penguins prior to egg laying, other dietary studies at modern and ancient colonies support these dietary differences between northern and southern colonies in the Ross Sea. For example, investigation of Adélie penguin diet at Cape Hallett, based on stomach flushing and contents [17], found that these penguins feed largely on krill during the guard stage of chick-rearing and prey increasingly on fishes as the season progresses. Satellite tracking of foraging penguins also revealed that most move to the continental shelf break. Moreover, stable isotope analyses of ancient penguin guano also suggest a diet based more on fishes in the southern Ross Sea during the Holocene [18] that persists with colonies on Ross Island today (colonies at Cape Royds, Bird and Crozier, figure 1) [19]. Prey remains and otoliths from ornithogenic soils excavated at abandoned colonies on Ross Island also indicate that Antarctic silverfish (Pleuragramma antarcticum) has been a major component of penguin diet there for at least the past millennium [20].
Given the high marine productivity and krill availability in the northern Ross Sea today, which factors caused a decline in penguins at Cape Adare after the 'supercolony' reached its maximum extent by approximately 1200 BP? We have no explanation for this decline except that it was probably associated with changes in wind patterns, air temperatures, and the size of the Ross Sea polynya that ultimately affected marine productivity [21]. Alterations of this nature have caused total breeding failure, lowered chick production and high mortality events in Adélie penguins in the Ross Sea and East Antarctica [22,23] and continuing impacts over geological time could result in colony decline or abandonment. The relatively brief occupation indicated by the shallow ornithogenic soils that characterizes S4-S5 on the upper terrace supports this conclusion.
Ridley Beach remains fully occupied at Cape Adare today, but the beach remains at or near sea level with occasional flooding from storm surges and winds, as indicated by the pools of standing water in low areas between the ornithogenic ridges. Glacial melt and climate warming now occurring in Antarctica are causing sea level to rise at an enhanced rate of at least 2.0 ± 0.8 mm per year above the mean for oceans south of 50°S [24]. Moreover, average summer temperatures have been increasing by 0.5°C per decade at McMurdo Station since the 1980s [25]. We believe that the penguin colony on Ridley Beach is thus highly endangered and probably will be abandoned owing to sea level rise and increased impacts of storms and storm surges on nesting penguins. As this beach becomes uninhabitable, it is conceivable that penguins will seek higher ground and again begin to occupy former colonies on the upper terrace, a situation currently taking place at Beaufort Island [25]. Cape Hallett also is on a large beach at or near current sea level and will probably be abandoned with future sea level rise as well, but there are no higher-elevation terraces at that location for breeding penguins to retreat to. Thus, gradual displacement of hundreds of thousands of breeding Adélie penguins can be expected in the northern Ross Sea if current warming trends and rates of sea level rise continue at their current pace. The abandonment of Cape Adare and Cape Hallett could also result in a reverse of the large-scale population movements that occurred at approximately 2000 BP. The current warming trends are causing more frequent breakouts of the fast ice blocking the Scott Coast in summer each year [26] and this ice will eventually disappear, allowing breeding penguins access to beaches and former breeding sites that have remained unoccupied for the past 2000 years.
Penguin occupation in the Ross Sea continues to be a dynamic process with new colonies forming and others abandoned over geological time with changes in sea ice conditions, access to breeding sites and climate change [7,27,28]. What we are witnessing today in the Ross Sea is an example of how penguins have responded to climate change over millennia, except that it is occurring at a faster pace as warming trends and sea level rise accelerate in this region.
Ethics. No animals were handled or harmed in this research. Research was conducted under an Antarctic Conservation | 2018-05-21T22:38:44.931Z | 2018-04-01T00:00:00.000 | {
"year": 2018,
"sha1": "edc5152c4610bb83f18a72234baeed52b22fc59d",
"oa_license": "CCBY",
"oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.172032",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb6975b5ca33111de95ed14ab093f9d1af669201",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
247748678 | pes2o/s2orc | v3-fos-license | Power Network Uniqueness and Synchronization Stability from a Higher-order Structure Perspective
Triadic subgraph analysis reveals the structural features of power networks based on higher-order connectivity patterns. Power networks have a unique triad significance profile (TSP) of the five unidirectional triadic subgraphs in comparison with scale-free, small-world and random networks. Notably, the triadic closure has the highest significance in power networks. Thus, the unique TSP can serve as a structural identifier to differentiate power networks from other complex networks: power networks form a network superfamily. Furthermore, synthetic power networks based on the random growth model grow into networks belonging to the superfamily with relatively few transmission lines. The significance of triadic closures strongly correlates with the construction cost measured by network redundancy. The trade-off between synchronization stability and construction cost leads to the power network superfamily; the power network characterized by the unique TSP is essentially the consequence of this trade-off. The uniqueness of the power network superfamily tells an important fact: power networks
maintain a high level of synchronization stability at a low construction cost.
Introduction
A power system generally consists of power stations, electrical substations, transmission lines, and electricity consumers. All of these components are linked together; therefore, a power system can be naturally described as a complex network. Complex network analysis provides valuable insights into the characteristics of power system structures and dynamics. For instance, it is used to rank the importance of components [1], detect clusters [2] and recognize global structural features of networks [3,4,5]. However, most previous studies of power system structure and dynamics rely on lower-order properties based on single nodes and edges.
Power grids exhibit rich, higher-order connectivity in the real world. Establishing the relationship between higher-order structures and dynamics for power grids has great theoretical value and practical significance.
The higher-order structures of networks arise from local functional units in complex systems. Network motifs, the small repeated subgraphs, are considered the building blocks of complex networks. The higher-order structural features of networks can be identified more effectively at the level of motifs [6,7] than at the level of individual nodes and edges. Moreover, subgraphs are organized in particular ways to form complex networks, so the organization of subgraphs can be used to characterize different types of networks [8]. Understanding the functional role of higher-order structures in power networks is an open problem; recent studies show that motif-based network models can bring new insights into the complicated topology-dynamics relation [9,10] beyond node-based or edge-based network models.
Previous studies mainly focus on two aspects of the topology-dynamics relation in power systems: stability of frequency synchronization and robustness against cascading failures. Dynamic models estimate the synchronization stability. The power grids are considered to be coupled oscillator networks whose dynamics are governed by the swing equation. Some distinct higher-order structures are identified to emphasize the important role of the local structures in synchronization. For example, the tree-like local structures [11] can strongly diminish synchronization stability. On the other hand, the nodes in triangles with low betweenness, called detour nodes [12], often own high stability. Furthermore, the modular structure of a power grid affects the distribution of the operational resilience of nodes [13]. However, it should be mentioned that the special performance of higher-order structures is highly influenced in the context of global network structures, i.e., the global synchronization of complex networks may be reduced when networks become more clustered with an increasing number of triangles [14]. Also, the basin stability transitions with the increasing coupling strength for all 4-node and 6-node motif structures are studied without the context of networks [15].
But in the context of networks, it is difficult to discuss the functional role of the subgraphs [16].
Quasi-static models are generally used to estimate robustness against cascading failures. The power grids are considered pure complex networks whose robustness is calculated based on complex network theory and DC/AC models [17].
The local subgraph structures are identified to study the network robustness. For example, network motifs can be used as a warning signal for the higher risk of large outages under continuous line/node removal scenario [18], and motif concentrations are potentially used as alternative local metrics of robustness under attacks [19,20]. The real-world power grids evolve according to the engineering guidance and standards [21] so that there are unique structural features in their networks. However, to our knowledge, few studies are focused on the relationship between the distinct motif-based topological features of power grids in the real world and their impacts on dynamics. Although the robustness against cascading failures has been well studied based on the quasi-static model from a subgraph perspective [18,19,20], we aim to reveal the hidden mechanism of synchronization stability and the high-order structures based on the swing equation.
In this paper, we use the significance profile of triadic subgraphs to identify the higher-order structural features and find that power networks have a unique triad significance profile (TSP) compared to the scale-free, smallworld and random networks. We use the random growth model [22], which is especially proposed to describe the expanding of power grids, to build synthetic power networks, and find that the power network superfamily is formed with a fewer number of transmission lines. Furthermore, we explain that the power network superfamily can maintain a high level of global synchronization stability at a low construction cost, i.e., the shorter transmission lines statistically measured by network redundancies.
The paper is organized into four sections. In Section 2, we discuss the five triadic subgraphs and their significance in typical power networks. In Section 3, we discuss the triad significance profile and the power network superfamily. In Section 4, we discuss the unique network structure of the power network superfamily due to the trade-off between the network synchronization stability and network redundancy. The final discussion and concluding remarks are given in Section 5.
Subgraph Significance in Power Networks
A high-voltage transmission power grid can be modeled as a unidirectional complex network [23] G(V, E, A), where V is the set of nodes, E the set of edges and A the unidirectional adjacency matrix. The node V_i ∈ V represents a power/transformer station in the power grid. The edge E_ij ∈ E represents the transmission line between nodes V_i and V_j. Considering that electric energy flows from one node to the other through an edge, A is used to record the unidirectional topology, where A_ij = 1 denotes energy transmission from V_i to V_j along E_ij; otherwise A_ij = 0, meaning either that there is no edge between V_i and V_j or that the direction of power flow in E_ij is from V_j to V_i. For simplicity, the direction of power transmission in E_ij is determined by the phases [24] of the two nodes based on the DC model.
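A minimal sketch of this orientation step is given below; it assumes the nodal phase angles are already available from a DC power-flow solution and simply orients every line from the higher-phase node to the lower-phase node, which is the flow direction under the DC approximation.

```python
import numpy as np

def orient_by_phase(edges, theta, n_nodes):
    """Build the unidirectional adjacency matrix A from an undirected edge list.

    Under the DC model the flow on line (i, j) goes from the node with the
    larger phase angle to the smaller one, so A[i, j] = 1 if theta[i] > theta[j].
    theta is assumed to come from a DC power-flow solution.
    """
    A = np.zeros((n_nodes, n_nodes), dtype=int)
    for i, j in edges:
        if theta[i] > theta[j]:
            A[i, j] = 1
        else:
            A[j, i] = 1
    return A
```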
Triadic Subgraph Significance
Network motifs are recurrent, significant subgraphs and are considered the building blocks of complex networks [29]. Due to the sparseness of power grids [22], only small local subgraphs can be significantly present in such networks.
Therefore, all the unidirectional triadic subgraphs are considered as candidate motifs in power networks; they are illustrated in Figure 1. The functional roles of local subgraphs are still unclear in power systems. Although the beneficial function of the subgraphs may not be clear, deviations from null-hypothesis models provide a strong indication that certain local structures are important to the whole system. To identify which subgraph is significant in power networks, the statistical significance of a triadic subgraph M can be described by the Z score [8], Z = (N_grid − N̄_rand) / std(N_rand), where N_grid is the number of M instances in the power network, and N̄_rand and std(N_rand) stand for the mean and standard deviation of the number of M instances in the null-hypothesis randomized networks. To avoid misleading estimates of subgraph significance, the null-hypothesis randomized networks are used as reference networks; they are generated by the following two-step swap scheme. Step 1 : Remove two randomly selected unidirectional lines A → a and B → b, and create new unidirectional lines A → b and B → a.
Step 2 : If one of the lines already exists, no swap is carried out, and go back to Step 1.
Since complex networks in the real world are built based on specific design principles and functional constraints, small local subgraphs may have different significance in different complex networks [29,8,6]. However, the null-hypothesis randomized networks are generated without any design principle or functional constraint, while maintaining the same number of nodes and edges and the same degree sequence as the observed power networks. More specifically, the randomized networks maintain the same lower-order edge-based features as the observed power networks, such as the number of incoming and outgoing edges for each node. Thus, the significance of local subgraph structures in the observed power networks can be described in comparison with the null-hypothesis randomized networks. We take five typical power networks, including the IEEE 57-bus test system, IEEE 118-bus test system, IEEE 300-bus test system [30], the UK Power Grid [31] and the North European Grid in Figure 2, to evaluate the significance of the five unidirectional triadic subgraphs based on Z scores. Figure 2 shows that the five unidirectional triadic subgraphs have similar relative Z-score significance across the typical power networks: Z_1, Z_2 and Z_3 are equal to 0, Z_4 is negative within (−2, 0), and Z_5 is positive within (5, 12). The observed power networks and the null-hypothesis randomized networks contain the same numbers of M_1, M_2 and M_3 instances, whereas the M_5 triadic closure has high significance in power networks.
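The sketch below illustrates the procedure: the two-step swap is applied repeatedly to obtain degree-preserving randomized networks, and the Z score of a chosen triad type is computed from a triadic census. The mapping of networkx's census keys onto the subgraphs M_1–M_5 of Figure 1, the number of swaps and the number of randomized replicas are left as assumptions here.

```python
import random
import numpy as np
import networkx as nx

def randomize_preserving_degrees(G, n_swaps, seed=0):
    """Degree-preserving randomization of a DiGraph by the two-step swap scheme:
    pick edges A->a and B->b, rewire to A->b and B->a unless either new edge
    already exists or would create a self-loop."""
    R = G.copy()
    rng = random.Random(seed)
    done = 0
    while done < n_swaps:
        edges = list(R.edges())
        (A, a), (B, b) = rng.sample(edges, 2)
        if A == b or B == a or R.has_edge(A, b) or R.has_edge(B, a):
            continue
        R.remove_edges_from([(A, a), (B, b)])
        R.add_edges_from([(A, b), (B, a)])
        done += 1
    return R

def z_score(G, triad_key, n_rand=100):
    """Z = (N_grid - mean(N_rand)) / std(N_rand) for one directed triad type,
    counted with networkx's triadic census (triad_key must be chosen to match
    the desired M subgraph)."""
    n_grid = nx.triadic_census(G)[triad_key]
    counts = [nx.triadic_census(
                  randomize_preserving_degrees(G, 10 * G.number_of_edges(), seed=s)
              )[triad_key] for s in range(n_rand)]
    std = np.std(counts)
    return 0.0 if std == 0 else (n_grid - np.mean(counts)) / std  # zero-Z case (M1-M3)
```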
Zero Z Score in Power Networks
Z score becomes zero only when N grid in power networks is equal to N rand in randomized networks. Figure 3 shows the swap scheme to explain why subgraphs M 1 , M 2 and M 3 have zero Z scores. The randomized networks are still unidirectional due to the swap scheme. As shown in Figure 3(a), we randomly choose two lines A → a and B → b in a unidirectional network G 1 . When a swap happens, the two new lines A → b and B → a are created and the original two lines A → a and B → b are removed, which is shown in Figure 3(b). The unidirectional network after the swap is denoted as G 2 .
For the G 1 and G 2 networks, the only difference is the swap links. Figure 3 shows the number of M 1 , M 2 and M 3 instances in the G 1 and G 2 network is equal according to the swap scheme. The proof details given below, 1. The number of the M 1 instances in G 1 and G 2 : 2. The number of the M 2 instances in G 1 and G 2 : 3. The number of the M 3 instances in G 1 and G 2 : where N(•|•) represents the number, • represents triadic subgraph instances, and • refers to the G 1 network or the G 2 network. As a result, the swap scheme can not change the number of M 1 , M 2 and M 3 instances. Thus, the Z scores of M 1 , M 2 and M 3 subgraphs are exactly zeros in reference to the randomized networks.
Overrepresented Significance of M 5 Motif
To confirm the overrepresented significance of the M_5 triadic closure in the five typical power networks of Figure 2, alternative null-hypothesis randomized networks without M_4 instances are generated by the following alternative two-step swap scheme: Step 1 : Remove two randomly selected unidirectional lines A → a and B → b, and create new unidirectional lines A → b and B → a.
Step 2 : If one of the lines exists, or any of the a → X appears, no swap is carried out and go back to Step 1.
Here X represents an arbitrary node, and the arrow lines '→' and '←' represent unidirectional lines in the networks.
By automatically discarding M_4 instances, the alternative swap scheme generates degree-preserving null-hypothesis randomized networks that obey Kirchhoff's law of the DC model, as power networks do. Typically, 2 is taken as the threshold of the Z score for judging the significance of a subgraph in a network [8], as indicated by the red line in Figure 4. It is clear that all five typical power networks are far above this threshold. Therefore, the two types of Z_5 for the five typical power networks in Figures 2 and 4 both confirm the overrepresented significance of the M_5 triadic closure.
Triadic Local Structures of Power Networks
Z scores of power networks with different sizes are not convenient for comparison. For example, it is hard to say which is more significant, Z_5 in the IEEE 118-bus test system or Z_5 in the IEEE 300-bus test system shown in Figure 2. The normalized Z scores of the triadic subgraphs, also known as the triad significance profile (TSP) [8], are defined as TSP_i = Z_i / sqrt(Σ_j Z_j^2), where the sum runs over the five triadic subgraphs. The normalization emphasizes the relative significance of these triadic subgraphs. The SW networks [3] with 118 nodes and 179 edges are generated by the following steps: Step 1 : Create a ring of 118 nodes, in which each node is connected with its two nearest neighbors; then randomly choose 62 different nodes.
For each selected node V i , create an edge E i,i+2 ; Step 2 : Create shortcuts by rewiring edges: For each line E uv in the ring, a node w is randomly chosen and a new line E uw is added with rewiring probability p re . Once the new line E uw is created, the old line E uv must be removed; Step 3 : Check whether the network after rewiring is connected: If the network is not connected, repeat Step 1 and 2 again.
When the rewiring probability p re = 0, the networks are regular. As p re increases, the networks become more random and the small-worldness η (C/C r ) / (L/L r ) changes, where C is clustering coefficient, C r is the average clustering coefficient of the equivalent random networks with same degree, L is the path length and L r is the average path length of the equivalent random networks. When p re achieves 100%, the SW networks become completely random. In this paper, the typical SW networks are defined as those SW networks with the highest small-worldness η (about p re = 18%), and the completely random SW networks are those SW networks generated by p re = 100%.
The scale-free (SF) networks [4] with 118 nodes and 179 edges are generated as the following steps: Step 1 : Create a random minimum spanning tree with 56 nodes and 55 edges initially; Step 2 : Generate a new node to link with two different existing nodes by 2 new lines. The nodes are selected from the existing nodes randomly according to the probability Π (w) = deg(w) j∈V deg(j) where w represents the selected node and deg(·) the nodal centrality of degree. Repeat this step 62 times to make the number of existing nodes become 118.
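A sketch of this scale-free comparison-network construction is given below. A uniformly grown random tree is used as a stand-in for the random minimum spanning tree of Step 1, and the function and parameter names are mine; the node and edge counts (118 nodes, 55 + 2 × 62 = 179 edges) match the description above.

```python
import random
import networkx as nx

def sf_comparison_network(n_init=56, n_final=118, m_links=2, seed=0):
    """Grow the SF comparison network: start from a random tree on n_init nodes,
    then attach each new node to m_links existing nodes chosen with probability
    proportional to degree (preferential attachment)."""
    rng = random.Random(seed)
    G = nx.Graph()
    G.add_node(0)
    for i in range(1, n_init):                  # random tree: 56 nodes, 55 edges
        G.add_edge(i, rng.randrange(i))
    for new in range(n_init, n_final):          # 62 new nodes, 2 edges each
        nodes, degs = zip(*G.degree())
        targets = set()
        while len(targets) < m_links:
            targets.add(rng.choices(nodes, weights=degs, k=1)[0])
        for t in targets:
            G.add_edge(new, t)
    return G                                    # 118 nodes, 179 edges

G_sf = sf_comparison_network()
```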
The comparison of the four networks reveals the structural particularity of the power network superfamily. Figure 6 shows the TSP pairs for the IEEE 118-bus test system, the typical SW networks (p re = 18%), the completely random SW networks (p re = 100%) and the SF networks. The TSP pair for the IEEE 118-bus test system is roughly similar to the pair of the typical SW networks, while much different from the pairs of completely random SW networks and the SF networks. Figure 6 implies that power networks have small-worldness to a certain extent. The design principles of power systems may favor the small-worldness of power grids, i.e., the need for short path length between two nodes far away geographically, N-1 security criterion to promote loop structures, and diminishing dead tree structures [11]. So the power networks cannot be completely random. Furthermore, since the hub nodes and numerous leaf structures can be weak points in power systems, the power networks cannot be scale-free either.
Synthetic Power Networks in Power Network Superfamily
For the typical SW networks, the TSP pair in Figure 6 is TSP_4 = −0.058 and TSP_5 = 0.998, whereas the TSP pair of the power network is TSP_4 = −0.113 ± 0.027 and TSP_5 = 0.993 ± 0.003. Therefore, although the typical SW networks have structures similar to the power networks to some extent, power networks still have their own unique TSP pair at the triadic subgraph level.
By using the random growth (RG) model [22] to generate synthetic power networks, we can see how a network grows into a power network at the triadic subgraph level. The growth mechanisms in the RG model are found in real-world power grids; therefore, the unique TSP pair can tell whether the RG model reproduces a structure similar to the real-world power network. The RG model generates synthetic power networks through two phases, an initialization phase and a growth phase; the average nodal degree κ can range from 2.12 to 3.02 since κ = 2 + 2p + 2q. The remaining three parameters are n_0 = 10, r = 1, and s = 0, which are determined according to the Western US power grid [22]. Figure 7 shows that as κ increases, the synthetic power networks grow larger and denser and gradually approach the realistic power grids in terms of the TSP pair. When κ is larger than 2.62, the change of the TSP pair is no longer apparent. Empirically, κ of realistic power grids [32] is around 2.8, which is close to 2.62. A larger κ indicates more transmission lines in a power grid with a fixed number of nodes. Since the power network superfamily is characterized by its unique TSP pair, Figure 7 shows that a growing network approaches the structure belonging to the power network superfamily with a relatively small number of transmission lines.
Synchronization Stability and Triadic Closure
The TSP pair is a higher-order structural feature that differentiates power networks from other complex networks, as discussed in Section 3.
Triadic Closure Significance and Power Network Stability
The transient dynamics of power systems admit multiple synchronous states [33,23] and are governed by the swing equation in Eq. (6),

θ̈_i = −α_i θ̇_i + p_{m,i} + Σ_{j=1}^{N} K_{ij} sin(θ_j − θ_i),  i = 1, …, N,  (6)

where N is the number of nodes in the power grid and θ_i is the phase difference of node V_i from the operating synchronous steady state, in a reference frame co-rotating at the rated frequency. The operating synchronous steady state of the system is denoted as (θ*, θ̇* = 0), where θ* represents [θ*_1, · · · , θ*_N] and θ̇* = 0 means [θ̇*_1 = 0, · · · , θ̇*_N = 0]. α_i is the damping coefficient of node V_i, p_{m,i} represents the power injection of node V_i, and K_ij is the capacity of transmission line E_ij.
Basin stability [37] is a measure of global synchronization stability related to the volume of the basin of attraction in state space. Figure 8 shows the basin stability of the load node in a uniform two-node power system. The generator node V_G has a power input p_G = +1.0, and the load node V_L has a power output p_L = −1.0. The coupling strength is K = 8, and the damping coefficient is α = 0.1 for both nodes. The initial states of the node phase and frequency are drawn uniformly from the state space Q = [−π, π] × [−15, +15]. Figure 8(a) shows the basin stability of the load node. There are three different regions; the green region is the basin of attraction for the operating synchronous state.
The green region is the basin of attraction for the operating synchronous closure has little impact on the S 1 basin stability. Theoretically, the nonzero maximum Lyapunov exponents based on Eq. 6 [34,11] can be used to measure the S 1 basin stability for networks because the S 1 basin of attraction is close to the operating synchronous state and its volume is small. For the SW networks in Figure 9(b), their non-zero maximum Lyapunov exponents are all − α 2 = −0.1, which can be used to explain the invariance of S 1 basin stability. But, for most cases, the S 2 basin stability determines the network synchronization stability because the S 1 basin stability is too small, and the S 2 basin stability dominates. The smoothed color areas of S 2 basin stability represent the probability density distribution of BS and Z 5 for the SW networks. The color density is calculated by Bivariate Kernel Density Estimator [40]. Figure 9(b) shows that the probability density distribution of S 2 basin stability become more scattered with the increasing Z 5 . In other words, the probability of finding networks with large basin stability decreases as Z 5 rises. In Figure 9 When the number of nodes and edges are fixed, more long-range transmission lines and fewer local triadic closures indicate that longer transmission lines are needed to connect with power nodes, which means the power grid has a higher cost. However, building more long-range transmission lines cannot promote the network basin stability at all for the typical power networks from
Discussion and Concluding Remarks
Triadic subgraph analysis reveals the structural features in power networks based on higher-order connectivity patterns. Five unidirectional triadic subgraphs are identified in power grids. The triad significance profiles (TSP) of the five subgraphs are estimated compared to the randomized networks. As a result, power grids demonstrate the unique TSP and form a network superfamily. We compare the power networks to small-world, scalefree and random networks to verify the uniqueness of TSP for the power network superfamily. Furthermore, we use the random growth model to generate synthetic power networks to understand the power grids. When the synthetic power networks get denser and their average degree is beyond 2.62, their structures will approach the real-world power networks and fall into the power network superfamily in terms of TSP. Note that the real-world power grids are complex networks whose average degree is 2.8 statistically, close to the threshold value of 2.62 of the synthetic power networks. In other words, the real-world power grid in the superfamily has the least dense network structure as well as the fewest transmission lines.
From the triadic subgraph perspective, power grids have optimized network structures to balance synchronization stability and network redundancy.

(Caption of Figure 9.) In panels (a), (c) and (d), each dark green dot represents a sample network generated by the SW model. The rewiring probability p_re ranges over [0, 2%, 4%, · · · , 98%, 100%], and for each p_re, 100 sample networks are randomly generated. The black curves are obtained by 4th-degree polynomial fitting with R-square = 0.86 in (a), 4th-degree polynomial fitting with R-square = 0.65 in (c), and linear fitting with R-square = 0.98 in (d). Each light green square dot in panel (b) represents the S_1 basin stability, and each red dot represents the S_2 basin stability for a network generated by the SW model. The smoothed color areas represent the underlying probability density of generating networks with a given BS and Z_5; as the color turns darker, the probability density becomes higher. The red area circled by the black contour line has a probability density larger than 0.26, half of the largest probability density.
Statistically, the network redundancy is related to the construction cost. Therefore, the local subgraph structures of complex networks influence the evolution of power grids. In this paper, the structural features based on higher-order connectivity patterns are revealed and explained by the performance of global synchronization for power systems. The unique network structure in the power network superfamily allows power networks to maintain high synchronization stability at a low construction cost.
For most artificial complex systems, the trade-off between performance and cost is always a vital issue. Therefore, a better understanding of how subgraph structures influence the behavior of complex networks can help in designing resilient and stable power systems. In the future, discovering the higher-order organization of complex power networks at the level of small network subgraphs demands more work; it will allow us to understand the functional role of this higher-order organization in power networks and will be instrumental in optimizing complex power networks and making them more robust and stable.
Acknowledgments
Xin Chen acknowledges the funding support from the National Natural Science Foundation of China under grant No. 21773182 and the support of HPC Platform, Xi'an Jiaotong University.
Hao Liu and Xin Chen contributed equally to this work. | 2022-03-28T01:15:23.894Z | 2022-03-25T00:00:00.000 | {
"year": 2022,
"sha1": "3916cd55f9f480bce993b23d924b37350e56085e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2203.13256",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5b6c3e689c1b7512b1d946a0f7eac81f75f84d72",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Physics",
"Computer Science",
"Engineering"
]
} |
256911962 | pes2o/s2orc | v3-fos-license | Increased stem cells delivered using a silk gel/scaffold complex for enhanced bone regeneration
The low in vivo survival rate of scaffold-seeded cells is still a challenge in stem cell-based bone regeneration. This study seeks to use a silk hydrogel to deliver more stem cells into a bone defect area and prolong the viability of these cells after implantation. Rat bone marrow stem cells were mingled with silk hydrogels at the concentrations of 1.0 × 10^5/mL, 1.0 × 10^6/mL and 1.0 × 10^7/mL before gelation, added dropwise to a silk scaffold and applied to a rat calvarial defect. A cell tracing experiment was included to observe the preservation of cell viability and function. The results show that the hydrogel with 1.0 × 10^7/mL stem cells exhibited the best osteogenic effect both in vitro and in vivo. The cell-tracing experiment shows that cells in the 1.0 × 10^7 group still survive and actively participate in new bone formation 8 weeks after implantation. The strategy of pre-mingling stem cells with the hydrogel had the effect of delivering more stem cells for bone engineering while preserving the viability and functions of these cells in vivo.
molecules and large proteins into the gel, which might enhance the survival rate of the inner-seeded BMSCs. After osteogenic induction in vitro, the silk gel/scaffold complex containing 1.0 × 10^7/mL stem cells was implanted into critical-sized calvarial defects in rats to evaluate the bone regeneration effects. The schematic illustration of this strategy is shown in Fig. 1.
Results
Nutrient transportation performance of the silk gel/scaffold complex. This experiment was carried out after the cell-free silk hydrogel was added dropwise to the scaffold (the SEM images and FT-IR spectra of the silk scaffold can be found online as Supplementary Fig. S1 and Supplementary Fig. S2, respectively). The results showed that the alizarin red solution could consistently permeate through the silk gel/scaffold complex. At 1 min and 5 min, the alizarin red solution permeated gradually from every interface to the core of the complex. From gross observation, the alizarin red solution had almost gone through the complex by 20 min, from both the sagittal and coronal angles (Fig. 2a). The silk gel/scaffold complex could also absorb and release large protein molecules, such as bovine serum albumin (BSA), in a short period of time. The OD value of the 1 min group was significantly higher than that of the control group (p < 0.01), which indicates that the BSA was rapidly absorbed by the complex from the first minute. In addition, there were also significant differences between the 1 and 5 min groups and the 5 and 20 min groups (p < 0.01). These results indicate a consistent absorbance of BSA by the silk gel/scaffold complex during the first 20 min. Noticeably, there was no significant difference between the 20 min group and the pure BSA group, suggesting that the complex had absorbed almost all of the BSA within 20 min (Fig. 2b and c).
Cell interactions within the hydrogel. Both microscopy and confocal laser scanning microscopy (CLSM)
showed a denser cell distribution in the 1.0 × 10^7 group compared to the 1.0 × 10^5 group or the 1.0 × 10^6 group (Fig. 3). More importantly, CLSM indicated that both the 1.0 × 10^5 and 1.0 × 10^6 groups had almost no cell-cell interactions, that is, cells were independently dispersed within the hydrogel (Fig. 3a). However, for the 1.0 × 10^7 group, CLSM showed a high rate of cell-cell interaction, where most cells were connected to other cells and formed a network (Fig. 3b). In the 1.0 × 10^7 group, cells were well distributed throughout the entire silk gel and could produce local calcium deposition and mineralization even in the very core of the hydrogel.
Cell proliferation. Both the 1.0 × 10^5 group and the 1.0 × 10^6 group underwent a stable increase in cell quantity from day 1 to day 10. Significant differences were detected between day 4 and day 10 for these two groups (p < 0.01). For the 1.0 × 10^7 group, cell quantity declined after initial seeding, with significant differences detected between day 1 and day 4 (p < 0.01). However, the cell quantity then consistently increased until day 10. Statistically significant differences that indicated this increase were also detected between day 4 and day 7 as well as day 4 and day 10 (p < 0.01). Finally, the cell quantities in the 1.0 × 10^7 group showed no significant differences between day 1 and day 10 (Fig. 4).
Osteogenic potential in vitro. Both the alkaline phosphatase (ALP) activity assay and calcium deposition assay showed significant differences among the three groups. In the ALP activity assay, the 1.0 × 10^7 group appeared to have the highest ALP expression, followed by the 1.0 × 10^6 group and the 1.0 × 10^5 group (Fig. 5a). The calcium deposition in the 1.0 × 10^7 group was 1.6458 ± 1.1770 mg/well, which was significantly higher than that in the 1.0 × 10^6 group (0.2575 ± 0.028 mg/well) and the 1.0 × 10^5 group (0.0143 ± 0.002 mg/well) (p < 0.01).
Figure 1. Schematic illustration of the fabrication protocol of the cell-containing silk gel/scaffold complex. Silk solution was ultrasonicated to initiate the formation of β-sheet structure. Meanwhile, the solution was mixed with osteogenic cells, and the well-mixed solution was added dropwise to a silk scaffold. After gelation, the cell-carrying silk scaffold was ready for implantation to repair rat calvarial defects.
There was also a statistically significant difference between the 1.0 × 10^5 group and the 1.0 × 10^6 group (p < 0.01) (Fig. 5b). The results of the real-time quantitative polymerase chain-reaction (qPCR) assay are presented relative to the value for the 1.0 × 10^5 group. There were statistically significant differences in the expression of both ALP and osteocalcin (OCN) genes between the 1.0 × 10^5 group and the 1.0 × 10^6 group (p < 0.05), and more significant differences between the 1.0 × 10^5 group and the 1.0 × 10^7 group (p < 0.01) (Fig. 5c and d).
Micro-CT. Figure 6a shows the reconstructed image of the newly formed bone in the rat calvarial defect area obtained using Micro-CT from both the apical and antapical views. In contrast to the 1.0 × 10^5 group and the 1.0 × 10^6 group, which exhibited few calcium nodules with a large amount of vacancy in the defect area, the 1.0 × 10^7 group showed significantly higher new bone volume (6.767 ± 0.481 mm^3) and the highest trabecular number (1.098 ± 0.197) (p < 0.01) after implantation in vivo for 8 weeks, where the newly formed calcium nodules grew evenly all over the defect area and connected to each other as a network (Fig. 6a, b and c). There were also statistically significant differences between the 1.0 × 10^5 group and the 1.0 × 10^6 group with respect to both new bone volume (p < 0.01) and trabecular number (p < 0.05).
Histological analysis. Figure 7 shows that, aside from the area occupied by the remnant silk hydrogel, the 1.0 × 10^5 group presented a percentage of new bone area of approximately 6.236 ± 1.172%, which was significantly lower than that for the 1.0 × 10^6 group and the 1.0 × 10^7 group (p < 0.01).
Cell tracing. The CM-Dil labeled BMSCs were still surviving after 8 weeks of implantation in both ossification and non-ossification zones. In the ossification zone, the calcein labeling area (green) indicated the new bone that formed between 6 weeks and 8 weeks after implantation, where the labeled cells were actively participating in this procedure (Fig. 8).
Discussion
Stem-cell-based bone engineering is very promising and opens the possibility of developing cell-carrying materials with optimized abilities to carry as many cells as possible and with prolonged lifespans for exerting certain functions in vivo. The conventional cell seeding process relies on the cells adhering to the scaffold. Although the initial seeded cell quantity used in other studies varied from 1.0 × 10^5 to 5.0 × 10^5, the seeding efficiency was only 50% to 70% because most of the stem cells adhered only at the scaffold's surfaces and easily fell off [31][32][33][34]. Therefore, the actual seeding quantity in vivo is beyond prediction. We see this inadequate and unstable cell seeding quantity as a major challenge, compounded by the even more difficult problem of maintaining the cell quantity for a long period of time after implantation.
Based on the evidence that silk fibroin possesses excellent mechanical properties, fine biocompatibility and a controllable degradation rate, scientists have been incubating various cell lines with silk fibroin in the form of films, fibers or porous scaffolds for bone or soft tissue engineering [35][36][37][38][39] . Among all of those forms made using silk fibroin, we found hydrogel to be an ideal cell carrier because of its high water content, adequate mechanical strength and easily controlled gelation process [27][28][29][30] . More importantly, the manufacturing process for producing silk hydrogel allows the mingling of a determined quantity of cells before gelation, which could ensure the initial cell seeding number and allow for optimization. In this study, we quantitatively seeded selective densities of rat BMSCs into the silk hydrogels. The maintained cell viability in the three experiment groups (Fig. 4) demonstrated the marked cytocompatibility of the silk hydrogel.
We compared the osteogenic potential of these cell-containing silk hydrogels and found that silk gels encapsulated with higher cell quantities tended to have better osteogenic potential both in vitro and in vivo. The 1.0 × 10^7 group exhibited the highest ALP activity, the most calcium deposition and the strongest osteogenic-related gene expression in vitro. Consistent with the in vitro experiment, the 1.0 × 10^7 group showed a clear increase in bone formation in vivo (Figs 6 and 7). Micro-CT showed that the 1.0 × 10^7 group had more and faster calcium deposition from the seeded BMSCs, which led to a prominent performance with respect to both new bone volume and trabecular number in the defect area compared to the other two groups (Fig. 6). The histological analysis also confirmed this result, as Van Gieson's staining for the 1.0 × 10^7 group showed the highest ratio of new bone area (Fig. 7). To further confirm that the rapid bone regeneration was produced by the encapsulated BMSCs, we conducted a cell tracing experiment that showed that the encapsulated stem cells maintained their viability and actively participated in the local bone formation process after implantation (Fig. 8).
This comparison of the three experimental groups verifies that the excellent bone formation was primarily a result of the larger quantity of encapsulated stem cells. Generally, the 1.0 × 10^7 group exhibited more mineralization locations in the defect area, providing more cores for calcium nodule formation. More importantly, the long-term in vivo preservation of the viability and function of the encapsulated stem cells in the 1.0 × 10^7 group guaranteed a continuous bone formation process. It is interesting that the expression of osteogenesis-related genes (ALP and OCN) in the 1.0 × 10^7 group was the strongest among the three groups (Fig. 5), which implies a better osteogenic differentiation potential of single cells when they are encapsulated at a higher density. We speculated that this result may be attributed to the enhanced cell-cell interactions in the 1.0 × 10^7 group (Fig. 3), where this optimization could increase the formation of gap junctions that directly transfer signaling molecules and metabolites between adjacent cells [40][41][42]. In addition, the increased cell quantity may also lead to more cytokines and proteins being secreted into the microenvironment that stimulate cell behavior 43,44.
Conventional cell scaffolds rely on the seeded cells to grow into the center of the defect area, where the long repair process and the insufficient oxygen and nutrients inside the scaffolds make it a challenge to preserve the survival of implanted cells [45][46][47][48]. However, the strategy of mingling stem cells with a silk hydrogel before gelation ensures that the cells are homogenously suspended in the material, which leads to mineralization of the entire defect area and a reduction of the repair time. In addition, the silk hydrogel has a permeability that allows the transfer of both small molecules and proteins (Fig. 2) that could nourish the seeded cells to preserve their viability after implantation. This study emphasized the crucial role of stem cells in rapid bone regeneration while confirming the cell-carrying properties of the silk gel/scaffold complex and its ability to support long-term cell viability in vivo.
The fast and efficient bone-forming property achieved by the stem-cell-carrying silk hydrogel could be applied to multiple shapes of small defects by simply changing the nature or material of the scaffolds. We have now applied this strategy only to small defect areas to confirm the encapsulated cell functions with respect to rapid bone regeneration; verification of its applicability to large defect repair is still needed. Notably, achievements in in vitro osteogenic induction of hydrogel-enwrapped stem cells also hint at the possibility of pre-inducing the encapsulated stem cells into osteogenic progenitor cells to attain calcium deposition and local mineralization before implantation, which might further shorten the time needed for bone regeneration and reduce the risk of in vivo cell necrosis.
Methods
Animals. The animals used in this study were all obtained from the Ninth People's Hospital Animal Center (Shanghai, China) for both the calvarial defect repair experiment and BMSC isolation and culture. All animal experiments were conducted in accordance with the regional Ethics Committee guidelines, with the protocols approved by the Animal Care and Experiment Committee of Ninth People's Hospital.
Rat BMSC isolation and culture. BMSCs were obtained and cultured from 4-week-old male F344 rats, as we previously published 26,49. Briefly, after euthanizing the rats with an overdose of pentobarbital injected intraperitoneally, the femurs were separated with the epiphysis being cut off. The marrow was then quickly rinsed out using Dulbecco's modified Eagle's medium (DMEM; Gibco, USA) containing 10% (v/v) fetal bovine serum (FBS; Gibco, USA). The isolated BMSCs were cultured in Dulbecco's modified Eagle's medium with 10% (v/v) fetal bovine serum. Cells were incubated at 37 °C in an environment containing 5% CO2. Non-adherent cells were removed by changing the medium after 24 h. When the confluence reached 80-90%, the BMSCs were subcultured at a density of 1.0 × 10^5 cells/mL with trypsin-ethylenediamine tetra-acetic acid (EDTA, 0.25% w/v trypsin, 0.02% EDTA). Cells at passage 2-3 were collected and resuspended in DMEM for subsequent cell encapsulation.
Preparation of the materials. Purified silk fibroin stock solutions were prepared at 8.0 wt%, diluted with deionized water to approximately 4.0 wt% and used in the subsequent studies, as previously described 25,26,50,51. The sterilized silk fibroin solution and sterile DMEM powder were blended and sonicated to initiate gelation; approximately 10 min was required for the solution to fully transform into a hydrogel. Before it turned into a gel, a certain volume of cell suspension was added into the silk solution to reach three different final concentrations of 1.0 × 10^5 cells/mL, 1.0 × 10^6 cells/mL and 1.0 × 10^7 cells/mL. To observe the cell conditions inside the silk gel and evaluate their proliferation and osteogenic differentiation abilities in vitro, 20 μL of the mixed solutions was added dropwise to 96-well plates and incubated at 37 °C for 10 min for gelation before conducting in vitro experiments. In addition, 20 μL of the silk gel was added dropwise to a porous silk scaffold (pore sizes 350-420 μm, 5 mm in diameter and 2 mm in thickness) 26 to evaluate the transfusion condition of this silk gel/scaffold complex. For the in vivo rat calvarial repair experiment, different densities of cell-containing silk gels were added dropwise to the silk scaffold for gelation, and then they were incubated in osteogenesis-induction medium for 7 days before in vivo implantation.
Nutrient transportation performance of the silk gel/scaffold complex. After the silk gels were fully gelled in the silk scaffolds, the gel/scaffold complexes were immersed in alizarin red solution and removed at selected time points (1 min, 5 min and 20 min) to observe the transfusion condition of the alizarin red solution from both the coronal and sagittal angles. To further evaluate the ability to transport large protein molecules, we placed the gel/scaffold complexes in a 24-well plate containing 1 mL of distilled water and 20 μL of BSA (0.5 mg/mL) in each well, giving a final BSA concentration of 10 μg/mL in the immersion solution (the control wells contained only 20 μL of distilled water). After immersing the complexes in the plate for different time periods (1 min, 5 min and 20 min), the gel/scaffold complexes were removed and washed with PBS 3 times to remove excess BSA. Each gel/scaffold complex was then placed into 1 mL of distilled water at 4 °C overnight to release the encapsulated BSA (20 μL of 10 μg/mL BSA solution added to 1 mL of distilled water served as the positive control in this study). A mixture of bicinchoninic acid and copper sulfate solution was added into each well as the BCA working solution (Beyotime, Shanghai, China). The plate was then incubated at 37 °C for 30 min. The released BSA was quantified as the optical density (OD) of the solutions at a wavelength of 630 nm using an ELX Ultra microplate reader (BioTek, Winooski, VT). Both transfusion experiments included only cell-free silk gel/scaffold complexes.
Analysis of BMSCs within the silk hydrogel in vitro
Osteogenic potential in vitro. To detect the osteogenic differentiation potential of each study group, we performed an ALP activity assay, a calcium quantification experiment and a qPCR assay. All experiments were performed in triplicate.
For the ALP activity assay, the different groups of silk hydrogels were incubated in osteogenic induction medium for 3 and 7 days. After being fixed in paraformaldehyde for 30 min, the hydrogels were stained with an ALP kit (Beyotime, Shanghai, China) to evaluate the osteogenic potential of the encapsulated cells. Hydrogels with no cells were set as the control in this study.
For the calcium deposition assay, hydrogels were first fixed in neutral formalin at day 21 and then treated with 0.6 N HCl and gently shaken for 24 h to decalcify. The cell lysates were collected and transferred into a 96-well plate, where they were incubated with a chromogenic reagent and calcium assay buffer from the calcium assay kit (Sigma, St. Louis, USA) for 10 min protected from light. The optical density of the mixed solutions was then measured at 575 nm. A standard curve was set up to determine the calcium concentration of each experimental group.
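As a sketch of the standard-curve step only (the kit's actual standard concentrations and readings are not given in the text, so every number below is a hypothetical placeholder), a linear fit of OD at 575 nm against known calcium amounts can be used to interpolate the sample wells:

    import numpy as np

    # Hypothetical calcium standards (mg/well) and their OD575 readings.
    std_conc = np.array([0.0, 0.25, 0.5, 1.0, 2.0])
    std_od = np.array([0.04, 0.14, 0.26, 0.51, 1.02])

    slope, intercept = np.polyfit(std_od, std_conc, 1)    # standard curve: conc = slope*OD + intercept

    sample_od = np.array([0.33, 0.36, 0.31])               # triplicate wells from one group (hypothetical)
    sample_conc = slope * sample_od + intercept
    print(f"{sample_conc.mean():.4f} +/- {sample_conc.std(ddof=1):.4f} mg/well")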
For the real-time quantitative polymerase chain-reaction (qPCR) assay, the total RNA of the cells in the silk gel was extracted with Trizol reagent (TaKaRa, Shiga, Japan) after 7 days of incubation and reverse transcribed into cDNA with a PrimeScript 1st-strand cDNA synthesis kit (TaKaRa, Shiga, Japan). The qPCR reactions were run on a real-time qPCR system (Bio-Rad, Hercules, CA) to evaluate the expression of the ALP and OCN genes, with the housekeeping gene GAPDH used for normalization. The final result was calculated using the comparative delta Ct method. The primers used in this study were commercially synthesized (Sangon Biotech, Shanghai, China), and the sequences are listed in Table 1.
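A minimal sketch of the comparative Ct calculation, assuming the widely used 2^-ddCt form in which each group is first normalized to GAPDH and then expressed relative to the 1.0 × 10^5 calibrator group (the Ct values below are hypothetical):

    import numpy as np

    def fold_change(ct_target, ct_gapdh, ct_target_cal, ct_gapdh_cal):
        """2^-ddCt: target gene expression normalized to GAPDH and
        expressed relative to the calibrator group."""
        d_sample = np.mean(ct_target) - np.mean(ct_gapdh)
        d_calibrator = np.mean(ct_target_cal) - np.mean(ct_gapdh_cal)
        return 2.0 ** -(d_sample - d_calibrator)

    # Hypothetical triplicate Ct values: ALP in the 1.0 x 10^7 group vs the 1.0 x 10^5 calibrator.
    print(fold_change([22.1, 22.3, 22.0], [17.5, 17.6, 17.4],
                      [25.0, 25.2, 24.9], [17.6, 17.5, 17.7]))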
Surgical procedure for the rat calvarial defect model. A rat calvarial defect model was established, as previously described 25 . Briefly, eighteen F344 rats were anesthetized with pentobarbital through an intraperitoneal injection (3.5 mg/100 g). A 5 mm diameter full-thickness calvarial defect was then created on both sides of the rat's skull. Different groups of silk gel/scaffold complexes were pre-incubated in osteogenic medium for 7 days to achieve local calcium deposition in vitro before random placement into the rat calvarial defects.
Micro-CT. After 8 weeks of implantation, the rats were sacrificed by injecting an overdose of pentobarbital.
The specimens were collected and fixed in 10% buffered formaldehyde solution. The specimens were imaged with a desktop Micro-CT system (μCT-80, Scanco Medical, Switzerland) and scanned in high-resolution mode (pixel matrix, 1024 × 1024; voxel size, 20 μm; slice thickness, 20 μm). We used an image analysis software package (Scanco Medical, Switzerland) to reconstruct the 3D images and detect new bone formation. The new bone volume (BV) and trabecular number (Tb.N) were then analyzed, as previously described 25 . The specimens were further stained with Van Gieson's picro fuchsin for histological observation.
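The BV and Tb.N values were obtained with the vendor's analysis package; purely as a hedged illustration of what the bone-volume readout amounts to, the sketch below thresholds a reconstructed voxel stack and converts the voxel count to mm^3 using the 20 μm isotropic voxel size (the threshold and the input array are placeholders, not the Scanco pipeline):

    import numpy as np

    def bone_volume_mm3(stack, threshold, voxel_size_um=20.0):
        """Bone volume: voxels at or above a mineralization threshold times the voxel volume."""
        bone_voxels = np.count_nonzero(stack >= threshold)
        return bone_voxels * (voxel_size_um / 1000.0) ** 3

    # Hypothetical 8-bit reconstruction of the defect region.
    stack = np.random.randint(0, 255, size=(250, 250, 100), dtype=np.uint8)
    print(bone_volume_mm3(stack, threshold=120))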
Histomorphometric observation. After Micro-CT analysis, the specimens were dehydrated in a graded ethanol series from 75% to absolute ethanol and embedded in polymethylmethacrylate (PMMA). The specimens were cut into 150 μm thick sections with a Leica SP1600 saw microtome (Leica, Germany) and further polished to a final thickness of 40 μm. The sections were stained with Van Gieson's picro fuchsin and observed under a confocal laser scanning microscope (CLSM, Leica, Germany). The new bone area was calculated using the Image-Pro Plus software program.
Fluorescence cell tracing experiment. Another 3 rats were included to determine whether the encapsulated osteogenic cells within the silk hydrogel participated in the bone formation process. The cells were labeled with CellTracker CM-DiI (Invitrogen, Carlsbad, CA, USA) and then encapsulated into the silk gel/scaffold complex at a density of 1.0 × 10^7 cells/mL. After incubation in osteogenic medium for 7 days, the complexes were implanted into rat calvarial bone defects. Six weeks after implantation, calcein (20 mg/kg; Sigma, St. Louis, USA) was intraperitoneally injected into the rats to detect new bone formation. At 8 weeks, all rats were euthanized with an overdose of pentobarbital, and the specimens were harvested. After dehydration, the specimens were embedded in PMMA and then cut and polished into 40 μm thick sections. The sections were further stained with DAPI and observed using a fluorescence stereomicroscope (Leica, Wetzlar, Germany).
Table 1. Primers for real-time and reverse transcriptase polymerase chain reaction (columns: Gene, Primer sequence, Product size (bp), Accession number). OCN, osteocalcin; ALP, alkaline phosphatase; F, Forward; R, Reverse.
Statistical analysis. The data are all presented as the mean ± standard deviation. ANOVA with SNK post hoc tests, based on the normal distribution and equal variance assumptions, was used to test for statistically significant differences (p < 0.05; p < 0.01) between the different groups in all studies. Statistical analyses were calculated with the SAS 8.2 statistical software package (Cary, USA). | 2023-02-17T14:41:57.310Z | 2017-05-19T00:00:00.000 | {
"year": 2017,
"sha1": "04ee9fca7cf1ec39b7ba329b4803dea1e87a6d58",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-02053-z.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "04ee9fca7cf1ec39b7ba329b4803dea1e87a6d58",
"s2fieldsofstudy": [
"Medicine",
"Engineering",
"Materials Science"
],
"extfieldsofstudy": []
} |
2913126 | pes2o/s2orc | v3-fos-license | Updates on the genetic variations of norovirus in sporadic gastroenteritis in Chungnam Korea, 2009-2010.
Previously, we explored the epidemic pattern and molecular characterization of noroviruses (NoVs) isolated in Chungnam, Korea in 2008, and the present study extended these observations to 2009 and 2010. In Korea, NoVs showed a seasonal prevalence from late fall to spring and were widely detected in preschool children and people over 60 years of age. The epidemiological pattern of NoV was similar in 2008 and 2010, but the pattern in 2009 was affected by the pandemic influenza A/H1N1 2009 virus. NoV-positive samples were subjected to sequence determination of the capsid gene region, which resolved the isolated NoVs into five GI (2, 6, 7, 9 and 10) and eleven GII genotypes (1, 2, 3, 4, 6, 7, 8, 12, 13, 16 and 17). The most prevalent genotype was GII.4, accounting for 130 of the 211 NoV isolates (61.6%). The prevalent NoV GII.4 strains from these periods were compared with reference strains of the same genotype by phylogenetic analysis. The NoV GII.4 strains were segregated into seven distinct genetic groups, which are supported by high bootstrap values and previously reported clusters. All Korean NoV GII.4 strains belonged to either cluster VI or cluster VII. The divergence of nucleotide sequences within clusters VI and VII was > 3.9% and > 3.5%, respectively. The "Chungnam(06-117)/2010" strain, which was isolated in June 2010, was a variant that did not belong to cluster VI or VII and showed 5.8-8.2% and 6.2-8.1% nucleotide divergence from clusters VI and VII, respectively.
Background
The noroviruses (NoVs) are classified in the genus Norovirus within the family Caliciviridae and are now considered the most important cause of outbreaks and sporadic cases of non-bacterial gastroenteritis worldwide [1]. Patients who are infected by NoVs usually show gastrointestinal manifestations including diarrhea, vomiting, abdominal pain, and low grade fever, and almost all of the infected cases resolve spontaneously [2]. NoV strains exhibit wide genetic diversity, and both genogroup GI and GII and different genotypes within the genogroups cocirculate in a given geographical region at the same time [3].
Given the high variability of NoV seasonality, factors other than environmental ones seem to govern the transmission pattern of the disease. Immunity to NoV infection and disease is generally temporary, and heterotypic protection is limited [9]. Additionally, NoVs are highly infectious. Due to these combined factors, nearly all children will have had at least one NoV infection by their fifth birthday. However, infections and disease occur throughout life as immunity wanes and new antigenic types are encountered [10]. Indeed, NoVs are constantly evolving, with the most common group of viruses (GII.4) under positive selection pressure, whereby immune-escaping variants are selected for [11]. New variants with antigenic changes may escape population immunity. The emergence of such variants has been shown to be associated with substantial increases in cases worldwide [12,13]. Therefore, understanding the molecular epidemiology of NoV is very important. The aim of this study was to determine the epidemiology of NoV infection and the molecular characteristics of Korean NoV GII.4 isolates.
Stool specimens
A total of 3171 stool specimens were collected from patients with acute gastroenteritis in Chungnam, Korea from 2009 to 2010. The fecal specimens were diluted with phosphate buffered saline to 10% suspensions and clarified by centrifugation at 8,000 × g for 15 min.
RNA extraction and Detection of NoV in clinical samples
The viral RNA was extracted from the faecal supernatant using a Viral Nucleic Acid Prep Kit according to the manufacturer's instructions (Greenmate Biotech, Seoul, Korea). The extracted RNA was dissolved in 50 μL of nuclease-free water and stored at -80°C until use for real-time and semi-nested RT-PCR. Real-time RT-PCR for NoV detection was conducted using an AccuPower Norovirus Real-Time RT-PCR Kit (Bioneer, Daejeon, Korea) in accordance with the manufacturer's instructions; the 50 μL reaction mixtures contained 10 μL of RNA and each primer at a final concentration of 0.3 μM [14]. Reactions were performed using an Exicycler™ 96 (Bioneer, Daejeon, Korea) under the following conditions: initial hold at 45°C for 15 min and 95°C for 5 min, followed by 45 cycles at 95°C for 5 sec, 55°C for 5 sec, and 25°C for 1 min. A sample with a threshold cycle value < 35 and a typical sigmoid curve was defined as positive.
Nucleotide sequencing and molecular typing
In an effort to identify NoV genotypes, direct sequencing was performed on all samples that tested positive for NoV by the real-time RT-PCR assay. For sequencing, semi-nested RT-PCR was conducted as described previously [12,15]. Products from the semi-nested PCR were purified using a QIAquick PCR purification kit (Qiagen, Hilden, Germany). The purified DNA was added to a reaction mixture containing 2 μL of BigDye Terminator reaction mix (ABI Prism BigDye Terminator cycle sequencing kit; Perkin-Elmer/Applied Biosystems, Waltham, CA, USA) and 2 pmol each of the GI-R1M and GII-R1M primers. Sequencing reactions were subjected to an initial denaturation at 96°C for 1 min, followed by 25 cycles of 96°C for 10 sec, 50°C for 5 sec, and 60°C for 4 min in a thermal cycler (GeneAmp PCR System 2700; Perkin-Elmer/Applied Biosystems, Waltham, CA, USA). The products were purified by precipitation with 100% cold ethanol and 3 M sodium acetate (pH 5.8) before being loaded onto an automated analyzer (3730XL DNA Analyzer; Perkin-Elmer/Applied Biosystems, Waltham, CA, USA). A BLAST search of GenBank sequences was conducted to determine the molecular type of each isolate, which was defined as the genotype scored as having the most nucleotides in common with the query sequence [16].
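As a hedged sketch of the genotyping step (the file name, and the reading of a genotype from the best hit's description, are assumptions for illustration; the study's own BLAST settings are not stated), Biopython's remote BLAST interface can be used to find, for each capsid sequence, the GenBank reference sharing the most nucleotides with the query:

    from Bio import SeqIO
    from Bio.Blast import NCBIWWW, NCBIXML

    # Hypothetical FASTA of partial capsid sequences from the semi-nested PCR products.
    for rec in SeqIO.parse("nov_capsid_positive.fasta", "fasta"):
        handle = NCBIWWW.qblast("blastn", "nt", rec.format("fasta"))  # remote BLAST against GenBank nt
        result = NCBIXML.read(handle)
        hits = [(hsp.identities, aln.title)
                for aln in result.alignments for hsp in aln.hsps]
        if hits:
            identities, title = max(hits)      # hit with the most identical nucleotides to the query
            print(rec.id, identities, title)   # genotype read off the best-matching reference record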
Phylogenetic analysis
Nucleotide sequences of the 31 Korean candidate NoV isolates were compared with the 20 reference sequences using Clustal W v. 2.1 [17]. Phylogenetic relationships among the ORF2 sequences of the virus isolates were determined using MEGA software v. 5.05. The Maximum Composite Likelihood model was used for substitutions, while the neighbor-joining method was used to reconstruct the phylogenetic tree [18,19]. The reliability of the phylogenetic tree was determined by bootstrap re-sampling with 1,000 replicates.
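The tree itself was built in MEGA; a rough Biopython equivalent is sketched below under stated assumptions: the alignment file name is hypothetical, a simple identity (p-)distance stands in for MEGA's Maximum Composite Likelihood model, and bootstrap support is attached from 1,000 re-sampled replicates.

    from Bio import AlignIO, Phylo
    from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
    from Bio.Phylo.Consensus import bootstrap_trees, get_support

    # Hypothetical alignment of the Korean ORF2 sequences plus the 20 GII.4 reference strains.
    aln = AlignIO.read("gii4_orf2_aligned.fasta", "fasta")

    calculator = DistanceCalculator("identity")             # p-distance stand-in for MCL
    constructor = DistanceTreeConstructor(calculator, "nj") # neighbor-joining
    nj_tree = constructor.build_tree(aln)

    replicates = list(bootstrap_trees(aln, 1000, constructor))  # 1,000 bootstrap re-samplings
    Phylo.draw_ascii(get_support(nj_tree, replicates))          # branch labels = bootstrap support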
Nucleotide sequence accession numbers
The NoV candidate sequences were deposited in the GenBank sequence database (accession numbers JN688175 to JN688204).
Epidemiological features and genotyping of NoVs
We previously reported the results of an epidemiological study of NoVs in Chungnam, Korea in 2008 [15]. The current study extended these findings by investigating the difference in the epidemic patterns in 2009 and 2010. In this period, 3171 samples obtained from patients with acute gastroenteritis were screened for NoVs by real-time RT-PCR, and NoV-positive samples were analyzed by sequencing of the capsid region. In total, 211 NoVs were detected: 64 from 1479 cases (4.3%) in 2009, and 147 from 1692 cases (8.7%) in 2010. Of the 211 cases, 5 (2.4%) were identified as GI genogroup and 206 (97.6%) as GII genogroup, which were further resolved into 5 GI and 11 GII genotypes, respectively. In decreasing order of abundance, these were: GII.4 (n = 130, 64.0%), GII.3 (n = 24, 11.8%), GII.8 (n = 16, 7.9%), GII.1 (n = 10, 4.9%), GII.2 and GII.7 (n = 7, 3.4%), and GII.12 and GII.16 (n = 3, 1.5%). The other thirteen genotypes (GII.6, GII.13, GII.17 and all GIs) were responsible for the remaining 1.0% of cases (Table 1). The highest occurrence of the GII.4 genotype has been reported in many recent surveillance studies of NoV epidemics throughout the world [20], which is similar to our observation with Korean NoV isolates. The temporal distribution of the NoV epidemic in Chungnam, Korea was seasonal, with most cases occurring during the winter from November to April (Figure 1). The NoV detection rate by age group was 19.5%, 4.7%, and 15.4% in preschool children (under 5 years of age) and 12.3%, 4.3%, and 8.7% in older adults (over 60 years of age) from 2008 to 2010, respectively (Table 2). Of the NoV-positive samples, 122 were from males and 89 from females, giving a male-to-female ratio of approximately 1.37:1.
Phylogenetic analysis of GII.4
The partial nucleotide sequences of ORF2 from 30 randomly selected NoV GII.4 strains obtained from patients with acute gastroenteritis (from 2008 to 2010) were used to construct a phylogenetic tree with 20 reference strains of the same genotype extracted from the GenBank database. The NoV GII.4 strains were segregated into seven distinct genetic groups, which were supported by high bootstrap values and previously reported clusters, including (I) the CHDC cluster (1970s) [20] and (II) the Camberwell cluster (1987-1995). The "Chungnam(06-117)/2010 (JN688204)" strain, which was isolated in June 2010, was a variant that did not belong to cluster VI and showed 5.8-8.2% nucleotide divergence from cluster VI (Figure 2).
Discussion
Global outbreaks of acute gastroenteritis caused by NoVs have been frequently reported since the late 1990s, and NoV has been the etiological agent in many sporadic cases of gastroenteritis in Korea. We previously reported the epidemic occurrence of NoVs in Chungnam, Korea in 2008 [15]. The current study extended these findings by investigating the difference in the epidemic pattern in 2009 and 2010. Several studies in the past have demonstrated that NoV-associated gastroenteritis occurs mainly from late fall to spring [25][26][27][28]. Overall, the results of the present study are in accordance with this general pattern, except that the NoV epidemic in Chungnam, Korea was temporarily reduced in January during 2008 to 2010. The NoV detection rate by age group was 19.5%, 4.7%, and 15.4% in preschool children (under 5 years of age) and 12.3%, 4.3%, and 8.7% in older adults (over 60 years of age) from 2008 to 2010, respectively. Generally, the patterns of age distribution in 2008 and 2010 were similar, but that of 2009 was different. The reason was considered to be aggressive hand-washing and the reduction of social activities due to the pandemic H1N1 influenza in 2009. This higher occurrence of NoV infection in young children and old people, who have weaker immunity than healthy adults, has been commonly observed. Immunity to NoV infection is temporary (between 2 and 6 months) and incomplete [9], and, unlike with other viruses, NoV infection frequently occurs and leads to symptoms even in adult groups. In the present study, 5 GI and 11 GII genotypes were identified. The highest prevalence of GII.4 NoV (61.6%) is consistent with recent clinical molecular epidemiological studies [20,24,29]. The NoV GII.4 strains evolved and spread in a manner similar to that of influenza A virus, with a rapid global spread of emerging variants [25]. During the last decade, most epidemics of NoV infection have been associated with the emergence of several novel GII.4 variants: CHDC, Camberwell, Grimsby, Farmington Hills, Hunter, Sakai, and more recently 2008-Korea_a and b [15,[20][21][22][23][24][25][26][27][28][29]. In the phylogenetic and diversity analysis, all Korean isolates analyzed in this study fell into either cluster VI or VII, which had previously been defined as representative of Korean types of NoV. The divergence of nucleotide sequences of Korean isolates within clusters VI and VII was > 3.9% and > 3.5%, respectively. The "Chungnam(06-117)/2010 (JN688204)" strain was a variant that did not belong to cluster VI or VII and showed 5.8-8.2% and 6.2-8.1% nucleotide divergence with clusters VI and VII, respectively. RNA viruses have a high mutation rate in general, and new variants of GII.4 can emerge quickly [13,15,30]. In this study, we analyzed the epidemiological distribution of NoVs from acute gastroenteritis patients in Chungnam, Korea from 2009 to 2010. It is suggested that the results of this study might reflect national trends of NoV epidemics in Korea, particularly over recent years. Molecular characterization of the Chungnam isolates also revealed patterns of variation that may be useful in future studies. | 2016-05-12T22:15:10.714Z | 2012-01-24T00:00:00.000 | {
"year": 2012,
"sha1": "e0d948de3a47bbac176344dded37e9f28e9fa04f",
"oa_license": "CCBY",
"oa_url": "https://virologyj.biomedcentral.com/track/pdf/10.1186/1743-422X-9-29",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e0d948de3a47bbac176344dded37e9f28e9fa04f",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
20838564 | pes2o/s2orc | v3-fos-license | Severe acute respiratory syndrome coronavirus protein 7a interacts with hSGT
Severe acute respiratory syndrome coronavirus (SARS-CoV) 7a is an accessory protein with no known homologues. In this study, we report the interaction of SARS-CoV 7a with the small glutamine-rich tetratricopeptide repeat-containing protein (SGT). The SARS-CoV 7a and human SGT interaction was identified using a two-hybrid system screen and confirmed with interaction screens in cell culture and cellular co-localization studies. The SGT domain of interaction was mapped by deletion mutant analysis, and the results indicated that tetratricopeptide repeat 2 (aa 125-158) was essential for the interaction. We also showed that 7a interacted with the SARS-CoV structural proteins M (membrane) and E (envelope), which have been shown to be essential for virus-like particle formation. Taken together, our results, coupled with data from studies of the interaction between SGT and HIV-1 vpu, indicated that SGT could be involved in the life-cycle, and possibly the assembly, of SARS-CoV.
Coronaviruses are members of the Coronaviridae family in the order Nidovirales. This family of viruses contains genomes of approximately 30 kb that include non-structural (pp1ab) and structural proteins (spike [S], envelope [E], membrane [M], and nucleocapsid [N]), making them the largest known RNA viruses. Interspersed among the structural proteins are group-specific proteins that differ in location and composition between the three coronavirus groups. The group-specific genes are not well characterized and the vast majority has, as yet, no known function. Initial research into the functions of these genes has shown that they are non-essential and dispensable for virus growth in cell culture [1]. However, more recent studies have shown that the accessory genes are required for in vivo infection in the natural host [2][3][4].
Severe acute respiratory syndrome virus (SARS-CoV) encodes for eight potential open reading frames (ORFs), i.e., ORF 3a, 3b, 6, 7a, 7b, 8a, 8b, and 9b, with no known homologues [5][6][7]. Of these, SARS 3a, 3b, 6, 7a, and 9b have been detected in SARS-CoV infected cells, as well as in clinical samples [8], indicating possible in vivo functions. SARS-CoV 7a (previously designated U122 in [9] and X4 in [10]) is the first ORF of subgenomic RNA 7 [6] and has been shown to be expressed in SARS-CoV infected Vero E6 cells [9,11]. It has been shown to localize to the Golgi apparatus and the endoplasmic reticulum (ER) and is recycled between the ER and Golgi complex via the intermediate compartments [9][10][11], where coronaviruses are known to bud [12]. Additionally, 7a has been shown to interact specifically with another unique SARS-CoV protein encoded by ORF3a (also called U274; [13]), which has been reported to be a novel SARS-CoV minor structural protein [14,15]. Interestingly, overexpression of 7a results in caspase-dependent apoptosis in Vero E6, as well as in cells from the lung, kidney, and liver. Therefore, based on the clinical observation of apoptosis in tissues from different organs, it appears as though 7a contributes to apoptosis during SARS-CoV infection and possibly to the pathogenesis of SARS-CoV [16]. Recently, the structure of the N-terminal domain of 7a was resolved and reported to be similar in fold and topology to members of the Ig superfamily [11]. Taken together, results indicate that 7a could play an important role in the viral infection cycle, but the exact biological function(s) of 7a is yet to be determined.
Identification of host proteins interacting with SARS-CoV 7a could be vital in elucidating its possible functions. It could also provide crucial insight into the biology and pathogenicity of SARS-CoV. This is the first report of a cellular protein that interacts with SARS-CoV 7a. We used a yeast two-hybrid system to screen a B-lymphocyte cDNA expression library for cellular proteins capable of interacting with SARS-CoV 7a. The principal cDNA identified from the screen encoded an approximately 37.1 kDa peptide identified as the human small glutamine-rich tetratricopeptide repeat-containing protein (hSGT). Our results indicated that 7a interacted with both human SGT and African Green monkey SGT (mSGT) from Vero E6 cells. The conceptual amino acid sequence of mSGT was determined and compared to the hSGT sequence; the amino acid (aa) sequence identity was found to be >99%. Protein-protein interaction was confirmed by co-immunoprecipitation and immunofluorescent staining in Vero E6 cells, while the SGT domains involved in the interaction were mapped by deletion mutant analysis. Tetratricopeptide repeat (TPR) 2 of hSGT (aa 125-158) was shown to be crucial for this interaction. The biological significance of the interaction between SARS-CoV 7a and SGT needs to be elucidated.
Materials and methods
Two-hybrid system library screen. The yeast reporter strain AH109 [GAL4 2H-3] (Clontech) was used for the two-hybrid selection. Plasmid pAS2-7aΔ96-122 (Table 1) was used as bait and a pACT-cDNA library (human lymphocyte MATCHMAKER, Clontech) was used as the source of prey genes. Yeast cells were grown on YPD or on synthetic minimal medium (0.67% yeast nitrogen base, the appropriate auxotrophic supplements, 2% agar [for plates]) supplemented with 2% dextrose. Yeast was transformed with appropriate plasmids by the lithium acetate method and transformants were selected on synthetic minimal medium. The bait plasmid and the pB42AD cDNA library were introduced into the yeast strain AH109 [GAL4 2H-3]. Two-hybrid screen and interaction assays were performed essentially as described in the protocol (Clontech) in the presence of 2% galactose and 80 mg of 5-bromo-4-chloro-3-indolyl-D-galactopyranoside per liter. Prey plasmids were selected from yeast colonies giving a positive signal according to the manufacturer's protocol. False positives were eliminated by re-transforming the host AH109 [GAL4 2H-3] strain with the pACT-cDNA library plus bait plasmid. Additionally, the pACT-cDNA was transfected in yeast strain PJ69-2A to check for autoactivation. The positive clones that contained cDNAs encoding 7a-interacting proteins were sequenced and analyzed using BLAST.
Mammalian cell lines and DNA constructs. African Green monkey kidney epithelial (Vero E6) cells (American Type Culture Collection, Manassas, USA) were maintained as described previously [9]. A cDNA clone expressing full-length 7a-HA was prepared as described previously [13]. A 7a-myc construct was prepared as in [13], but instead the amplicon was cloned into pXJ40-3'myc; all 7a proteins were epitope-tagged at the C-terminus. Full-length hSGT cDNA, as well as truncations thereof, tagged with an N-terminal flag epitope for expression in mammalian cells, is summarized in Table 1. All cDNA sequences were confirmed by sequencing.
Antibodies used. Glutathione S-transferase-hSGT (GST-hSGT) fusion protein was constructed to raise SGT-specific antibodies in mice and rabbits. The full-length hSGT was digested from pXJ40-flag-hSGT with BamHI and XhoI and cloned into the compatible sites of pGEX-4T-1. Fusion proteins were expressed in BL-21 Escherichia coli by induction with isopropyl-1-thio-L-D-galactopyranoside (IPTG) at 37°C for 3 h. Subsequently, GST-hSGT was purified using Glutathione Sepharose beads (Pharmacia) and eluted with excess glutathione. Purified GST-hSGT was then used to raise polyclonal antibodies in mice and rabbits, as described in [18]. Polyclonal (Santa Cruz) and monoclonal anti-HA (Roche) and anti-flag (Sigma) antibodies were used according to the manufacturer's instructions. All procedures on animals were done in accordance with the regulations of the Animal Research Ethics Committee, Singapore.
Transient expression and Western blotting. Vero E6 cells were transfected with Lipofectamine 2000 reagent (Invitrogen) according to the manufacturer's protocols. Unless stated otherwise, 1 μg of plasmid cDNA was used for transfection into Vero E6 cells in transient expression studies; full-length flag-hSGT was used at 0.25 μg. Western blotting was done as described in [13].
Immunoprecipitation. Cell lysates were extracted from transfected Vero E6 cells as described above. Typically, 150 μg of whole cell lysates was incubated with either rabbit anti-flag or rabbit anti-SGT antibody conjugated to Protein A-agarose beads (Roche) for 16 h at 4°C with end-over-end mixing. Following incubation, the beads were collected and complexes were washed three times with IP buffer. The bound proteins were eluted by boiling in SDS sample buffer and Western blotted as discussed above.
Sequencing of the African Green monkey kidney epithelial SGT. Total cellular RNA was extracted from Vero E6 cells using the RNeasy Mini Kit (Qiagen) according to the manufacturer's instructions. First-strand cDNA was prepared from 1.0 μg of total RNA using the SuperScript II RNase Reverse Transcriptase kit (Invitrogen). Subsequently, a 1:10 dilution of the first-strand cDNA was used in a PCR according to standard protocols. The primary nucleotide sequence of African Green monkey kidney epithelial SGT (mSGT) was determined by automated sequencing and compared to the hSGT sequence (NCBI Accession No. NP_003012) using CLUSTAL X [19]. The conceptual amino acid sequence of mSGT was compared to hSGT and comparisons were visualized using GENEDOC software [20].
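For the identity figure itself, a minimal sketch (with dummy placeholder sequences; the real comparison used the full CLUSTAL X alignment of hSGT and mSGT) simply counts matching positions over the non-gap columns of a pre-aligned pair:

    def percent_identity(aligned_a, aligned_b):
        """Percent identity of two pre-aligned, equal-length sequences; gap columns are skipped."""
        pairs = [(a, b) for a, b in zip(aligned_a, aligned_b) if a != "-" and b != "-"]
        matches = sum(a == b for a, b in pairs)
        return 100.0 * matches / len(pairs)

    # Dummy aligned fragments, for illustration only.
    print(percent_identity("MSTQ-APRELAV", "MSTQKAPRDLAV"))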
Immunofluorescence. Transiently transfected Vero E6 cells were grown on coverslips. At about 16 h post-transfection, the medium was removed and the coverslips were fixed in methanol at -20°C. After 5 min, the coverslips were lifted out and completely air-dried. Fixed cells were incubated with the primary antibody combination of mouse anti-HA and rabbit anti-flag at room temperature for 1 h. Mouse anti-HA and rabbit anti-flag antibodies were used at a dilution of 1:200. Following washing, cells were incubated with the secondary antibody combination of FITC-conjugated goat anti-mouse and Rh-conjugated anti-rabbit antibodies at room temperature for 1 h (Santa Cruz Biochemicals, USA). Following extensive washing, the coverslips were mounted on glass slides and viewed.
Identification of cellular proteins interacting with SARS-CoV 7a
Biological processes are dependent on the direct physical interaction between different proteins. Therefore, the identification of host proteins that interact with viral accessory proteins could help elucidate possible functions of these unique viral proteins. In this study we used a yeast two-hybrid screen to identify host proteins that interacted with 7a. Truncated 7a protein (aa 1-96), excluding the transmembrane-containing C-terminus to improve the solubility of the expressed protein, was used as bait, and 10 positive interacting candidates were identified. Following co-transformation and an autoactivation check to confirm the positive phenotypes, five clones were retained. The positive cDNA clones were sequenced and analyzed using BLAST; four of the positive clones contained complete ORFs that showed nucleotide sequence identity to human (h) SGT. The hSGT identified in our screens encodes a potential ORF of 313 amino acids (aa) with a predicted protein molecular mass of approximately 37.1 kDa and contains three 34 aa tetratricopeptide repeat (TPR) motifs.
TPR motifs were first identified as protein interaction modules in cell division cycle proteins in yeast [21,22]. They have now been shown to be ubiquitous and present in a number of functionally distinct polypeptides from a variety of different species. Different protein-protein interactions are mediated by these motifs, and the majority of TPR motif-containing proteins have been shown to be involved in processes as diverse as cell cycle control, transcription and splicing events, protein transport and protein folding, to name a few [23]. The TPR-containing protein hSGT was first identified and described as a cellular binding partner for the non-structural (NS) protein of the autonomous parvovirus H-1. Interestingly, both H-1 virus infection and transient expression of the NS protein result in modification (most likely phosphorylation) of hSGT [24]. A subsequent study showed that hSGT interacts with the HIV-1 Vpu and Gag proteins, with Callahan and co-workers postulating that hSGT plays a role in HIV-1 virus assembly or release [25]. Other binding partners of SGT include the growth hormone receptor [26], myostatin [27], heat shock cognate protein [28], and heat shock protein [29], and it has been speculated that SGT could also have a cochaperone function. Recently, it has been shown that SGT is present throughout the cell cycle and that depletion thereof leads to an increase in the mitotic index [30]. On the contrary, Wu and colleagues report that SGT is a pro-apoptotic factor and that knockdown of SGT expression in a hepatocarcinoma cell line protects against apoptotic stimuli [31]. It is clear that SGT has diverse functions within the different cell types and possibly plays a role in the life cycle of at least two human viruses.
SGT interacts with SARS-CoV 7a in Vero E6 cells
The mouse anti-SGT antibody specifically detected endogenous SGT from Vero E6 cells, as well as transfected flag-hSGT (Fig. 1A), and the polyclonal rabbit SGT antibody specifically immunoprecipitated endogenous Vero E6 SGT (Fig. 1B). Co-immunoprecipitation studies were used to confirm the interaction between SGT and 7a-HA in Vero E6 cells (Fig. 1B). Cells were co-transfected with pXJ40-flag-hSGT and pXJ40-7aHA, as described elsewhere. For 7a-HA interaction with endogenous Vero E6 SGT, only pXJ40-7aHA was transfected. Total protein extracts were immunoprecipitated with either rabbit anti-flag (lane 4) or rabbit anti-hSGT (lane 5) antibody conjugated to Protein A-agarose beads and Western blotted; an unrelated antibody (rabbit anti-myc, lane 6) was used as a negative antibody control. Western blots and IP blots were detected with mouse anti-SGT and mouse anti-HA antibodies (Fig. 1B). SARS-CoV 7a-HA co-immunoprecipitated with both flag-hSGT (lane 4) and endogenous SGT from Vero E6 cells (lane 5). 7a-HA was not detected when the unrelated antibody (lane 6) was used for immunoprecipitation. Our results showed that 7a-HA interacted specifically with SGT from both human and monkey cells.
To explain the interaction of 7a with both hSGT as well as African Green monkey SGT (mSGT), we determined the nucleotide sequence of mSGT using cDNA from Vero E6 cells. The deduced primary amino acid sequence was compared to hSGT using CLUSTAL X [19] and visualized with GENEDOC [20]. mSGT showed >96% nucleotide and >99% aa sequence identity with hSGT (Fig. 2). This high sequence identity could explain why 7a-HA interacted with SGT from two distinct organisms.
SARS-CoV 7a co-localizes with flag-hSGT
The sub-cellular localization of 7a-HA and flag-hSGT in Vero E6 cells was studied. Vero E6 cells were transfected with pXJ40-flag-hSGT and pXJ40-7aHA (Fig. 3). At 16 h post-transfection, cells were fixed with methanol and stained with both mouse anti-HA and rabbit anti-flag antibodies. As a negative control untransfected Vero E6 cells were treated in the same way. Whereas 7a-HA showed a perinuclear localization (Fig. 3, middle panel), flag-hSGT was distributed to both the nucleus and cytoplasm of transfected cells (left panel) and untransfected cells did not stain (not shown) with the antibodies. The cellular distribution of SGT is consistent with a report by Cziepluch and coworkers [24] who found that untagged rat SGT is detectable in cytoplasm and nucleus of FREJ4 cells. Thus, 7a-HA was found to partially co-localize with flag-hSGT in Vero E6 cells (right panel). This partial co-localization of flag-hSGT and 7a-HA indicated that SARS-CoV 7a could interact with SGT in Vero E6 cells.
hSGT TPR2 is essential for the interaction with 7a
The human small glutamine-rich tetratricopeptide repeat-containing protein contains three 34 aa TPR motifs that have been shown to mediate protein-protein interactions. In fact, Liou and Wang [32] recently investigated the functional importance of the different regions of SGT. They confirmed previous reports that the N-terminus of hSGT (aa 1-90) is responsible for self-dimerization of the protein and that the TPR domain (aa 91-192) is important for interaction with different proteins. Furthermore, in in vitro studies the glutamine-rich C-terminal portion (aa 193-313) could interact with short peptide segments consisting of consecutive non-polar amino acids. Thus, to determine whether the TPR motifs played a role in the interaction between hSGT and SARS-CoV 7a, four hSGT deletion mutants were created (Fig. 4A). Binding of the hSGT mutants to 7a-HA was studied using immunoprecipitation assays in Vero E6 cells. Results showed that full-length flag-hSGT (aa 1-313), flag-hSGTΔC (aa ...), and flag-hSGTΔC3 (aa 1-158) interacted with 7a-HA in Vero E6 cells. On the other hand, flag-hSGTΔN1-2 (aa 159-313) and flag-hSGTΔC3-2 (aa 1-125) did not interact with 7a-HA (Fig. 4B). Also, the negative control flag-GST did not interact with 7a-HA. Since only full-length hSGT and SGT mutants that contain TPR2 interacted with 7a, our results showed that TPR2 (aa 125-158) was essential for the interaction.
7a-HA interacted with SARS-CoV M and E
7a-HA has previously been shown to interact with the SARS-CoV minor structural protein 3a [13]. Co-immunoprecipitation studies in Vero E6 cells were used to determine whether 7a-myc also interacted with the SARS-CoV structural proteins N (nucleocapsid), M, and E (Fig. 5). In this study, the HA-tagged proteins HA-N, M-HA, and E-HA were used as previously described in [13]. Whereas the N protein is responsible for packaging the RNA into the nucleocapsid, the M and E proteins are responsible for virus assembly. In fact, the co-expression of M and E in a baculovirus expression system was sufficient for the assembly of virus-like particles [33]. Results showed that 7a-myc interacted with the M and E proteins (Fig. 5).
The interaction of SGT with the HIV-1 accessory protein vpu and major structural protein gag has previously been reported [25]. The accessory protein vpu has been shown to be essential in regulating viral particle release and viral load [34]. Interestingly, overexpression of SGT and its subsequent association with the vpu protein reduces the titre of virus released from HIV-1 infected cells [25]. The authors also reported that the in vivo interaction between gag and SGT was abrogated when vpu was overexpressed in cells, indicating an intricate relationship between SGT, the accessory protein and the major structural protein. In this study, we reported the interaction of SARS-CoV 7a with hSGT. To our knowledge, this is the first report of an interaction between a SARS-CoV accessory protein and a cellular protein. The interaction was confirmed by co-immunoprecipitation and immunofluorescence studies in Vero E6 cells. We concluded that TPR2 of hSGT (aa 125-158) was essential for the interaction between SGT and SARS-CoV 7a. We also showed that 7a interacted with SARS-CoV M and E. Based on recent reports, it is clear that the SARS-CoV accessory proteins are probably involved in various viral processes in vivo, including pathogenicity and infectivity. In showing that 7a interacted with SGT, a protein with many diverse and essential functions, our findings provided further evidence in support of this hypothesis. Our data raised the possibility that the interaction between SGT and 7a, and the latter's interaction with M and E, which have been shown to be sufficient for VLP formation, could play a role in virus assembly or release from the cell. Understanding the biological significance of the interaction between SGT and 7a could possibly lead to the discovery of novel therapeutics in treating SARS-CoV infection. | 2018-04-03T05:34:39.363Z | 2006-03-24T00:00:00.000 | {
"year": 2006,
"sha1": "64b0204aa95cefe54033177243c7544b6e5dea22",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.bbrc.2006.03.091",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "1e23bde4e435cbcb5f23f6b1dc2f6687f1c3b221",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
27859108 | pes2o/s2orc | v3-fos-license | The economic cost of low back pain in Sweden in 2001.
Background Low back pain (LBP) is a common cause of lost work days and disability. In 2001, expenditure for back pain represented 11% of the total costs for short-term sick leave in Sweden, and about 13% of all early retirement pensions were granted for back problems, of which LBP is the most important symptom. The magnitude of LBP as a health problem justifies a closer look at its burden of illness to society. Materials and methods We assessed the costs of LBP to society in Sweden in 2001. The study was conducted in a cost-of-illness framework, measuring both the direct costs of providing health care to LBP patients, and the indirect costs as the value of the production that is lost because people are too ill to work. The costs were estimated by a prevalence and top-down approach. Results The total cost of LBP was 1 860 million EUR in Sweden in 2001. The indirect costs due to lost productivity accounted for 84% of the total cost. Interpretation The cost of illness due to low back pain was substantial, but does not appear to have risen during the last 10-15 years.
In 2001, expenditure for back pain represented 11% of the total costs for short-term sick leave in Sweden. About 13% of all early retirement pensions granted were related to back problems, of which low back pain is the most important group (RFV 2002a, b). Since low back pain is common and difficult to treat effectively, it is a condition that leads to long-term absence and, consequently, a high economic burden to society (Maetzel and Li 2002). About 15-30% of the population suffers from low back pain at any given point in time (point prevalence). The one-month prevalence is 19-43%, and the lifetime prevalence is 60-70% (SBU 2000).
Low back pain prevents normal activity and affects the capacity to work. For society, it means lost work days; for the individual, it means both lower income and a reduced quality of life as a result of pain and immobility. The health-related quality of life in patients with low back pain in Sweden is lower than for patients with conditions such as diabetes, angina pectoris, asthma, and even neck and shoulder pain (Burström et al. 2001). Most cases of low back pain are not severe and disappear within a few days to a few weeks, but for some the problems may become recurrent or chronic. Back pain is the most common chronic disease for those under 65 years of age (Waddell 1996, Andersson 1999). In Sweden, there are no national statistics on reduced work capacity as a result of low back pain, but some local epidemiological studies have been performed in Göteborg (SBU 2000). In this study, we estimated the total cost of low back pain in Sweden in 2001.
Cost-of-illness methodology
This study was performed according to traditional cost-of-illness methodology (for surveys, see Henriksson 2001, or Hodgson and Meiners 1982). We used the top-down approach to cost estimation, which means that the total national costs for illnesses were partitioned between different diseases according to the frequencies of different diagnoses. In the bottom-up approach, which can be used either as an alternative or as a complement, data are collected directly from a sample of patients during or after medical visits, and then the figures from the sample are extrapolated to represent the whole population by using national prevalence figures.
The advantages of using the top-down approach are that no extrapolation is needed, and that it avoids the risk of double counting. The disadvantages compared to the bottom-up approach are that diagnoses may be underreported or misreported, and that important cost items are missing from the national illness registers. For example, costs for social services or unpaid home help are unaccounted for if a pure top-down approach is used. The value of household production lost as a consequence of disease is also missing from a top-down approach to cost-of-illness studies.
Costs
We chose a societal perspective, which implies that all costs, whether incurred by individuals, employers, or the government, are taken into account. On the cost side, both direct costs for medical visits, hospitalization and pharmaceuticals, and indirect costs for sick leave and early retirement were considered. Direct costs are costs for goods and services used in the prevention, diagnosis and treatment of the disease in question, as well as rehabilitation and other medical consequences of the disease. Private costs incurred by the patient and family and other public resources (e.g. transportation) are also included under this heading. Indirect costs are defined as the value of the output that is lost because people are impaired from working or too ill to work (Luce and Elixhauser 1990). Typical cost items in this category are costs for short-term absence from work, early retirement pensions caused by disability, and premature death.
The approach of evaluating life as the value of lost production is known as the human capital approach. This method is based on the assumption in neoclassic economic theory that, in a situation with full employment, the wage rate of an employee is equal to the marginal revenue that the employer generates by hiring him or her. For example, the loss of productivity associated with disability is evaluated using gross earnings lost or some proportion of the gross earnings if an individual is unable to work to his/her full capacity (Hodgson and Meiners 1982, Luce and Elixhauser 1990). Some favor the term productivity costs instead of indirect costs (Gold et al. 1996), but not even that term is ideal. The costs of lost productivity would perhaps be the most fitting description (Brouwer et al. 1997). In this study we will use the traditional term indirect costs, since it is still the most common terminology.
It has been argued that the human capital approach of measuring indirect cost overestimates the production loss from absence due to sickness, from disability, and from premature mortality (Koopmanschap et al. 1995). If an employer can replace a worker who is on long-term leave with someone who is currently unemployed, then the only period during which costs for lost productivity may occur is the friction period, i.e. the period between the beginning of absenteeism and replacement. The friction cost method has been proposed as an alternative to the traditional human capital approach, since the latter does not take account of slacks in the economy such as unemployment (Koopmanschap and van Ineveld 1992, Koopmanschap et al. 1995). However, in the absence of empirical data on the length of the friction period in Sweden, and the extent to which employees on long-term sick leave are actually replaced by unemployed people, we have only used the traditional human capital method of estimating the indirect costs in this study.
Some would argue that intangible costs, which include pain, psychosocial suffering, and changes in social functioning and activities of daily living caused by the disease, should also be included in an estimate of the cost of illness. However, pain and psychosocial suffering would turn up on the benefit side rather than the cost side in a cost-benefit analysis, i.e. we are dealing here with health effects rather than costs (Koopmanschap et al. 1995). Although it is possible to assign a monetary value to health effects, we have not included any estimate of the intangible costs in the Results section, but we return to the issue of intangible costs in the Discussion.
Prevalence and incidence
Cost-of-illness studies can be performed by using either prevalence-or incidence-based methods (Hodgson and Meiners 1982). Prevalence-based studies examine costs incurred during a given time period, usually 1 year, regardless of the date of the onset of disease. Incidence-based studies examine costs for cases of the disease that have developed for the first time in that year. Future costs and production losses are then estimated for the entire lifetime of these patients, and calculated in terms of present values. Since incidence-based studies can be used to calculate the economic benefits of reducing the number of new cases, they are suitable for evaluation of preventive measures (Henriksson 2001). For a long-term disease with a changing pattern of incidence, an incidence-based cost-of-illness estimate may bear little relation to the current annual costs for the disease, which makes it difficult to compare these costs with the total annual healthcare expenditure. The prevalence approach is therefore preferable for comparisons of the annual costs for a disease with the total annual costs for other, or all, diseases. In this study, the prevalence approach was chosen since the goal of the study was to put the cost of illness of low back pain into a larger perspective rather than to evaluate any specific preventive measures.
Inpatient care
The average number of bed-days in 2001 was combined with per-diem costs for the same year. The unit costs for different departments were obtained from hospital price lists in Malmö, Lund, Linköping, Uppsala, and Umeå. The inpatient costs are summarized in Table 1. Orthopedic care was by far the most important type of inpatient care, representing about 70% of the total inpatient costs.
In Table 1, only discharges where one of the above diagnoses was the main diagnosis were included. This leads to an underestimation of the costs, since there are cases in which the above diagnoses are important as a secondary diagnosis. However, if we include the cases where the above diagnoses are secondary diagnosis, it is difficult to get the costs right. If secondary diagnoses are included, and the costs are calculated in the same way as above, then the total cost would be 36.6 million EUR. The true costs are probably somewhere between 33.0 and 36.6 million EUR, but the lower figure has been chosen cautiously as the base case estimate.
Ambulatory care
Ambulatory care includes visits to physicians, nurses and physiotherapists. According to local Swedish studies, outpatient visits for back pain account for about 2-3% of all outpatient visits in Sweden. The back and neck pain report from the Swedish Council on Technology Assessment in Health Care refers to a study in the county of Jämtland, which reported that 2-3% of all outpatient visits in primary care concern back pain (SBU 2000). In southwest Stockholm, back pain (or more specifically ICD-10 codes M543-M545 and M549P) constituted about 2.8% of all outpatient visits in primary care (EK-gruppen 2003, Swedish Federation of County Councils 2003, and our own calculations). In the US, the corresponding figure was also estimated to be 2.8%, based on data from the National Ambulatory Medical Care Survey (Hart et al. 1995). In the absence of more precise information at a national level in Sweden, we have assumed that 2.5% of all outpatient visits concerned back pain. Since the national costs for primary care physicians were 1,228 million EUR in 2001 (Swedish Federation of County Councils 2002a), the costs for primary care outpatient visits were estimated to be 30.7 million EUR. The costs for outpatient visits in outpatient somatic care were 2,044 million EUR (ibid.), which implies costs for back pain of 51.1 million EUR. The total costs from outpatient visits were 81.8 million EUR. The costs for physiotherapy are also important, since back-pain patients often visit physiotherapists more frequently than they visit physicians. The lack of national statistics on the proportion of physiotherapy visits that concern back pain makes it difficult to estimate these costs exactly. However, an estimate of the costs for physiotherapy can be obtained by combining the total costs for physiotherapy with local studies on the proportion of visits with back pain as the main diagnosis. There are some rather old estimates of the proportion of back pain patients attending physiotherapy, and in the absence of better information we have used these figures as a basis for an estimate of the costs. According to a study of physiotherapists performed in the county of Jämtland, 42% of the visits in primary care and 60% of the visits to private practitioners concerned back pain (the study is referenced in SBU 2000).
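As a quick arithmetic check, the outpatient-visit estimate can be restated in a few lines of Python. This is only a sketch; the 2.5% share and the national totals are the figures quoted above, and the variable names are ours.

```python
# Plausibility check of the ambulatory-care estimate described above.
# Inputs are taken from the text; all figures are in million EUR (2001).
back_pain_share_of_visits = 0.025      # assumed share of outpatient visits concerning back pain
primary_care_total = 1228.0            # national cost of primary care physicians
somatic_outpatient_total = 2044.0      # national cost of outpatient somatic care

primary_care_lbp = back_pain_share_of_visits * primary_care_total
somatic_lbp = back_pain_share_of_visits * somatic_outpatient_total
total_outpatient_lbp = primary_care_lbp + somatic_lbp

print(f"Primary care: {primary_care_lbp:.1f} MEUR")                  # ~30.7
print(f"Somatic outpatient: {somatic_lbp:.1f} MEUR")                 # ~51.1
print(f"Total outpatient visits: {total_outpatient_lbp:.1f} MEUR")   # ~81.8
```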
Pharmaceuticals
Since many of the pharmaceuticals used by low back pain patients are NSAIDs, analgesics, muscle relaxants, gastrointestinals and other pharmaceuticals that are prescribed to patients with co-morbidities and diffuse pain symptoms, it is not possible to identify the pharmaceutical costs for low back pain patients exactly. However, a reasonable estimate can be obtained by looking at the proportion of prescriptions that are written with low back pain as the main diagnosis. The cost of pharmaceuticals prescribed for low back pain can then be estimated by assuming that the proportion of costs is the same as the proportion of prescriptions.
The number of prescriptions for back pain was obtained from the "Diagnosis and Therapy Survey" of the National Corporation of Swedish Pharmacies (Apoteket AB), the Swedish state monopoly retailer of medical products. The total sales in ATC group M, 92.3 million EUR, were multiplied by the percentage of prescriptions for each diagnosis in order to get an estimate of the costs (Table 2). The total costs of pharmaceuticals for back pain were estimated to be 23.1 million EUR. There are also pharmaceutical costs for inpatients, but these are already included in the general inpatient costs. OTC pharmaceuticals that are bought out-of-pocket by the patients for self-medication of low back pain ought to be included, but that was not possible in the current study, since a patient survey would have been required to obtain such information.
Short-term absence from work
The costs for short-term absence from work were estimated based on the expenditure of the Swedish National Social Insurance. In 2001, the total expenditure for short-term absence from work was 4,150 million EUR. Low back pain (M54) accounted for 10.7% of this amount, i.e. 444 million EUR (RFV 2002b). This is not a good measure of the production loss, however, since the sickness pay from the national insurance system is only about 80% of the ordinary salary. In order to estimate the production loss, the payroll taxes (32.8%) must also be added, since the value of production is equal to the total labor cost from the employer's standpoint rather than the salary received by the employee. The indirect costs due to absenteeism were 4,150 × 0.107 × 1.33/0.80 = 738 million EUR.
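The grossing-up from insurance expenditure to production loss can be written out explicitly; the short Python sketch below only restates the calculation in this paragraph, with variable names of our own choosing.

```python
# Indirect cost of short-term sick leave due to low back pain, Sweden 2001.
total_sick_leave_expenditure = 4150.0   # million EUR paid out for short-term absence
lbp_share = 0.107                       # share attributed to low back pain (M54)
payroll_tax_factor = 1.33               # payroll taxes of ~33% added on top of wages
reimbursement_level = 0.80              # sickness pay is ~80% of the ordinary salary

production_loss = (total_sick_leave_expenditure * lbp_share
                   * payroll_tax_factor / reimbursement_level)
print(f"{production_loss:.0f} million EUR")  # ~738
```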
Early retirement pensions
The indirect costs from early retirement pensions can be estimated in several ways. Since we do not know the exact proportion of expenditure due to low back pain, one option is to base the estimate on the number of diagnoses in the musculoskeletal area. In 1996, a total amount of 1 679 million EUR was paid out in early retirement pensions to people with diagnoses involving the musculoskeletal area (RFV 1998). Since 22.3% (37 112/166 431) of these diagnoses involved low back pain, it is reasonable to assume that about the same percentage of the expenditure for early retirement pensions went to low back pain patients. Taking inflation (4.3% from 1996 to 2001), the reimbursement level (64%), and payroll taxes (33%) into account, the estimated value of lost production in 2001 was 1 679 × 0.223 × 1.043 × 1.33/0.64 = 811 million EUR.
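The same kind of check applies to the prevalence-based early-retirement figure; the sketch below merely restates the numbers quoted above (a rough check, not part of the original analysis).

```python
# Prevalence-based estimate of production loss from early retirement pensions.
musculoskeletal_pensions_1996 = 1679.0  # million EUR paid out in 1996
lbp_share = 37112 / 166431              # share of musculoskeletal diagnoses that are LBP (~22.3%)
inflation_1996_to_2001 = 1.043
payroll_tax_factor = 1.33
reimbursement_level = 0.64

production_loss = (musculoskeletal_pensions_1996 * lbp_share
                   * inflation_1996_to_2001 * payroll_tax_factor / reimbursement_level)
print(f"{production_loss:.0f} million EUR")  # ~811
```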
An alternative method would be to use the incidence approach, which is based upon the number of new pensions granted in a certain year. The prevalence approach, which was used above, is based instead on the expenditure for all who received early retirement pensions for low back pain during a given year, regardless of the year of onset of disease.
In the incidence approach, the number of expected working years lost is estimated by looking at the number of cases in each age group and then taking account of the mortality risk and the number of years left to the normal retirement age. In the calculation below, it has been assumed that people with low back pain have the same mortality risk as the average population. The working years lost in each age group have been discounted at a rate of 3% (Gold et al. 1996), and then multiplied by the number of new pensions granted in 2001 (RFV 2002a).
The problem is that not all cases in Table 3 concern low back pain, since the figures do not contain low back pain as a separate diagnosis. Low back pain is classified among "other and unspecified diseases of the back". The total number of cases in this group was 7 224 in 2001, and in the musculoskeletal group the total number of cases was 22 698. If we assume, as in the calculation above, that about 22% of the cases in the musculoskeletal group refer to low back pain, then the number of cases would be about 4 600. If the number of working years lost is multiplied by the number of working hours per year and the labor cost per hour, the value of lost production due to early retirement pensions can be estimated. Assuming 45 full working weeks per year, 34.6 h per week for men, 26.8 h per week for women, and a labor cost of 20.5 EUR per hour (Statistical Yearbook of Sweden 2002), the costs for men were estimated to be 811 million EUR, and the costs for women to be 998 million EUR, giving a total production loss of 1 810 million EUR.
As about 60% (4 600/7 224) of these costs refer to low back pain, the costs for this diagnosis are about 1 100 million EUR, which is slightly higher but still roughly in line with the prevalence-based estimate presented above. As the base case, the figure based on national insurance expenditure and the number of back pain diagnoses linked to early retirement was chosen, primarily because it is a prevalence estimate. The two different calculations presented here show, however, that the results were in the same order of magnitude no matter which method was used.
Costs of low back pain in Sweden in 2001
The direct and indirect costs resulting from low back pain in Sweden in 2001 are summarized in Table 4. As the base case for the indirect costs, the prevalence approach based on expenditure in the National Social Insurance system was chosen. Indirect costs represent the greatest costs by far, while the direct costs for prescription pharmaceuticals and inpatient care are relatively small by comparison.
Calculation of the cost per patient requires a relevant prevalence measure. However, the prevalence of low back pain is very much dependent on how back pain is defined (SBU 2000). The 1-year prevalence is 40-50% (SBU 2000), but this figure is based on people's self-reported back-pain problems in population studies rather than actual use in healthcare and health insurance. Most people with occasional back pain problems do not seek healthcare for their symptoms. The main question here involves whether we should include all those who have symptoms, or all who are treated in one way or another. Since the latter figure is not readily available, the cost per patient will be based on 1-year prevalence figures from population studies. Assuming a 1-year prevalence of 10% between 0-18 years of age and 40% for people 19 years of age and older, the cost per person with low back pain at some time during the year 2001 was 632 EUR. The cost per person in the total Swedish population (8 909 128) was about 211 EUR.
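The cost-per-patient figure depends on an assumed age split of the population, which is not stated explicitly above; the sketch below therefore uses an illustrative share of 0-18-year-olds and should be read as a rough check only.

```python
# Rough cost-per-patient calculation, following the prevalence assumptions in the text.
total_cost = 1860.0e6            # EUR, total cost of low back pain in 2001
population = 8_909_128
share_age_0_18 = 0.23            # assumed share of the population aged 0-18 (illustrative only)
prev_young, prev_adult = 0.10, 0.40   # 1-year prevalence assumptions from the text

n_patients = population * (share_age_0_18 * prev_young + (1 - share_age_0_18) * prev_adult)
print(f"Patients: {n_patients/1e6:.2f} million")              # roughly 2.9 million
print(f"Cost per patient: {total_cost/n_patients:.0f} EUR")   # close to the 632 EUR quoted above
print(f"Cost per capita: {total_cost/population:.0f} EUR")    # close to the ~211 EUR quoted above
```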
Comparison with previous studies
Our findings are hardly surprising. Earlier studies in Sweden and other countries have also found that the indirect costs are much greater than the direct costs. According to a recent Swedish report on back and neck pain (SBU 2000), the total costs to society for back pain (including low back pain) in Sweden amounted to 3.2 billion EUR in 1995 (3.4 billion EUR in 2001 prices). The predominant costs are the indirect costs for sick leave and early retirement, which amounted to 2.9 billion EUR, or 92% of the total cost. A Dutch study estimated the total cost of back pain in the Netherlands in 1991 to be 5.0 billion US dollars (5.2 billion EUR according to exchange rates in 2001), 93% of which concerned indirect costs (van Tulder et al. 1995). A British study estimated the total cost of back pain in the UK in 1998 at 12.3 billion pounds (20.4 billion EUR; 2001 rates), 87% of which concerned indirect costs (Maniadakis and Gray 2000). While these sums differ as a result of the different population sizes in the three countries, the costs per capita are very similar: 380 EUR in Sweden, 350 EUR in Holland, and 350 EUR in Britain. The proportion of indirect costs in our study was 84%, which is slightly lower than in earlier Swedish studies. One problem, however, is that it is difficult to apply the same classification of diagnoses to physician visits, inpatient care and pharmaceuticals in a consistent way. Unless all costs are systematically broken down according to ICD-10 codes, it is hard to get the same selection of diagnoses for all cost items. We have tried as far as possible to match the back-pain diagnoses included in the direct costs and in the indirect costs, but a slightly broader range of diagnoses may be included for some cost items than for others. This is an almost unavoidable disadvantage of using the top-down approach to the cost of illness.
Completeness and validity of data
The present cost-of-illness study is far from complete. For example, the following direct cost items were excluded in the present estimate of the costs for low back pain because of lack of data: medical equipment, devices, and orthopedic aids; OTC pharmaceuticals for self-medication (pain relief); adaptations of house, kitchen, or bathroom; transportation to and from clinics and hospitals; community social services, such as back-pain related home help; private services; and time spent on informal care by relatives and friends.
Most of these cost items are unavailable with the present top-down approach to the cost of illness, but could have been considered by using the bottom-up approach, i.e. by distributing a patient questionnaire to a sample population and then extrapolating the results to the national level. Since a bottom-up study is often more complete, estimates of the cost of illness are generally higher if this method is chosen (see, e.g., Henriksson et al. 2001). Some of the cost items listed above can be quite expensive, which means that the present cost-of-illness estimate is an underestimate of the direct costs.
The proportion of back pain patients in outpatient care is somewhat uncertain, since no figures are available at the national level in Sweden. Given the existing evidence in previous studies, however, 2.5% seems to be a reasonable estimate. The cost of physiotherapy in 1995 was estimated at 103 million EUR in 1995 prices (SBU 2000), or 152 million EUR in 2001 prices, compared to 170 million EUR in our study. The costs have thus gone up by 11% in real terms since 1995. However, between 1987 and 1995 the costs for back-pain physiotherapy doubled (SBU 2000), so an increase of 11% is moderate by comparison.
In general, national registers such as the inpatient register or the national insurance registers are reliable data sources, with coverage close to 100% and very good validity of diagnoses. Some data sources are less reliable, e.g. the "Diagnosis and Therapy Survey" of Apoteket (the Swedish pharmacy chain), which was used for the pharmaceutical costs. The response rate is typically about 50% in this survey. The costs for physiotherapy were also quite unreliable, since they were partly based on a local study.
There are at least two ways of estimating the indirect costs of production losses caused by shortterm absence from work due to low back pain. The method we pursued was to look directly at the expenditure in the Swedish National Social Insurance system. The disadvantage of this method is that these figures are not directly relevant as a measure of the production loss, since only the amount paid out from the national sickness insurance is included. We compensated for this by taking reimbursement level and payroll taxes into account. It is difficult to do this in a completely rigorous way. For example, we disregarded the qualifying period before different benefits are paid out, and the fact that beyond a certain income level, no further benefits are paid out by the national insurance (even though there may be further private insurance coverage). Another way of estimating the value of lost production would be to look at the number of days of sickness, and combine this with the average labor cost per hour in Sweden. However, the latest available figures at the national level concerning the number of days of sickness absence specifically due to low back pain appear to be from as far back as 1990. Since these figures are not necessarily representative of conditions in 2001, the latter approach was not pursued here.
Apart from indirect costs for short-term absence and early retirement, cost-of-illness studies usually consider the loss of production that is caused by premature mortality in a disease. However, there does not seem to be any data on premature mortality due to low back pain in Sweden.
Estimation of indirect costs
The indirect costs in this study are lower than in the previous studies performed by the Swedish Council on Technology Assessment in Health Care (SBU 1991, 2000). The indirect costs for back pain in 1995 were 2.9 billion EUR, which amounts to 3.1 billion EUR in 2001 prices, i.e. more than 1 billion EUR more than the indirect costs for low back pain in 2001. One obvious reason for the difference is that the 1995 figures include pain in neck and shoulders, which increases the costs by about 30-50% (SBU 2000). Even so, the difference is slightly greater than expected, which may imply that the indirect costs for low back pain decreased between 1995 and 2001. However, it is difficult to determine whether the greater than expected difference is caused by real changes in costs over time, or whether it is due to the methodological differences in data sources, the diagnoses included, or the principles of calculation.
Intangible costs
As mentioned in the Methods section, the intangible costs of a disease are physical and emotional pain, and other effects on the patient's quality of life. Some would argue that cost-of-illness studies that do not take account of the intangible costs underestimate the contribution of the disease to the total burden of disease in society. For illustrative purposes, we will calculate an estimate of the intangible costs and discuss the implications of the result. An estimate of the intangible burden of low back pain can be obtained by comparing the quality of life of patients with the disease with the quality of life of the general population, and then assigning a monetary value to the loss in health in terms of QALYs . The QALY (Quality-Adjusted Life Year) combines-in one measure-the length of life and the quality of life of patients by assigning to each time period (e.g., a month or a year) a quality-of-life weighting ranging from 0 to 1, where 0 represents death and 1 perfect health. The quality of life weighting for low back pain is 0.66 on a scale from 0 to 1, according to the EQ-5D index value (EQ-5D (EuroQoL-5Dimensional) is a standardized quality of life form with five questions. Health states defined by answers to the EQ-5D form can be converted into a weighted health state index between 0 and 1 by applying results from general population samples (Dolan 1997)).
The corresponding value for middle-aged people (i.e. aged 40-49 years) is 0.86 (Burström et al. 2001). The difference between these values cannot be used directly, since there may be co-morbidities, differences in age and income, and other confounding factors. Instead, the regression coefficient for low back pain can be used. The regression coefficient states that, all else being equal, individuals with low back pain have an EQ-5D index value that is 0.1154 units lower than average. The prevalence of low back pain in the study by Burström et al. (2001) was 15.7% (481/3069), which corresponds well with point-prevalence figures in the literature (SBU 2000). The loss of QALYs can be calculated according to the formula (population of country) × (prevalence) × (difference in QALYs between person with and without low back pain). Assuming a constant point prevalence of 15.7% in the Swedish population, this would imply a loss of 8 909 128 × 0.157 × 0.1154 = 161 400 QALYs per year. With a value per QALY of 56 000 EUR (Newhouse 1998, Ekman 2002), the intangible cost would be 9.0 billion EUR. This is almost five times greater than the total cost of illness figure reported in this study. It should be mentioned, however, that there are several ways of calculating the monetary value of a QALY, and different methods tend to give different results. The method that generally results in the lowest valuation per QALY gained is the human capital method. In a survey by Hirth et al. (2000), the median value of a QALY was 24 777 US dollars for studies using the human capital method, or 27 700 EUR in 2001 prices (USD1 = EUR1.12). By using this lower value of a QALY, an intangible cost of 4.5 billion EUR would be obtained, which is still a considerable amount.
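For completeness, the QALY arithmetic above can be restated in a few lines; the two monetary values per QALY are the ones cited in the text, and the variable names are ours.

```python
# Illustrative estimate of the intangible burden, restating the QALY arithmetic above.
population = 8_909_128
point_prevalence = 0.157        # from Burström et al. (2001)
eq5d_decrement = 0.1154         # regression coefficient for low back pain

qaly_loss = population * point_prevalence * eq5d_decrement
print(f"QALYs lost per year: {qaly_loss:,.0f}")   # ~161,400

for value_per_qaly in (56_000, 27_700):           # EUR per QALY, the two valuations cited above
    print(f"Intangible cost at {value_per_qaly} EUR/QALY: "
          f"{qaly_loss * value_per_qaly / 1e9:.1f} billion EUR")   # ~9.0 and ~4.5
```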
These results appear to indicate that the intangible costs of pain and suffering are considerably greater than the sum of the direct costs of treatment and the indirect costs of production loss. The problem is that the intangible costs are not costs in the usual sense, as they will not show up as a monetary sum on anyone's balance sheet. Unlike direct treatment costs and production losses, intangible costs do not represent resources that could have had alternative uses in healthcare or in society as a whole. What the intangible costs represent is rather the potential value of eradicating the disease, which is a measure of health benefits rather than costs. The concept of intangible costs may therefore be somewhat misleading, and should not be included in an estimate of the cost of illness. However, the exercise presented here was far from meaningless; the same type of calculation can be used to estimate the value of health changes (Burström et al. 2003).
Concluding remarks
The direct costs for low back pain constitute 1.7% of the total healthcare costs, and the indirect costs for short-term illness due to low back pain constitute 10.7% of the costs for short-term absence. The indirect costs for low back pain as part of the costs for early retirement are more difficult to calculate, but probably amount to somewhere between 6-11%. In terms of costs, low back pain is primarily a problem for the health insurance system. The conclusions for policy decisions that can be drawn from this study are rather limited, but it is plausible to assume that even if new treatments for low back pain increase the direct healthcare costs, they may still be worthwhile if they can contribute to lower rates of short-term illness, long-term disability, or both-especially if improvements in quality of life are taken into account.
Cost-of-illness studies have raised much criticism both on methodological grounds and for being of doubtful value for policy-making purposes (see, for example, Shiell et al. 1987 and Koopmanschap 1998). In particular, the fact that a certain disease costs a certain amount does not in itself tell us whether more or less resources should be spent on treating the disease. What we need for policymaking purposes is rather economic evaluations that assess both costs and health effects of single medical interventions or healthcare programmes (Byford et al. 2000). However, a cost-of-illness study can be an important step in generating ideas for further research, and can act as a building block in a subsequent economic evaluation. In such an evaluation, the change in direct and indirect costs of an intervention or a programme would be weighed against the change in health effects (Drummond et al. 1997).
To conclude, the cost of illness for low back pain is substantial, but does not appear to have risen with time during the past 10-15 years. The indirect costs have probably even decreased a little since the 1990s, but it is difficult to say by how much, since there are methodological differences compared to previous studies. The direct costs seem to have increased slightly. The intangible costs, or rather the potential (and probably unattainable) benefits of eradicating low back pain, have also been calculated for comparison, even though this figure should not be included in the cost-of-illness estimate. | 2018-04-03T01:02:00.278Z | 2005-01-01T00:00:00.000 | {
"year": 2005,
"sha1": "83ce4e3b32931d30e4c7817bda99c5fd0bb48176",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1080/00016470510030698",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "c4c51725133d52acf5496efccf9a49e7bd0ac91c",
"s2fieldsofstudy": [
"Economics",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
203000052 | pes2o/s2orc | v3-fos-license | EXPERIMENTAL STUDY OF THE COMPOSITE STEEL DECK IN TERMS OF GEOMETRIC AND MECHANICAL PARAMETERS
The behavior of the composite steel deck is governed mainly by the interaction that exists between the concrete and the steel deck; the shear connection between these two materials is provided by the mechanical adherence at their interface. In order to study these characteristics, a series of full-scale slab tests have been carried out, such as bending tests to identify the slipping load, which is directly related to the shear bond strength of the slab. The study of the shear bond strength is very important because in most cases the geometric conditions of the section and the slenderness of the slab will lead to this type of failure. For this reason, this research studies this failure condition, based on previously tested full-scale slabs and newly constructed specimens, in order to cover as many situations as possible. The experimental results of these tests, such as the slipping load and the corresponding deflection, allow a new relationship to be established between the slipping load and the geometric and mechanical parameters. This proposed relationship is developed for each of the steel deck profiles under study, and the formulation is validated as an alternative proposal to the "m and k" method.
INTRODUCTION
The composite steel deck became known in Peru at the end of the 1990s, thanks to the advantages that this type of slab showed worldwide, in comparison to the traditional systems that existed at that time. A composite slab is defined by ASCE (1992) as a system comprising of normal weight or lightweight structural concrete placed permanently over cold-formed steel deck in which the steel deck performs the dual role of acting as a form for the concrete during construction and as positive reinforcement for the slab during service [1].
This type of slab is composed of concrete and a galvanized steel deck; the galvanizing type is G90 and must meet the requirements established in ASTM A653 [2]. The connection between these two materials is provided by mechanical adhesion (through the embossments), which allows the system to work as a composite section and reach its maximum capacity if the embossments perform correctly.
The composite slab has three important advantages over typical slabs: first, the steel sheet acts as the principal reinforcement in place of rebar; second, during construction the steel deck acts as a work platform; and third, no formwork is necessary, because the steel deck profile provides adequate stiffness and strength to support construction live loads and wet concrete [3].
There are three failure modes in these composite slabs (flexural, shear, and shear-bond), but the capacity of most of these slabs is governed by the shear bond stress at the interface between the steel deck and the concrete, which is directly related to the embossments of the deck profile. The degree of interaction between these materials determines the maximum load reached. This is why it is important to study these slabs under a bending test: it allows the slipping load (the limit of the shear bond stress) and the corresponding deflections to be determined, in order to check whether they are within the permissible limits in the service state.
Furthermore, the bending test allows the behavior of the composite slab to be studied and the failure modes to be analyzed for different combinations of geometric and mechanical parameters, such as steel profile shape, span length, total slab thickness, and steel deck thickness (gage).
The shear bond "m and k" method, presented by Porter, M., & Ekberg, C. [4], is a composite slab design method that is included in different standard specifications and is widely accepted worldwide, for example in the ASCE standard and Eurocode 4 [5]. This method consists of a full-scale bending test and relates the slipping load and the geometric parameters for each deck profile and thickness.
In the tests carried out to date, it has been observed how deflection affects the functionality of the system and the need to include it in the design criteria.
BACKGROUND
In previous investigations, presented by Miguel Diaz [6], 36 composite steel deck slabs were tested in bending with AD-600, AD-900 and AD-730 profiles for different configurations of geometric parameters and steel sheet thicknesses of 0.909 mm and 0.749 mm (gages 20 and 22, respectively) [7].
Also, in the same investigation, pull-out tests were carried out, which are adhesion tests that allow the shear bond stress to be quantified in a piece of the slab.
Furthermore, with the results obtained, the parameters m and k were determined for each profile and each gage, giving a first proposal for the design of composite steel decks governed by shear bond failure.
DESCRIPTION OF THE SPECIMENS
With the purpose of extending the results obtained so far, 15 new specimens were built with the AD-730 and AD-600 profiles for a steel sheet thickness of 1.504 mm (gage 18), shown in Figure 1 and Figure 2, respectively. This was done in order to better study the bending behavior of the slabs according to their geometric parameters and thus complement the information from the other 36 specimens.
The specimens were built in three groups: two castings of six slabs each, and a final casting of three slabs (the longer slabs).
TEST PROGRAM
The newly built specimens will be subjected to static bending tests following the recommendations of ANSI/ASCE 3-91 "Standard for the Structural Design of Composite Slab" Chapter 3 Performance Test.
Figure 3 shows the distribution of the channels that measure the vertical displacements and the load during the test. It also shows the location of the slab, which is supported on rollers resting on metal beams that are adapted to the length of the specimens. Additionally, in Table 1, the channels used in this research are briefly described. The load capacity of the hydraulic jack was 500 kN, and the load was transmitted through a rail so as to be applied at the third points, as presented in Figure 4. Additional channels were placed to measure the end slip in order to study the horizontal shear bond, following the recommendations given by Abdullah, R. and Easterling, W. S. [8].
RESULTS BENDING TEST
The load is increased until the failure load is reached; this progressive increase allows the slipping load and the deflections to be observed at each moment.
The load applied through the hydraulic jack is transmitted as two point loads on the slab, as indicated in Figure 5. Furthermore, given this load condition, the shear force (V = P/2) is constant along the shear span. In the graphs for the AD-730 profile, it can be noticed that the slipping load increases with the thickness of the slab. In addition, it can be observed that after reaching the slipping (detachment) load, a drop occurs in the curve, which then gives way to over-resistance that in some cases can exceed the slipping load or remain below it.
A similar situation occurs with the AD-600 profile, with the exception of specimen L-011 (the most slender), in which it is not possible to identify the slipping load due to the slenderness of the slab. The geometric properties of each specimen, the slipping loads, and the deflections at which they occur are indicated in Table 1 and Table 2 for the AD-730 and AD-600 profiles, respectively.
Note that Pslip is the slipping load, and the reported deflection is the one associated with the slipping point. Table 4, Table 5 and Table 6 show the values for the AD-900, AD-730 and AD-600 profiles, respectively, according to the results obtained in 2009.
FUNCTIONALITY OF THE SYSTEM
The behavior curves of the composite steel deck show that the functionality of the system can be limited mainly by two factors. The first factor is the slipping load, since beyond that point the interaction between the two elements is only partial and the slab no longer works as a composite section. The second factor is the maximum deflection (L/360), which determines the functionality of the system. Therefore, it is important to identify at what deflection the slipping load occurs, since in some cases the slipping load occurs after the limit deflection has been exceeded; in this case, the load corresponding to L/360 must be determined to establish the maximum service load.
In other specimens, it can be seen that the slipping load occurs before the limit deflection is reached; in this event, the functionality of the system is governed by the shear bond strength.
Then, all the slipping loads (from all the tests) are plotted, separated by profile and span length, in order to visualize at which deflection slipping occurs in comparison with the limit deflection.
Figure 11 shows the results for the AD-900 profile with a span length of 2750 mm, and it can be noticed that there are specimens in which the slipping load occurs before the limit deflection and others in which it occurs after. However, in Figure 12, for a span length of 3780 mm, all specimens have a slipping point after the limit deflection has been exceeded. Figures 15 and 16 show the results for the AD-600 profile.
THE SHEAR BOND m AND k METHOD
In the tests carried out previously (M. Diaz, 2009) the parameters "m" and "k" were determined for each profile and for each gage (20 and 22) from expression (1), thus giving the first experimental results for the design of these structural elements. The parameters "m" and "k" obtained in that study are shown in Table 7, according to the type of profile and the gage.
NEW PROPOSED METHOD
For the new design proposal, a new linear regression will be made between the slipping load and the geometrical and mechanical parameters of each specimen.
Unlike the "m" and "k" method, this new correlation gathers all the results obtained for a specific profile (AD-900, AD-730 or AD-600); that is, it groups all the slipping loads regardless of the gage, since the cross-sectional area of the profile is included among the geometric parameters.
This way of studying the results allows the slipping load to be related directly to the slab thickness, span length, and steel area; expression (2) shows the relationship between all the mentioned parameters.
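Since the exact form of expression (2) is not reproduced here, the following short sketch only illustrates the kind of multiple linear regression involved; the choice of regressors, the data points, and all variable names are placeholders of our own, not values or coefficients from the paper.

```python
# Generic sketch of the proposed idea: fit the slipping load against slab thickness,
# span length, and steel area by least squares. The regressor form and the data below
# are illustrative placeholders, not the relationship reported in the paper.
import numpy as np

# Hypothetical test results: (slab thickness mm, span length mm, steel area mm^2, slipping load kN)
tests = np.array([
    [100, 2000, 1500, 55.0],
    [110, 2500, 1500, 48.0],
    [120, 3000, 1800, 46.0],
    [150, 3500, 1800, 52.0],
])
X = np.column_stack([tests[:, 0], 1.0 / tests[:, 1], tests[:, 2]])  # assumed regressors
y = tests[:, 3]

coeffs, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(y))]), y, rcond=None)
print("fitted coefficients (illustrative only):", np.round(coeffs, 4))
```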
In addition, in this new proposal, the verification of the functionality of the system will be incorporated, since this could be limited by the deflection or the shear bond strength.
This is a very important difference in comparison to the "m" and "k" method, because that method only provides the slipping load through the linear regression performed for each profile and gage, leaving the verification by deflection as an additional, purely theoretical calculation, even though in many cases it is the one that governs the functioning of the system.
To incorporate this criterion into the new proposed method, the figures showing the deflection measured at the slipping point were analyzed, and the percentage reduction of the load was quantified for the cases in which slipping occurred after the limit deflection had been exceeded; this procedure is shown in Figure 17. For the cases in which slipping occurs before the limit deflection, the reduction is zero percent, and the functionality is governed by the shear bond strength.
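A minimal sketch of how this functionality criterion could be applied is given below; the function name, its signature, and the numerical values are illustrative assumptions of ours, with only the L/360 limit and the idea of a percentage reduction taken from the text.

```python
# Sketch of how the deflection criterion could be folded into the design load. When slipping
# occurs beyond the limit deflection L/360, the slipping load is reduced by the percentage
# quantified for that profile and slenderness (values here are illustrative placeholders).
def functionality_load(p_slip_kn, deflection_at_slip_mm, span_mm, reduction_pct):
    limit_deflection = span_mm / 360.0
    if deflection_at_slip_mm <= limit_deflection:
        return p_slip_kn                      # governed by the shear bond strength
    return p_slip_kn * (1.0 - reduction_pct)  # governed by the limit deflection

print(functionality_load(50.0, 5.0, 2500, 0.20))   # 50.0: slip occurs before L/360 (~6.9 mm)
print(functionality_load(50.0, 12.0, 2500, 0.45))  # 27.5: slip occurs after L/360, 45% reduction
```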
This percentage was analyzed for each profile, as seen in Table 8, and it was noted that the reduction is greater in the slender slabs compared to the compact slabs. In the previous table, it is observed that the percentages of reduction vary according to the type of profile, with the AD-900 profile having the greatest reduction, contrary to the AD-730 profile. In addition, it can be seen that in the AD-730 profile there is a 0% reduction, which indicates that for this slenderness region the functionality is conditioned by the slipping load, which occurs before the limit deflection (L/360).
The experimental results, plotted as shown in expression (1), are shown in Figure 18, Figure 19 and Figure 20 for the AD-900, AD-730 and AD-600 profiles, respectively. In these graphs, the linear regression of the experimental data and the reduced linear regression obtained from the reduction percentages in Table 8 are also included. The parameters of the reduced linear regression are obtained from the graphs shown and are summarized in Table 9, in order to obtain the shear force associated with the functionality of the system. Next, the graphical comparison between the m and k method and the new proposal is presented, to see how the loads given by the two methods vary according to the span length and the slab thickness, the latter varying according to the profile. In addition, the experimental results are added in order to analyze how closely the described methods match them. In these first results it is observed that there is a difference between the values given by the two methods, but it can be noticed that this difference narrows for the compact slabs, that is, for slabs with greater thickness.
Similarly, Figure 23 presents the corresponding comparison. Finally, the theoretical values for the AD-600 profile are indicated, again for gage 22. Figure 25 shows the values for a thickness of 110 mm, covering span lengths from 2000 mm to 4500 mm, and the same comparison is plotted for a 150 mm slab thickness. In the case that the compressive strength of the concrete used in the design is lower than the strength obtained in the tests carried out, the recommendation indicated by the ASCE will be followed, using expression (3) to determine the shear force associated with the functionality limit.
CONCLUSIONS
- In the slender slabs, the slipping load occurs at a deflection that exceeds the limit deflection (L/360); therefore, a reduction percentage must be calculated in order to determine the load that governs the functionality of the composite steel deck. - The reduction percentage is much higher for the AD-900 profile, reaching up to 45% for the slender slabs, compared to the AD-730 profile, which shows a 20% reduction. - The linear regression performed on the experimental data, according to the proposed method, presents a better correlation for the AD-900 and AD-600 profiles, while for the AD-730 profile there were some scattered results. - In the comparison of loads for both methods, the m and k method and the new proposed method, it can be seen that the loads obtained by the second method are smaller than those of the m and k method, but this difference narrows for slabs with greater thickness. - The new proposed method can be used as a design alternative, since it is a tool that allows the load limiting the functionality of the system to be determined based on the study of the slipping load and the maximum deflection allowed in the service state. - From the load obtained by the new method, the permissible distributed live load can be calculated for each combination of slab thickness and span length, as well as for a specific concrete strength. | 2019-09-17T02:47:36.938Z | 2019-08-07T00:00:00.000 | {
"year": 2019,
"sha1": "6c7db58af22ca1a4f59c0d3385b142b59d16c021",
"oa_license": "CCBYNC",
"oa_url": "http://revistas.uni.edu.pe/index.php/tecnia/article/download/705/1107",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4da815ce7f3a46aeaf181e664e2f8ebbee8368ad",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
67749153 | pes2o/s2orc | v3-fos-license | Dephasing in the electronic Mach-Zehnder interferometer at filling factor 2
We propose a simple physical model which describes dephasing in the electronic Mach-Zehnder interferometer at filling factor 2. This model explains very recent experimental results, such as the unusual lobe-type structure in the visibility of Aharonov-Bohm oscillations, phase rigidity, and the asymmetry of the visibility as a function of transparencies of quantum point contacts. According to our model, dephasing in the interferometer originates from strong Coulomb interaction at the edge of two-dimensional electron gas. The long-range character of the interaction leads to a separation of the spectrum of edge excitations on slow and fast mode. These modes are excited by electron tunneling and carry away the phase information. The new energy scale associated with the slow mode determines the temperature dependence of the visibility and the period of its oscillations as a function of voltage bias. Moreover, the variation of the lobe structure from one experiment to another is explained by specific charging effects, which are different in all experiments. We propose to use a strongly asymmetric Mach-Zehnder interferometer with one arm being much shorter than the other for the spectroscopy of quantum Hall edge states.
I. INTRODUCTION
The quantum Hall effect (QHE), 1 one of the central subjects of modern mesoscopic physics, 2 continues to attract the attention of both experimentalists and theorists. It is well known that the low energy physics of the QHE at the Hall plateau is determined by the edge excitations, because at strong magnetic fields there exists a gap for excitations in the bulk of the two-dimensional gas (2DEG). Properties of quantum Hall edge excitations were investigated in a number of experimental and theoretical works. 3 However, only very recently the progress in the fabrication of novel mesoscopic systems made it possible to closely focus on the electronic properties of the quantum Hall edge, which were not well understood earlier. In particular, experiments on the quantum interference and dephasing processes in electronic Mach-Zehnder 4 interferometers (MZI) brought remarkable results, which shed light on new physics of quantum Hall edge states. This physics is the subject of our theoretical investigation.
The idea of the electronic MZI is the same in all recent experiments. 5,6,7,8,9 The region of the sample, where the two-dimensional electron gas (2DEG) is present, is topologically equivalent to so called Corbino disk (see Fig. 1). There are at least two ohmic contacts: one is grounded, and the second is biased by the potential difference △µ. The current I is detected at one of the ohmic contacts. In fact, experiments that we discuss used several ohmic contacts for the convenience of the measurement, although only two contacts are required for the realization of MZI. Two QPCs play a role of beam splitters which mix outer edge channels (thin black line in Fig. 1). The inner channels (blue line in Fig. 1) are always reflected from QPCs.
FIG. 1. The Mach-Zehnder interferometer is schematically shown as a Corbino disk which contains the two-dimensional electron gas (2DEG). In strong magnetic field at filling factor ν = 2 two chiral one-dimensional channels are formed and propagate along the edge of 2DEG. Inner channels (blue line) are always reflected from both quantum point contacts (QPC), while outer channels (black line) are mixed by QPCs. Bias ∆µ applied to the upper ohmic contact causes the current I to flow to the lower ohmic contact. This current is due to scattering at QPCs and contains the interference contribution sensitive to the magnetic flux Φ and leading to Aharonov-Bohm oscillations.
Typically, the transparencies of the two QPCs were varied between T_ℓ = 0 and T_ℓ = 1, ℓ = L, R. However, the most interesting physics was observed in two limits: in the regimes of weak tunneling T_ℓ → 0 and of weak backscattering T_ℓ → 1. In the first regime one of the outer channels is biased (upper channel in Fig. 1) and almost completely reflected at the first QPC. Then it runs on the same (upper) part of the Corbino disk. The channel that originates from the second (lower) ohmic contact is grounded. In the second regime (shown in Fig. 1 as an example) the biased channels are almost fully transmitted at the first QPC to the opposite (lower) part of the Corbino disk. The physical consequences of the difference between these two regimes will be discussed later in Sec. IV.
Two ohmic contacts are connected solely via scattering at two QPCs. Consequently, there are two paths between ohmic contacts, which contribute to the total current I. The first path is reflected at the right QPC and transmitted at the left one, while it is the other way around for the second path. It is easy to see that two paths enclose a loop with the nonzero magnetic flux. The Aharonov-Bohm (AB) phase associated with it may be changed either by varying slightly the strength of the magnetic field, or by varying the length of one of the paths with the help of the modulation gate placed near the corresponding arm of the interferometer.
According to a frequently used single-particle picture, 2 the electron edge states propagate as plane waves with the group velocity v_F at the Fermi level. They are transmitted through the MZI (see Fig. 1) at the left and right QPCs with amplitudes t_L and t_R, respectively. In the case of low transmission, the two amplitudes add so that the total transmission probability oscillates as a function of the AB phase ϕ_AB and bias ∆µ. The visibility of the oscillations of the differential conductance G ≡ dI/d∆µ is defined as the ratio (G_max − G_min)/(G_max + G_min). The Landauer-Büttiker formula 10 applied to the differential conductance then expresses the visibility and the AB phase shift in terms of the QPC amplitudes and the length difference ∆L between the two paths of the MZI. Thus we arrive at the result that in the absence of interaction the visibility is independent of bias, while the phase shift grows linearly with bias. The most remarkable observation made in experiments [5,6,7,8,9] is that the simple single-particle picture of edge states fails to correctly describe the AB effect in the MZI. Essentially, the results can be summarized as follows: The visibility of AB oscillations is not constant, but rather strongly depends on the bias ∆µ. It oscillates, showing a new energy scale, and may vanish at specific values of bias. While this behavior is observed in all experiments, the details are different and very important for understanding the underlying physics. Therefore, we group the experimental observations roughly in two parts, according to a specific important feature of the experimental set-up, and describe them below in detail.
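For orientation, the non-interacting prediction can be evaluated numerically. The sketch below assumes the textbook two-path interference formula for the MZI conductance (not necessarily the exact equations of this paper) and illustrates the statement above: the visibility does not depend on bias, while the interference phase grows linearly with ∆µ.

```python
# Minimal sketch of the non-interacting MZI, assuming the standard two-path formula
# G ∝ T_L*T_R + R_L*R_R + 2*sqrt(T_L*T_R*R_L*R_R)*cos(phi_AB + dmu*dL/(hbar*v_F)).
# This is a textbook illustration, not the specific result derived in the paper.
import numpy as np

def conductance(phi_ab, dmu, T_L=0.5, T_R=0.5, dL_over_hbar_vF=1.0):
    R_L, R_R = 1 - T_L, 1 - T_R
    phase = phi_ab + dmu * dL_over_hbar_vF
    return T_L * T_R + R_L * R_R + 2 * np.sqrt(T_L * T_R * R_L * R_R) * np.cos(phase)

phi = np.linspace(0, 2 * np.pi, 401)
for dmu in (0.0, 1.0, 2.0):                 # arbitrary bias values (units of hbar*v_F/dL)
    g = conductance(phi, dmu)
    visibility = (g.max() - g.min()) / (g.max() + g.min())
    print(f"dmu = {dmu}: visibility = {visibility:.3f}")   # identical for all biases
```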
A. Only one edge channel is biased
The first experimental situation that we wish to address is reported in Ref. [5]. In this experiment the bias is applied to the outer channel only. This situation is achieved by splitting incoming inner and outer channels with the help of an additional QPC, so that two channels originate in fact from different ohmic contacts. This allows a different bias to be applied separately to two channels at the same edge. The MZI in this situation is schematically shown in Fig. 2 for the regimes of weak tunneling T_ℓ → 0 (left panel), and of weak backscattering T_ℓ → 1 (right panel). This schematic is obtained from Fig. 1 by splitting each ohmic contact attached to the Corbino disk and deforming two interfering paths so that they run from left to right. After this procedure, the symmetry between two scattering regimes becomes obvious: In order to go from the set-up on the left panel of Fig. 2 to the one on the right panel, one needs to simply flip the interferometer vertically. This symmetry is important, and will be shown in Sec. IV to result in the symmetry between weak tunneling and weak backscattering regimes. The Ref. [5] discovered an unexpected AB effect which is inconsistent with the single-particle picture of edge channels. The following observations were reported: • Lobe-type structure in the dependence of the visibility of AB oscillations on the DC bias with almost equal widths of lobes. The visibility vanishes at specific values of the bias. This behavior persists for various fixed values of magnetic field and for various transparencies of QPCs; • The rigidity of the AB phase shift followed by sharp π-valued jumps at the points where the visibility vanishes; • The stability of both mentioned effects with respect to changes in the length of one of the interferometer paths.
The experiment [5] was theoretically analyzed in several recent works [11,12,13,14]. The Ref. [11] focuses on the ν = 1 case and suggests that the suppression of the visibility is due to the resonant interaction with the counterpropagating edge channel located near one of the arms of the interferometer. 15 At present, this idea seems to be a reasonable guess, as far as the dephasing at ν = 1 is concerned. However, the experiments [5] and [6] concentrate on the ν = 2 regime, where two edge channels coexist. These and new experiments, 7,8,9 where the counterpropagating edge channel has been removed, prompt a new theoretical analysis. The authors of the Ref. [12] consider a long-range Coulomb interaction at the edge and make an interesting prediction about the temperature dependence of the visibility. However, they are not able to propose an explanation of the lobe-type behavior of the visibility. The Refs. [13,14] suggest that dephasing in MZI is due to shot noise generated by the partition of the edge channel at the first QPC. While this idea may correctly capture a part of the physics at ν = 1, the drawback of this explanation is that the shot noise vanishes in weak tunneling and weak backscattering regimes, where the experiments nevertheless demonstrate strong dephasing. Moreover, the experiment which we discuss below illuminates the special role that the second inner edge channel at ν = 2 plays in dephasing.
The experiment [5] was theoretically analyzed in several recent works [11,12,13,14]. The Ref. [11] focuses on ν = 1 case and suggests that the suppression of the visibility is due to the resonant interaction with the counterpropagating edge channel located near one of the arms of the interferometer. 15 At present, this idea seems to be a reasonable guess, as far as the dephasing at ν = 1 is concerned. However, the experiments [5] and [6] concentrate on the ν = 2 regime, where two edge channels coexist. These and new experiments, 7,8,9 where the counterpropagating edge channel has been removed, prompt a new theoretical analysis. The authors of the Ref. [12] consider a long-range Coulomb interaction at the edge and make an interesting prediction about the temperature dependence of the visibility. However, they are not able to propose an explanation of the lobe-type behavior of the visibility. The Refs. [13,14] suggest that dephasing in MZI is due to shot noise generated by the partition of the edge channel at the first QPC. While this idea may correctly capture a part of the physics at ν = 1, the drawback of this explanation is that the shot noise vanishes in weak tunneling and weak backscattering regimes, where the experiments nevertheless demonstrate strong dephasing. Moreover, the experiment which we discuss below illuminates the special role that the second inner edge channel at ν = 2 plays in dephasing.
B. Two edge channels are biased
In contrast to the work [5], the experimental set-up in Ref. [6] does not contain an additional QPC that would allow one to split the two edge channels at ν = 2 and to apply potentials to each of them separately. Therefore, in Ref. [6] two edge channels that originate from the same ohmic contact are biased by the same potential difference ∆µ. For the convenience of the following analysis we again unfold the MZI, as shown in Fig. 3 for the set-up of Ref. [6]: the two incoming edge channels of the MZI are biased with the same potential difference ∆µ, while the other channels are grounded; the left panel shows the weak tunneling regime and the right panel the weak backscattering regime. Now it is easy to see the asymmetry between the regimes of weak tunneling and of weak backscattering. In the first regime (left panel) the two channels on the upper arm of the interferometer are equally biased with the potential difference ∆µ. The situation is different in the second regime (right panel): the inner channel is biased on the upper arm of the interferometer, while the outer channel is biased on the lower arm. We believe that this asymmetry is responsible for the entirely different behavior of the visibility of AB oscillations in the experiment [6]: • Lobe-type structure with the visibility vanishing at certain values of bias is observed only in the weak tunneling regime. The central lobe is approximately two times wider than the side lobes. In the weak backscattering regime the visibility shows oscillations and decays as a function of the bias; • No phase rigidity is found at any transparency of the QPCs; • An asymmetry in the visibility as a function of the transparency of the first QPC is observed. In particular, the visibility always decays as a function of the bias in the regime of weak tunneling. In contrast, in the regime of weak backscattering the visibility first grows around zero bias, and only then it decays.
It is the last observation which is very important. It indicates that charging effects induced by the different biasing of edge channels may be responsible for the differences in the results of experiments [5] and [6]. This idea seems to agree with the conclusion of the authors 16 of the experiment [7]. In this paper we develop this idea and propose a simple model that is capable of explaining on a single basis all the experimental observations described above. Namely, we assume a strong (Coulomb) interaction between two edge channels that belong to the same quantum Hall edge. The interaction effect is complex: first of all, it leads to charging of edge channels and induces the experimentally observed phase shifts. Second, the interaction is partially screened, which leads to the emergence of a soft mode and of a new low energy scale associated with it. The width of the lobes in the visibility and the temperature dependence are determined by this energy scale. Finally, the interaction is responsible for the decay of coherence at large bias.
Further details of our model are given in Sec. II, while in the Appendix A we check the consistency of the model. In Sec. III we express the visibility of AB oscillations in terms of electronic correlation functions, and derive these functions in the Appendix B. In Sec. IV we present a detailed comparison of our results with the experimental observations. Finally, in Sec. V we briefly summarize our results.
II. MODEL OF MACH-ZEHNDER INTERFEROMETER
Before we proceed with the mathematical formulation of the model we wish to stress the following points. The experimentally found new energy scale 5,6,7,8,9 is very small. For instance, the width of the lobes in the visibility is approximately 20 µV. We show below that this energy is inversely proportional to the size of the MZI, which is a few micrometers. Thus it is much smaller than any other energy scale associated, e.g., with the formation of compressible strips. 17 Therefore, we use an effective model 18 appropriate for the description of the low-energy physics of quantum Hall edge excitations. Namely, we consider the inner and outer edge channels at ν = 2 as two chiral boson fields and introduce the Luttinger-type Hamiltonian 3,19 to describe the equilibrium state. Second, we introduce the density-density interaction, which is known to be irrelevant in the low-energy limit. 18 This fact has no influence on the physics that we discuss below, because we focus on the processes at finite energy and length scale, which take place inside the MZI.
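As a rough consistency check of these orders of magnitude, the snippet below converts an energy scale of the form ħv/L into micro-electronvolts; the arm length and edge-mode velocity used here are assumed, illustrative values rather than numbers taken from the experiments.

```python
# Order-of-magnitude check: energy scale ~ hbar * v / L for a micrometer-sized MZI.
hbar = 1.0545718e-34   # J*s
e = 1.602176634e-19    # C

L = 10e-6              # assumed interferometer arm length, 10 micrometers
v = 3e5                # assumed edge-mode (plasmon) velocity in m/s, illustrative

epsilon_joule = hbar * v / L
epsilon_microeV = epsilon_joule / e * 1e6
print(f"hbar*v/L ~ {epsilon_microeV:.1f} micro-eV")   # of order 20 micro-eV here
```

A bias of about 20 µV corresponds to an energy of about 20 µeV, so a few-micrometer interferometer with an edge-mode velocity of a few 10^5 m/s is indeed consistent with the lobe width quoted above.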
A. Fields and Hamiltonian
We assume that at filling factor ν = 2 there are two edge channels at each edge of the quantum Hall system and two chiral fermions associated with them, denoted as ψ_αj(x), α = 1, 2 and j = U, D. Here the subscript 1 corresponds to the fermion on the outer channel, and 2 to the fermion on the inner channel (see Fig. 4), while the index j stands for the upper and lower arms of the interferometer. The total Hamiltonian of the interferometer contains a single-particle term H_0, an interaction part H_int, and the tunneling Hamiltonian H_T. The single-particle Hamiltonian describes free chiral fermions: 18 where v_F is the Fermi velocity, which is assumed to be the same for each edge channel. This assumption is not critical, because, as we will see below, the Fermi velocity is strongly renormalized by the interaction. We postpone for a while a detailed discussion of the interaction and at the moment write the interaction Hamiltonian in terms of local densities ρ_αj in the following general form: Note that this effective Hamiltonian is not microscopically derived. However, the experiment indicates 16 that the interaction has a long-range Coulomb character and leads to charging effects at the edge. Below we show that once this assumption is made, it leads to a number of universalities in the MZI physics and correctly captures most of the experimental observations.
We have already mentioned in the introduction that the interference in the MZI originates from scattering processes at the QPCs. In the case when the interaction is strong, the scattering has to be assumed weak and treated perturbatively. Fortunately, this limitation does not detract from our theoretical approach, because neither the interference nor its suppression are necessarily weak in the case of weak scattering. Moreover, we would like to stress again that the most interesting physics takes place in the regimes of weak tunneling and of weak backscattering. Both regimes can be described by the tunneling Hamiltonian: where the tunneling amplitude connects the outer edge channels and transfers an electron from the lower arm to the upper arm of the MZI. It is worth mentioning already here that at low energies the electron tunneling is relevant and leads in fact to the ohmic behavior of the QPCs, in agreement with experiments. 5,6 The AB phase may now be included in the tunneling amplitudes via the relation t_R^* t_L = |t_R t_L| e^{iϕ_AB}.
B. Bosonization
In order to account for the strong interaction at the edge, we take advantage of the commonly used bosonization technique, 19 and represent the fermion operators in terms of chiral boson fields φ_αj: which satisfy the commutation relations [φ_αj(x), φ_αj(y)] = iπ sgn(x − y). The local density is obtained via point splitting, which gives the following expression: Applying point splitting to the single-particle Hamiltonian (4), we obtain where the interaction potential is simply shifted by the Fermi velocity. The crucial point is that now the Hamiltonian (10) for the quantum Hall edge is quadratic in the boson fields. Next, we quantize the fields by expressing them in terms of boson creation and annihilation operators, a†_αj(k) and a_αj(k), where the zero modes, ϕ_αj and p_αj, satisfy the commutation relations [p_αj, ϕ_αj] = i/W, and W is the total size of the system. At the end of the calculation we take the thermodynamic limit W → ∞, so that W drops from the final result. Then the edge Hamiltonian acquires the following form: The vacuum for collective excitations is defined as a_αj(k)|0⟩ = 0. Special care has to be taken with the zero modes because, as we show in Sec. IV, the zero modes determine charging effects and phase shifts, which are not small. From the definitions (9) and (12) it is clear that the zero mode p_αj has the meaning of a homogeneous density at the edge channel (α, j). Therefore, we define "vacuum charges" Q_αj, which are in fact the charge densities at the edge channels generated by the bias. The energy E_0 of the ground state, defined as H|0⟩ = E_0|0⟩, is then given by Since edge excitations propagate along the equipotential lines, the edge channels can be considered as metallic surfaces. We can therefore apply the well known electrostatic relation 22 for the potentials ∆µ_αj to the edge channels: Thus the quantity V_αβ(0) is the inverse capacitance matrix. 20 Using now Eqs. (13), (14), and the commutation relation for the zero modes, we arrive at the following important result for the time evolution of the zero modes. We finally note that the model of the MZI formulated here is consistent with the effective theory of the quantum Hall state 18 at ν = 2. This is demonstrated in the Appendix A, where we check the locality of the electron operators, their fermionic commutation relations, and the gauge invariance of our model.
C. Strong interaction limit and the universality
It is quite natural to assume that the edge channels interact via the Coulomb potential. It has a long-range character and the logarithmic dispersion V_αβ(k) ∝ log(ka). Here a is the shortest important length scale, e.g. the width of the compressible stripes 17 or the inter-channel distance. The dispersion is important in the case ν = 1, because it generates dephasing at the homogeneous edge. 12 However, taken alone the dispersion is not able to explain the lobe-type behavior of the visibility. More important is the fact that the logarithm may become relatively large when cut off at the relevant long distances.
We therefore further assume that the Coulomb interaction is screened at a distance D such that L_U, L_D ≫ D ≫ a, where L_U and L_D are the lengths of the arms of the MZI. In fact, some sort of screening may exist in MZIs. For instance, in the experiments [5,6,7,8,9] the cutoff length D may be the distance to the back gate, or to the massive metallic air bridge. There are several consequences of screening on the intermediate distances D. First of all, it allows one to neglect the interaction between the two arms of the interferometer (see however the discussion in Sec. IV). Second, at low energies we can neglect the logarithmic dispersion and write so that for the Fourier transform we obtain: And finally, since the inner and outer edge channels are located at a distance of order a ≪ D from each other, their mutual interaction is barely reduced compared to the intra-channel one. Therefore, one can parametrize the interaction matrix as follows where the new large parameter is the most important consequence of the long-range character of the Coulomb interaction. Indeed, we now diagonalize the interaction, V = S†ΛS, with the result Thus we find that the Coulomb interaction at the ν = 2 edge leads to the separation of the spectrum into a fast (charge) mode with speed u and a slow (dipole) mode with speed v. In Sec. IV we show that the lobe structure in the visibility is determined by the slow mode, while the fast mode is not excited at the relevant low energies. That is why at ν = 2 the logarithmic dispersion of the Coulomb interaction is not important for explaining the lobes.
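The following sketch illustrates this charge/dipole separation on an assumed 2×2 velocity matrix with equal diagonal entries and a nearly equal off-diagonal entry, which is the structure suggested by the discussion above; the numerical values are placeholders, not parameters from the paper. Diagonalizing it yields one fast and one slow eigenmode, and the orthogonal matrix distributes the outer-channel weight equally between the two.

```python
import numpy as np

# Assumed velocity matrix for the two channels of one edge (units of v_F = 1):
# equal diagonal entries and a nearly equal off-diagonal entry, mimicking a
# long-range interaction that barely distinguishes inner and outer channels.
v_F = 1.0
V0, V1 = 20.0, 19.0                    # placeholder intra- and inter-channel couplings
M = np.array([[v_F + V0, V1],
              [V1, v_F + V0]])

speeds, S = np.linalg.eigh(M)          # eigen-decomposition M = S diag(speeds) S^T
v_slow, u_fast = speeds                # dipole (slow) and charge (fast) mode speeds
weights = S[0, :] ** 2                 # couplings of the outer channel to each mode

print(u_fast, v_slow)                  # e.g. 40.0 and 2.0: u >> v
print(weights)                         # [0.5, 0.5]: equal coupling to both modes
```

The equal 1/2 weights follow from the matrix having identical diagonal entries, so they do not depend on the particular placeholder numbers chosen here.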
Moreover, the Coulomb character of the interaction leads to the following universality. We show later that the coupling of electrons in the outer channel to the fast and slow modes is determined by the parameters s_α = |S_1α|², which satisfy the sum rule s_1 + s_2 = 1 that follows from the unitarity of the matrix S. For the special choice (19) of the interaction matrix the coupling constants are equal, which has an important consequence, as we show in Sec. III. Note that in the limit of strong long-range interaction, u ≫ v_F, the result (23) is stable against variations of the bare Fermi velocity v_F and is not sensitive to the physics of the edge channels at distances of order a, leading to the universality of dephasing in the MZI. Finally, we partially diagonalize the Hamiltonian by introducing new boson operators via a_αj(k) = Σ_β S_αβ b_βj(k). Using equations (13), (19), and (21), we obtain a new Hamiltonian for the quantum Hall edge, which completes our discussion of the model. In the Appendix B we use Eqs. (8), (12), (17), and (24) to derive the electronic correlation functions.
III. VISIBILITY AND PHASE SHIFT
In this section we consider the transport through the MZIs shown in Figs. 1-3 and evaluate the visibility of AB oscillations. Both regimes, of weak tunneling and of weak backscattering, can be considered on the same basis by applying the tunneling Hamiltonian approach. 21 In the derivation presented below we follow the Ref. [11]. We introduce the tunneling current operator Î = Ṅ_D = i[H_T, N_D], which differs between the two regimes only by a sign. Here N_D = ∫dx ψ†_1D ψ_1D is the number of electrons on the outer edge channel of the lower arm of the interferometer. Then we use Eqs. (6) and (7) to write We evaluate the average current to lowest order in tunneling and obtain where the average is taken with respect to the ground state of the quantum Hall edges. Finite temperature effects will be considered separately in Sec. IV C. It is easy to see that the average current can be written as a sum of four terms: where I_LL and I_RR are the direct currents at the left and right QPC, respectively, and I_LR + I_RL is the interference contribution. In our model there is no interaction between the upper and lower arms of the MZI, therefore the correlation function in (27) splits into the product of two single-particle correlators: We note that the operator ψ†_1j applied to the ground state creates a quasi-particle above the Fermi level (with positive energy), while the operator ψ_1j creates a hole below the Fermi level (with negative energy). This implies that in the first term in (28) all the singularities are shifted to the upper half plane of the complex variable t, and in the second term the singularities are shifted to the lower half plane. This means that only one term contributes, depending on the sign of the bias ∆µ, which determines the direction of the current. Apart from this, there is no difference between the two terms. Therefore, we choose, e.g., the first term, shift the contour of integration C to the lower half plane, and rewrite the expression (28) as follows: where the correlators are defined in such a way that they have singularities on the real axis of t.
The correlators are evaluated in Appendix B using the bosonization technique, with the result One remarkable fact we prove below is that for x_ℓ = x_ℓ′ the only role of the interaction is to renormalize the density of states at the Fermi level, n_F = 1/(u^{s_1} v^{s_2}). This immediately follows from the sum rule (22). Therefore, for the direct currents we readily obtain i.e. the QPCs are in the ohmic regime, in agreement with the experimental observations. In order to present the visibility in a compact form, we introduce the electron correlation functions of an isolated edge, normalized to the density of states: These functions contain all the important information about charging effects (the phase shift generated by the zero modes) and about dephasing, determined by the singularities. Next, adding all the terms, I = Σ_{ℓℓ′} I_{ℓℓ′}, we find the differential conductance G = dI/d∆µ: where the time shift ∆t is the charging effect, which depends on the bias scheme and will be calculated in Sec. IV for the particular experimental situations. It is important to note that in the weak backscattering regime (see Figs. 2 and 3) tunneling occurs from the lower arm of the interferometer, therefore one should exchange the indices U and D.
The first term in Eq. (33) is the contribution of the direct incoherent currents through the QPCs, while the second term is the interference contribution, which oscillates with the magnetic field. Therefore, the visibility of AB oscillations (1) in the differential conductance G and the AB phase shift take the following form where the visibility at zero bias V_G(0) is given by Eq. (2) for a non-interacting system, while all the interaction effects enter via the dimensionless Fourier integral with the contour C shifted to the lower half plane of the variable t. This formula, together with Eqs. (32) and (34), is one of the central results and will serve as a starting point for the analysis of experiments. However, before we proceed with detailed explanations of the experiments, we would like to quickly consider two examples. The first example, a non-interacting system, serves merely as a test for our theory. In this case, using the relevant parameters, the vacuum charges Q_1U = ∆µ/v_F, Q_1D = 0, the group velocities u = v = v_F, and the coupling constants s_1 = 1, s_2 = 0, we obtain the correlators explicitly, and the time shift ∆t = L_U/v_F follows from Eq. (34). We substitute all these results into Eq. (36) and finally obtain: so that the visibility is |I_AB| = 1, and the phase shift is ∆ϕ_AB = ∆µ∆L/v_F, in agreement with the Eq. (2). Next, we consider a more interesting situation when the interferometer is in the weak tunneling regime (see Sec. I), and one of its arms, e.g. the upper arm of the interferometer, is much shorter than the other, L_U ≪ L_D. Then the properties of the function I_AB are determined by excitations at the lower arm of the MZI at energies of order v/L_D. At these energies the electronic correlator in the upper arm behaves as a correlator of free fermions: G_U(t) = 1/t. Therefore, for the visibility we obtain i.e. it is simply given by the Fourier transform of the electron correlation function at the edge. This leads to an interesting idea to use a strongly asymmetric MZI for the spectroscopy of excitations at the edge of a quantum Hall system. We now use the opportunity to analyze the role of the coupling coefficients s_α in this simple situation. The absolute value of the Fourier transform of the function G_D is shown in Fig. 6. We see that s_1 = s_2 = 1/2 is the special point. In this case, and taking the limit u → ∞, the Fourier transform gives |I_AB| = |J_0(∆µL_D/2v)|, where J_0 is the zero-order Bessel function. Thus the lobes in the visibility of AB oscillations are well resolved only in the limit of strong long-range interaction. Therefore, an asymmetric MZI can be used to test the character of the interaction. From now on we assume that s_1 = s_2 = 1/2.
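A short numerical sketch of this strongly asymmetric limit is given below. It evaluates |J_0(∆µ L_D/2v)| on a grid of bias values (in units with ħ = 1 and with placeholder values of L_D and v) and locates the biases at which the visibility vanishes.

```python
import numpy as np
from scipy.special import j0

# Strongly asymmetric MZI, strong-interaction limit: |I_AB| = |J0(dmu * L_D / (2 v))|.
# Units with hbar = 1; L_D and v are placeholder values.
L_D, v = 10.0, 1.0
dmu = np.linspace(0.0, 3.0, 3001)

amplitude = j0(dmu * L_D / (2.0 * v))
visibility = np.abs(amplitude)

# The visibility vanishes at the zeros of J0 (the first zero is near 2.405),
# i.e. at dmu ~ 2 * v * 2.405 / L_D ~ 0.48 for these placeholder numbers.
sign_changes = np.where(np.diff(np.sign(amplitude)))[0]
print(dmu[sign_changes][:3])    # first few zeros of the visibility
```

The Bessel lobes decay only slowly with bias, so this limit also illustrates why well-separated fast and slow modes are needed for the lobes to be visible at all.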
IV. DISCUSSION OF EXPERIMENTS
In this section we present a detailed analysis of the experiments described in the introduction. It is convenient to rewrite Eq. (36) in a slightly different form by using Eq. (32) with s_1 = s_2 = 1/2 and shifting the time integral: where v_1 = u and v_2 = v, and the contour of integration C goes around the branch cuts (see, e.g., Fig. 7). These branch cuts, which replace the single-particle poles of the correlation functions for free electrons, originate from the interaction. On a mathematical level, they are the main source of the suppression of the coherence, because at large argument ∆µ the Fourier transform (36) of a relatively smooth function quickly decays. We will use this fact for the analysis of dephasing. Physically, when an electron tunnels, it excites two collective modes associated with the two edge channels, and they carry away a part of the phase information.
On the other hand, charging effects reflected in the parameter ∆t lead to a bias-dependent shift of the AB phase, ∆φ_AB. As follows from Eq. (35), the phase slips by π at the points where the visibility vanishes. Away from these points, in particular at zero bias, the phase shift is a smooth function of the bias. Therefore, it is interesting to consider the value ∂_∆µ ∆φ_AB at ∆µ = 0 where |I_AB| = 1, which can be found from the expansion I_AB = |I_AB|e^{i∆φ_AB} = 1 + i(∂_∆µ ∆φ_AB)∆µ in the right hand side of Eq. (39). We find it exactly: where the first term t_0 is the contribution of the quantum mechanical phase accumulated due to the propagation of an electron along the MZI. The second term, found from Eq. (34), is the contribution of the charge accumulated at the arms of the MZI due to the Coulomb interaction between the edge channels. Partial cancellation of the two effects leads to the phase rigidity found in Ref. [5]. This effect is discussed below. Finally, all the experiments found that the visibility V_G oscillates as a function of the bias ∆µ. Our model reproduces such oscillations and helps to understand their origin. Indeed, two well defined collective modes with speeds u and v lead to the formation of four branch points in the integral (39), which give relatively slowly decaying contributions. These contributions come with different bias-dependent phase factors, so that the function I_AB(∆µ) oscillates. The period of oscillations is determined by the smallest energy scale ǫ, which is given by the total size of the branch cut and can be estimated as in Eq. (41).
In the case u ≫ v, the parameter u cancels, so that the period of oscillations is determined by the slowest mode and by the size of the interferometer. We would like to emphasize that oscillations in the visibility appear only when at least two modes are relatively well resolved. Our model predicts a power-law decay of the visibility. In the experiments 5,6 the visibility seems to decay faster. There might be several reasons for this, e.g. low frequency fluctuations in the electrical circuit, 23,24 or the electromagnetic radiation. 25 Intrinsic reasons for dephasing deserve a separate consideration. We have already mentioned that the dispersion of the Coulomb interaction, neglected here, may lead to strong dephasing. 12 However, it affects only the fast mode, while the slow mode contribution to the integral (39) maintains the phase coherence. Therefore, taken alone, the dispersion of the Coulomb interaction is not able to explain strong dephasing at ν = 2. The experiments seem to indicate that the slow mode is also dispersive, which may be a result of strong disorder at the edge, or, more interestingly, of the intrinsic structure of each edge channel. 26 Having stressed this point, we now wish to focus solely on the phase shift and the oscillations in the visibility. We use the fact that u ≫ v and simplify the integral (39) by neglecting terms containing 1/u: This expression contains one pole and one branch cut (see Fig. 7). Therefore, it can be expressed in terms of the zero-order Bessel function J_0. After elementary steps we find: where t_0 = (L_U + L_D)/2v, and ∆L = L_D − L_U. We now proceed with the analysis of the experiments discussed in the introduction.
A. Only one edge channel is biased
We start with the experiment [5]. Using Eqs. (17) and (19) we find In the weak tunneling regime, shown on the left panel of Fig. 2, only the outer channel in the upper arm of the interferometer is biased, ∆µ_1U = ∆µ and ∆µ_2U = ∆µ_αD = 0.
Therefore we obtain Then Eq. (34) gives ∆t = L_U(u + v)/2uv. Substituting ∆t into Eq. (40), we find that at zero bias Therefore, for the symmetric interferometer, ∆L = 0, the phase shift is independent of the bias, away from the phase slip points where the visibility vanishes. This may explain the phenomenon of phase rigidity observed in Ref. [5], if we assume that the interferometer is almost symmetric in this experiment. Indeed, the period of oscillations of the visibility is given by the energy scale (41). Therefore, the overall phase shift between zeros of the visibility can be estimated as ∆L/(L_U + L_D) ≪ 1. The integral (42), evaluated numerically, is plotted in Fig. 8 for two values of the asymmetry, L_D/L_U = 1.15 and 1.35. Our main focus is the first few oscillations of the visibility (upper panel), which reveal charging effects. We would like to emphasize several points. First, the width of the central lobe is equal to the width of the side lobes. This is because in the case of the symmetric interferometer, L_U = L_D = L, the branch cut shrinks to a pole (see Fig. 7), so that the two poles are at t = ±L/2v. Then Eq. (42) gives |I_AB| = |cos(∆µL/2v)|. Second, a small variation of the length L_D of the lower arm has only a minor effect on the positions of the lobes, while the amplitude of oscillations is considerably suppressed. Finally, the lower panel of Fig. 8 illustrates the phenomenon of phase rigidity for an almost symmetric interferometer, L_D = 1.15L_U. The AB phase shift changes slowly inside the lobes and slips by π at the zeros of the visibility. All these observations are in agreement with the experiment [5].
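The cosine formula quoted above already contains both experimental features for a symmetric interferometer: lobes of equal width and an AB phase that stays flat inside each lobe and jumps by π at the zeros. The sketch below (placeholder units, ħ = 1) simply reads these off from I_AB = cos(∆µL/2v).

```python
import numpy as np

# Symmetric MZI (L_U = L_D = L): I_AB = cos(dmu * L / (2 v)); the visibility is
# |cos(...)| with lobes of equal width, and the AB phase is 0 or pi inside a lobe.
L, v = 10.0, 1.0                 # placeholder length and slow-mode velocity (hbar = 1)
dmu = np.linspace(0.0, 2.0, 2001)

I_AB = np.cos(dmu * L / (2.0 * v))
visibility = np.abs(I_AB)
phase_shift = np.where(I_AB >= 0.0, 0.0, np.pi)       # rigid phase with pi slips

# Lobe boundaries (zeros of the visibility) are equally spaced, by 2*pi*v/L ~ 0.63 here.
zeros = dmu[np.where(np.diff(np.sign(I_AB)))[0]]
print(np.round(np.diff(zeros), 3))                    # equal lobe widths
print(phase_shift[0], phase_shift[-1], visibility.max())
```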
To conclude this section we would like to remark that the visibility in the regime of weak backscattering (see the right panel in Fig. 2) can be obtained by simply interchanging L_U and L_D. This is because in our model the charging effects are important only in the part of the MZI between the two QPCs, where they induce phase shifts. For the same reason, the transparency of the second QPC does not affect the visibility. 6 In the next section we show that the symmetry between weak tunneling and weak backscattering is broken if the bias is applied to two edge channels.
B. Two edge channels are biased
Next we analyze the experiment [6]. The details of this experiment are discussed in the introduction. In the weak tunneling regime (see the left panel of Fig. 3) two edge channels are biased and almost completely reflected at the first QPC. Therefore, Eq. (44) gives and from Eq. (34) we find ∆t = L_U/u. Fig. 9 shows the analytic structure of the Fourier integral (39) in the case when two edge channels are biased and in the weak tunneling regime (see Fig. 3): the left panel shows the branch cuts of the two single-particle correlation functions, while in the right panel the limit u ≫ v is taken; the branch cut extends from t = L_U/v to t = L_D/v. Taking now the strong interaction limit, u ≫ v, we find that ∆t → 0. Therefore, in the integral (42) the pole corresponding to the fast mode cancels (the analytical structure of the integral is shown in Fig. 9), so that the visibility can be found exactly: where ∆L = L_D − L_U. The visibility of AB oscillations, given by the absolute value of the integral (48), is shown in Fig. 11. One can see that, in contrast to the case when only one channel is biased, 5 the central lobe is approximately two times wider than the side lobes, in agreement with the experimental observation. 6 Moreover, the width of the lobes is determined by the new energy scale, ǫ′ = v/∆L. Finally, inside the lobes the phase shift ∆φ_AB = ∆µ(L_D + L_U)/2v always grows linearly with bias, so no phase rigidity should be observed. We now switch to the regime of weak backscattering (see the right panel of Fig. 3). In the upper arm only the inner channel is biased, while only the outer channel is biased in the lower arm of the interferometer. Using again Eq. (44), we obtain Then from Eq. (34) we find that ∆t = (L_D + L_U)/2v + (L_U − L_D)/2u. The analytical structure of the integral (42) is shown in Fig. 10. It looks somewhat similar to the structure shown in Fig. 7 for the case of a single biased channel. However, the principal difference between these two cases is that the singularities in Fig. 10 are strongly asymmetric with respect to t → −t. In order to see a consequence of this fact we take the limit u ≫ v and write ∆t = (L_U + L_D)/2v. For the phase shift (40) at small bias we obtain ∂∆φ_AB/∂∆µ = −(L_U + L_D)/2v. Therefore, in the weak backscattering regime and when two channels are biased no phase rigidity can be observed. The most remarkable new feature of the visibility (see Fig. 11) is that, in contrast to the cases considered above, it grows as a function of bias around ∆µ = 0, in full agreement with the experiment [6]. It may even exceed the value 1 if the two QPCs have approximately the same transparencies, so that V_G(0) is close to 1. This behavior may look surprising, because it is expected that dephasing should suppress the visibility of AB oscillations below its maximum value (2) for a non-interacting coherent system. However, one should keep in mind that according to our model the oscillations of the visibility as a function of bias originate from charging effects which are caused by the Coulomb interaction between the edge channels. Therefore, simple arguments which rely on the Landauer formula for the conductance do not apply. Thus in the experimental set-up where two edge channels are biased 6 there is a strong asymmetry between the weak tunneling and weak backscattering regimes, which is easily seen in Fig. 11. In order to clarify the physical origin of this effect, we evaluate the integral (42) in the limit of strong interaction u ≫ v and for a symmetric MZI, L_U = L_D = L.
Then the branch cut shrinks to a pole, and we obtain the following simple result: where t_0 = L/v is the time of propagation of the slow mode between the two QPCs. We find that, quite similarly to the result for the phase shift (40), here we also have a competition between two terms: ∆t, given by Eq. (34), and the flight time t_0. Whether the visibility grows or decays depends on the sign of the second term in Eq. (50). In the experiment [5] ∆t = L/2v = t_0/2, so that the visibility always decays. On the other hand, the experiment [6] represents an intermediate case. In the regime of weak tunneling we have ∆t = 0, while in the regime of weak backscattering ∆t = t_0, so that in both regimes the visibility is constant for the symmetric MZI. Therefore, in Fig. 11 we had to consider a strongly asymmetric interferometer with L_D = 1.8L_U. Note, however, that once ∆t slightly exceeds t_0, the visibility easily becomes a growing function at small bias. This is exactly what happens if we relax our assumption of good screening of the interaction and allow the opposite arms of the interferometer to interact. Indeed, in order to remain electro-neutral the system compensates such an interaction by further decreasing the charge Q_1U below the value given by Eq. (49), so that now ∆t > t_0. We have checked numerically that this assumption alone gives rise to a good agreement with the experiment [6] even in the case of a symmetric interferometer.
C. Effects of finite temperature
The temperature dependence of the visibility of AB oscillations in the MZI has been recently measured in Ref. [8]. The most interesting fact is that the visibility scales exponentially with the total size of the interferometer V G ∝ e −L/lϕ . This is in obvious contradiction with the prediction V G ∝ e −∆L/lϕ for free electrons, 28 where dephasing is due to energy averaging. Moreover, the coherence length scales with temperature as l ϕ ∝ 1/T , which does not agree with the prediction based on Luttinger liquid model for ν = 1. 12 Here we show that the experimentally observed temperature dependence of the visibility can be explained within our model.
Indeed, according to the results of Sec. III, at high temperatures, neglecting charging effects which merely influence the prefactor, the visibility can be estimated as Here the correlators are given by the high-temperature asymptotic form (B7), where X_α has to be replaced with L_j − v_α t. Then in the noninteracting case (i.e. for s_1 = 1, s_2 = 0, and v_1 = v_F) we obtain the result which agrees with the prediction in Ref. [28]. On the other hand, in our model s_1 = s_2 = 1/2, so we obtain where the dephasing length is defined in Eq. (53).
Thus we find that the visibility scales exponentially with the total size of the interferometer, and the dephasing length scales as l ϕ ∝ 1/T , in full agreement with the experiment [8]. Two remarks are in order. According to Eqs. (52), (53), and to the results of the Sec. III, the temperature dependence and the period of oscillations of the visibility are determined by the same energy scale ǫ, given by Eq. (41). On the other hand, the decay of the visibility as a function of the bias ∆µ at zero temperature is determined by a larger energy scale ǫ ′ . It is equal to ǫ ′ = v/∆L, or, in case of the symmetric interferometer, depends on the dispersion of the slow mode. The existence of two distinct energy scales, which originate from the separation of the spectrum of edge excitations on slow and fast modes, is one of the most important predictions of our theory.
Second, we note that v and u are the group velocities of the collective dipole and charge excitations, respectively. Very roughly, they are determined by the spatial separation between edge modes a, and by the distance to the back gate D. On the ν = 2 Hall plateau, the separation a grows with the magnetic field, because the inner edge channel moves away from the edge of 2DEG until it disappears in the end of the plateau. Therefore, in contrast to the bare Fermi velocity, the velocity of the slow mode increases with the magnetic field. This may explain the non-monotonic behavior of l ϕ observed in Ref. [8]. Indeed, according to Eq. (53) the decoherence length first increases with the magnetic field starting from the value l ϕ = v/πT . Then it reaches the maximum value at v ≈ u and goes down to the value l ϕ ≈ u/πT on the plateau ν = 1.
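To visualize the difference between the two exponential laws discussed in this subsection, the sketch below compares exp(−L/l_φ) with exp(−∆L/l_φ) for an almost symmetric interferometer. The form l_φ = v/(πT) is used only as a representative assumption, motivated by the limiting value quoted above; units with ħ = k_B = 1 and placeholder lengths are used.

```python
import numpy as np

# Contrast the two predictions for thermal dephasing in an almost symmetric MZI:
# scaling with the total size L (this model) vs. with the imbalance dL (free fermions).
v = 1.0                          # slow-mode velocity (placeholder, hbar = k_B = 1)
L_U, L_D = 10.0, 11.0            # almost symmetric arms (placeholders)
L, dL = L_U + L_D, L_D - L_U

T = np.linspace(0.01, 1.0, 100)
l_phi = v / (np.pi * T)          # assumed representative dephasing length ~ 1/T

vis_total_size = np.exp(-L / l_phi)     # decays quickly as T grows
vis_imbalance = np.exp(-dL / l_phi)     # stays close to 1 for dL << L

print(vis_total_size[-1], vis_imbalance[-1])   # at T = 1: ~e^{-21*pi} vs ~e^{-pi}
```

For a nearly symmetric interferometer the two laws differ by many orders of magnitude, which is why the experimental observation of scaling with the total size is such a sharp test of the model.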
V. CONCLUSION
Earlier theoretical works 24,25,28 on dephasing in the MZI predicted a smooth decay of the visibility of AB oscillations as a function of temperature and voltage bias. Therefore, when the Ref. [5] reported unusual oscillations and lobes in the visibility of AB oscillations as a function of bias, this was considered a great puzzle and attracted considerable theoretical attention. One of us suggested 11 a first explanation that is based on the long-range Coulomb interaction between counterpropagating edge states, which leads to resonant scattering of plasmons. Although this phenomenon may be encountered in a number of experimental situations, new experiments 6,7,8,9 unambiguously pointed to physics related to the intrinsic structure of the quantum Hall edge.
In the present paper we focus on the intrinsic properties of the edge and propose a simple model which is able to explain almost every detail of existing experiments.
The key ingredient of our theory is the assumption that the two chiral channels at the edge of the ν = 2 electron system interact via the long-range Coulomb potential. This leads to a number of universalities, in particular, to the separation of the spectrum of edge excitations into slow and fast modes (plasmons), and to equal coupling of electrons to both modes. When electrons scatter off the QPCs, which play the role of beam splitters in the electronic MZI, they excite plasmons, depending on the energy provided by the voltage bias. The plasmons carry away the electronic phase information, which leads to the decay of the visibility of AB oscillations as a function of bias.
The remarkable property of our model is that at zero temperature the phase information emitted at the first QPC can be partially recollected at the second QPC. This leads to oscillations and lobes in the visibility which can be interpreted as a size effect. The new energy scale in these oscillations, associated with the total size of the MZI and with the slow mode, also determines the temperature dependence of the visibility.
Importantly, within the framework of the same simple model we are able to explain a variety of ways in which the interaction effects manifest themselves in different experiments. 5,6,7,8,9 This includes the lobe-type structure observed in Refs. [5,6], the phase rigidity that was found only in Ref. [5], and the growing visibility and the asymmetry of the AB effect discovered in Ref. [6]. All these phenomena can be interpreted as charging effects. Indeed, edge channels in quantum Hall systems move along the equipotential lines and can be regarded as one-dimensional metals. Therefore, they accumulate ground-state charges, which lead to electronic phase shifts, depending on the bias scheme (see Figs. 2 and 3). These bias-dependent phases determine the overall AB phase shift and the specific behavior of the visibility as a function of the voltage bias.
Finally, experimentally observed decay of the visibility as a function of bias seems to be stronger than what our model predicts. We speculate that this effect cannot be explained by the long-range Coulomb interaction alone, and may originate from the dispersion of the slow mode due to disorder, or because of the intrinsic structure of each edge channel. 26 This point deserves a careful experimental and theoretical investigation. Moreover, it is interesting to find out how charging and size effects discussed here may influence the interferometry at other filling factors, where quite similar processes can take place. 29 Although the first theoretical steps have already been taken, 30,31,32 the experiment, as usual, may bring new surprises.
APPENDIX A: CONSISTENCY OF THE THEORY
Any model of the quantum Hall edge should satisfy the following physical conditions: 27 the existence of a local electron operator, the proper charge and statistics of the electron operators, and the cancellation of the gauge anomaly with the one in the bulk theory. The validity of almost all of them is obvious, but it is important to ascertain that there are no intrinsic inconsistencies or incompatibilities with the bulk physics in our theory. In the analysis presented below we simplify the notation by omitting some indices and assuming summation over repeated indices.
Check of locality of the electron operator (8) is obvious, and follows from the commutation rule for phase operators. The statistical phase θ of the operator ψ α is defined as: ψ α (x ′ )ψ α (x) = e iθ ψ α (x)ψ α (x ′ ).
Using the simple relation e^{iφ_α(x′)} e^{iφ_α(x)} = e^{−[φ_α(x′),φ_α(x)]} e^{iφ_α(x)} e^{iφ_α(x′)} and the commutation relation for the bosonic phase operators, we find that our electron operators (8) are fermions with the phase θ = π. Finally, the total charge at the quantum Hall edge is Therefore, using the relation (A1) we find which means that the fermion (8) in our model has an electron charge, e = 1. The only non-trivial question is whether the condition of the cancellation of the anomaly inflow imposes any constraint on the interaction matrix V_αβ. The answer is no. To show this we use the Chern-Simons action for the gauge field a_µ in the effective low-energy description of the quantum Hall bulk physics 27 at ν = 2: Here Ω is the region of the 2DEG where the quantum Hall liquid is present. After the gauge transformation a_αµ → a_αµ + ∂_µ λ_α the gauge anomaly (the total change of the action) acquires the following form: In our model the action for the edge excitations alone can be written as: The point is that for any interaction matrix V_αβ the coupling of the edge modes with the field a_µ may be written in the gauge-invariant form: where D_µ φ_α = ∂_µ φ_α − a_αµ. After the gauge transformation in the edge action, φ_α → φ_α + λ_α, the anomaly (A5) cancels in the total action S_CS + S(a).
APPENDIX B: CALCULATION OF ELECTRON CORRELATION FUNCTION
After we have introduced the model in Sec. II, the derivation of the electronic correlation function is relatively simple. We represent the electronic operators as ψ 1j ∝ e iφ1j and fix the normalization in the end of calculations. Using the gaussian character of the theory, we write i ψ † 1j (x, t)ψ 1j (0, 0) ∝ exp(i∆µ 1j t − 2πiQ 1j x)K j (x, t), | 2008-01-15T16:58:20.000Z | 2008-01-15T00:00:00.000 | {
"year": 2008,
"sha1": "e6c3702670b1055dcf9fbbffda718c53dafe0a61",
"oa_license": null,
"oa_url": "https://archive-ouverte.unige.ch/unige:36341/ATTACHMENT01",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e6c3702670b1055dcf9fbbffda718c53dafe0a61",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
247411449 | pes2o/s2orc | v3-fos-license | An optimal feedback control that minimizes the epidemic peak in the SIR model under a budget constraint
We give the explicit solution of the optimal control problem which consists in minimizing the epidemic peak in the SIR model when the control is an attenuation factor of the infectious rate, subject to an L1 budget constraint. The optimal strategy is given as a feedback control which consists of a singular arc maintaining the infected population at a constant level until the immunity threshold is reached, and no intervention outside the singular arc.
Introduction
Since the pioneer work of Kermack and McKendrick [16], the SIR model has been very popular in epidemiology, as the basic model for infectious diseases with direct transmission (see for instance [22,18] as introductions on the subject). It has regained great importance nowadays due to the recent coronavirus pandemic. In the face of a new pathogen, non-pharmaceutical interventions (such as reducing physical distance in the population) are often the first available means to reduce the propagation of the disease, but this comes at an economic and social price. In [20,19], the authors underline the need for control strategies for epidemic mitigation by "flattening the epidemic curve", rather than eradication of the disease, which might be too costly. Several works have applied optimal control theory, considering interventions as a control variable that reduces the effective transmission rate of the SIR model, and studied optimal strategies with criteria based on running and terminal costs over a fixed finite interval or an infinite horizon [4,7,8,15,21,5,9,12,17,6]. However, the highest peak of the epidemic appears to be a highly relevant criterion to be minimized (especially when there is hospital pressure to save individuals with severe forms of the infection). In [20], the authors studied the minimization of the peak of the infected population under the constraint that interventions occur on a single time interval of given duration. In the present work, we consider the same criterion, but under a budget constraint on the control (as an integral cost), which we believe to be more relevant as it takes into account the strength of the interventions and does not impose a priori a single time interval of given length for the interventions to take place (although we have been able to prove that the optimal solution consists indeed in having interventions on a single time interval, but with a control strategy different from the one obtained in [20]). Let us also mention a more recent work [1] that considers a kind of "dual" problem, which consists in minimizing an integral cost of the control under the constraint that the epidemic stays below a prescribed value, with an additional constraint on the state at a fixed time. The structure of the optimal strategy given by the authors in [1] is similar to the one we obtained, without having to fix a time horizon and a terminal constraint. All the cited works rely on numerical methods to provide the effective control. Here, we give an explicit analytical expression of the optimal control.
Let us stress that optimal control problems with a maximum cost are not in the usual Mayer, Lagrange or Bolza forms of optimal control theory [10], for which the necessary optimality conditions of Pontryagin's Principle apply, but fall into the class of optimal control with L∞ criterion, for which characterizations have been proposed in the literature mainly in terms of the value function (see for instance [3]). Although necessary optimality conditions and numerical procedures have been derived from these characterizations (see for instance [2,11]), these approaches remain quite involved and numerically heavy to apply to concrete problems. On the other hand, for minimal-time problems with planar dynamics linear with respect to the control variable, comparison tools based on the application of Green's Theorem have shown that it is possible to dispense with the use of necessary conditions to prove the optimality of a candidate solution [14]. Although our criterion is of a different nature, we show in the present work that it is also possible to implement this approach for our problem.
The paper is organized as follows. In the next section, we posit the problem of peak minimization to be studied. In Section 3, we define a class of feedback strategies that we call "NSN", and give some preliminary properties. Section 4 proves the existence of an NSN strategy which is optimal for our problem, and makes it explicit. Finally, Section 5 illustrates the optimal solutions with numerical simulations and discusses the optimal strategy.
Definitions and problem statement
We consider the SIR model where S, I and R denote respectively the proportions of susceptible, infected and recovered individuals in a population of constant size. The parameters β and γ are the transmission and recovery rates of the disease. The control u, which belongs to U := [0, 1], represents the effort of interventions that reduce the effective transmission rate. For simplicity, we shall drop the R dynamics in the following. Throughout the paper, we shall assume that the basic reproduction number R_0 := β/γ is larger than one, so that an epidemic outbreak may occur (Assumption 1).
For a positive initial condition (S(0), I(0)) = (S_0, I_0) with S_0 + I_0 ≤ 1, we consider the optimal control problem which consists in minimizing the epidemic peak under a budget constraint, where U denotes the set of measurable functions u(·) that take values in U and satisfy the L1 constraint ∫_0^{+∞} u(t) dt ≤ Q. Remark 1. From equations (1), one can easily check that the solution I(t) tends to zero as t tends to +∞, whatever the control u(·), so that the supremum of I(·) is attained. Equivalently, one can consider the extended dynamics.
with the initial condition (S(0), I(0), C(0)) = (S_0, I_0, Q) and the state constraint C(t) ≥ 0 for all t ≥ 0. A solution of (3) is admissible if the control u(·) takes its values in U and the condition (4) is fulfilled.
The NSN feedback
Let us denote the immunity threshold S_h := γ/β. Note that S(·) is a non-increasing function and that one has İ ≤ 0 when S ≤ S_h, whatever the control. If S_0 ≤ S_h, the maximum of I(·) is thus equal to I_0 for any control u(·), which solves the optimal control problem. We shall now consider the non-trivial case S_0 > S_h (Assumption 2).
Under this assumption, we thus know that for any admissible solution, the maximum of I(·) is reached for S ≥ S_h. For the control u = 0, one can easily check that the following property is fulfilled and the maximum of I(·) is then reached for the value I_h := I_0 + S_0 − S_h + S_h ln(S_h/S_0). We define the "NSN" (for null-singular-null) strategy as follows.
We denote by L(Ī) := ∫_0^{+∞} u_{ψĪ}(t) dt the L1 norm associated to the NSN control, where u_{ψĪ}(·) is the control generated by the feedback (6).
This control strategy consists of three phases: 1. no intervention until the prevalence I reaches Ī (null control), 2. maintain the prevalence I equal to Ī until S reaches S_h (singular control),
3. no further intervention once S has reached S_h (null control); a minimal simulation sketch of this three-phase feedback is given right after this list.
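As a rough illustration (not the authors' code), the sketch below simulates this three-phase feedback for the controlled SIR dynamics Ṡ = −(1 − u)βSI, İ = (1 − u)βSI − γI implied by the text, with the singular control u = 1 − S_h/S obtained by imposing İ = 0 on the arc I = Ī; all parameter values are placeholders.

```python
# Minimal Euler simulation of the NSN (null-singular-null) feedback for the
# controlled SIR model dS/dt = -(1-u)*beta*S*I, dI/dt = (1-u)*beta*S*I - gamma*I.
beta, gamma = 0.5, 0.2            # placeholder rates, R0 = 2.5
S_h = gamma / beta                # immunity threshold
S, I = 0.99, 0.01                 # placeholder initial condition
I_bar = 0.10                      # prescribed peak level
dt, budget = 1e-3, 0.0

for _ in range(int(300 / dt)):
    if I >= I_bar and S > S_h:
        u = 1.0 - S_h / S         # singular control keeping I constant (dI/dt = 0)
    else:
        u = 0.0                   # no intervention in phases 1 and 3
    dS = -(1.0 - u) * beta * S * I * dt
    dI = ((1.0 - u) * beta * S * I - gamma * I) * dt
    S, I = S + dS, I + dI
    budget += u * dt

print("peak prevalence ~", I_bar, "  L1 cost of the control ~", budget)
```

The accumulated L1 cost printed at the end is the quantity L(Ī) discussed below.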
Remark 2. There is no switch of the control between phases 2 and 3, because u(t) tends to zero when S(t) tends to S h , according to expression (6).
One can check straightforwardly that the following properties are fulfilled.
Lemma 1. The maximal value of the control u_{ψĪ}(·) is given by Moreover, any solution given by the NSN strategy verifies
Optimal strategy
We first show that the function L can be made explicit.
Proposition 1. One has
Proof. Note first that, whatever Ī is, S(·) is decreasing under the control (6). One can then equivalently parameterize the solution I(·), C(·) by As long as I < Ī, one has u = 0, which gives Recall, from the definition of I_h, that the solution I(·) with u = 0 reaches I_h in finite time. Therefore, one can define the number σ̄. One then obtains and with (8) one can write which finally gives the expression (7).
Then, the best admissible NSN control can be given as follows.
Corollary 1. The smallest value of Ī for which the solution with the NSN strategy is admissible is given by the value Ī*(Q) := I_h/(QβS_h + 1) (9), and one has L(Ī*(Q)) = Q. We now give our main result, which shows that the NSN strategy is optimal.
Proposition 2. Let Assumptions 1 and 2 be fulfilled. Then, the NSN feedback is optimal, where Ī*(Q) is defined in (9), and Ī* is the optimal value of problem (2).
Proof. When Q ≥ (I_h − I_0)/(βS_h I_0), the NSN strategy is admissible and the corresponding solution verifies max_{t≥0} I(t) = I_0, which is thus optimal. Consider now Q < (I_h − I_0)/(βS_h I_0). Let (S*(·), I*(·), C*(·)) be the solution generated by the NSN strategy with Ī = Ī*(Q), and denote by u*(·) the corresponding control. One can straightforwardly check with equations (3) the expression of this solution. Recall, from Corollary 1, that one has C*(t_h) = 0 (by equation (10)). Clearly, one has (S̃(T), Ĩ(T)) = (S_h, I(t_h)) and C̃(T) < 0. We consider now, in the (S, I) plane, the simple closed curve Γ which is the concatenation of the trajectory (S̃(·), Ĩ(·)) in forward time with the trajectory (S(·), I(·)) in backward time. Then, applying Green's Theorem, one obtains where D is the domain bounded by Γ (see Figure 1 as an illustration). This implies C(t_h) < C̃(T) < 0 and thus a contradiction with the admissibility condition (4) for the solution (S(·), I(·), C(·)). We conclude that (S*(·), I*(·), C*(·)) is optimal. Figure 1 illustrates this construction: the closed curve Γ is composed of the trajectory (S*(·), I*(·)) in blue up to the point (S_h, Ī), the additional part (S̃(·), Ĩ(·)) in red, and a hypothetical better trajectory (S(·), I(·)) in backward time in green.
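Before moving to the numerical illustrations, here is a quick numerical check of formula (9), with placeholder parameters rather than those of Table 1: it computes the uncontrolled peak I_h, the level Ī*(Q), and verifies the equivalent relation Q = (I_h − Ī*)/(γĪ*) obtained by inverting (9) with βS_h = γ.

```python
import numpy as np

# Illustration of formula (9): I_bar*(Q) = I_h / (Q * beta * S_h + 1).
beta, gamma, Q = 0.5, 0.2, 10.0        # placeholder parameters and budget
S0, I0 = 0.99, 0.01                    # placeholder initial condition
S_h = gamma / beta

# Peak of the uncontrolled epidemic (u = 0), from the classical SIR first integral.
I_h = I0 + S0 - S_h + S_h * np.log(S_h / S0)

I_bar_star = I_h / (Q * beta * S_h + 1.0)
print("I_h =", I_h, " optimal peak I_bar* =", I_bar_star)

# Consistency check: inverting (9) with beta*S_h = gamma gives Q = (I_h - I_bar*)/(gamma*I_bar*).
print("recovered budget:", (I_h - I_bar_star) / (gamma * I_bar_star))   # equals Q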
Numerical illustrations and discussion
We have considered the same parameters and initial condition as in [20] (see Table 1). For these values, Figure 2 presents a simulation of the optimal solution for the budget Q = 28, as an example (the minimum peak is reached for Ī ≈ 0.1015). As a comparison, the optimal strategy obtained by Morris et al. in [20] for a fixed time duration of interventions, without consideration of any budget, is quite different (see Figure 3, which compares the time evolution of the infected population I between the optimal NSN strategy and the optimal one of Morris et al.). It consists of four phases: no intervention, maintain I constant, apply the maximal control (i.e. u = 1) and stop the intervention. This control thus presents three switches and relies on a full break of the transmission, differently from the NSN strategy, which presents only one switch (see Remark 2) and does not require a full break (see the maximal value of the control given in Lemma 1). An
NSN strategy thus appears less restrictive to apply in practice. The strategy proposed by Morris et al. also induces a second peak: after the third phase, the prevalence I increases again up to a peak which has to be equal to the level maintained during the second phase if it is optimally chosen. But this second peak turns out to be non-robust under a mischoice (or mistiming) of the second phase (see [20] for more details). Comparatively, the NSN strategy is naturally robust with respect to a bad choice of Ī: the maximum value of I is always guaranteed to be equal to Ī. However, a mischoice of Ī has an impact on the budget of the NSN strategy, given by expression (7) and illustrated in Table 2 (for the model parameters given in Table 1 and Q = 28):
Ī − Ī*:    −10%   −5%   −1%   +1%   +5%   +10%
L(Ī) − Q:  +17%   +8%   +1.5%  +1.5%  −7%   −14%
In the case of a new epidemic among a large population, one can consider that the initial number of infected individuals is very low, while all the remaining population is susceptible. Therefore, one has S_0 + I_0 = 1 with I_0 very small, and the optimal value of Ī can be well approximated by its limiting expression for I_0 = 0. Using property (5) and the parameters of Table 1, one obtains the limiting values given in Table 3 (the limiting optimal values for arbitrarily small I_0, with Q = 28). This means that, depending on the budget Q only, one can determine the minimal peak and the optimal strategy to apply, without the knowledge of the initial size of the infected population, provided that the parameters β and γ of the disease are known. The question of parameter estimation in the SIR model from data is out of the scope of the present work. However, while I grows towards Ī without intervention, one may expect a refinement of the estimates and thus an adjustment of the value of Ī. Note that if it is rather the height of the peak Ī that is imposed, the corresponding effort can be determined with expression (11), as well as the duration of the intervention.
To have a better insight into the impact of the available budget Q on the course of the epidemic, we have considered four characteristic numbers: • t_i: the starting date of the intervention, which is quite high. Moreover, the maximal value of the control is bounded by a value far from 1 (which would consist in a total lockdown of the population). In Figure 4, one can see that the peak Ī can be drastically reduced under a reasonable budget, and that taking larger budgets slows down the decrease of the peak, while the duration of the intervention keeps increasing, almost linearly. Indeed, recall that one has d = (S̄ − S_h)/(γĪ) and that, for an optimal value of Ī, one has Q = (I_h − Ī)/(γĪ) from (9). Then one gets that for large values of Q, Ī is small and S̄ is close to one, which gives an approximation of d as a linear function of Q. This implies that for a long duration, fixing the budget Q or the duration d tends to be equivalent. Therefore, for the same large duration, the optimal peak gets close to the optimal one of the strategy of Morris et al., which constrains the duration only, but the difference between the budgets of these two strategies keeps increasing, with always a lower budget for the NSN strategy, as one can see in Figure 5.
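The near-linear growth of the duration with the budget can be checked directly. The sketch below (again with placeholder parameters rather than those of Table 1) computes S̄, the susceptible proportion at which the singular arc starts, from the SIR first integral, and compares the resulting duration d = (S̄ − S_h)/(γĪ*) with the large-Q approximation d ≈ Q(1 − S_h)/I_h that follows from combining the two relations above.

```python
import numpy as np
from scipy.optimize import brentq

beta, gamma = 0.5, 0.2                 # placeholder rates
S0, I0 = 0.99, 0.01                    # placeholder initial condition
S_h = gamma / beta
I_h = I0 + S0 - S_h + S_h * np.log(S_h / S0)   # uncontrolled peak

for Q in (5.0, 20.0, 50.0):
    I_bar = I_h / (Q * beta * S_h + 1.0)       # optimal level from (9)
    # S_bar: susceptible proportion when I first reaches I_bar (root with S_bar > S_h),
    # from the quantity I + S - S_h*log(S), conserved while u = 0.
    f = lambda S: I_bar + S - S_h * np.log(S) - (I0 + S0 - S_h * np.log(S0))
    S_bar = brentq(f, S_h, S0)
    d_exact = (S_bar - S_h) / (gamma * I_bar)
    d_approx = Q * (1.0 - S_h) / I_h
    print(Q, round(d_exact, 1), round(d_approx, 1))   # approximation improves with Q
```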
Finally, this analysis highlights (as already mentioned in [20,19]) the importance of not intervening too early (unless one has a very large budget) and of choosing the "right" time to launch interventions. We believe that curves such as those in Figure 4 might be of some help for decision makers.
"year": 2022,
"sha1": "b361a5307970b8d442974f317969f3bf16ecf214",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b361a5307970b8d442974f317969f3bf16ecf214",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
255866078 | pes2o/s2orc | v3-fos-license | Efficacy of interceptor® G2, a long-lasting insecticide mixture net treated with chlorfenapyr and alpha-cypermethrin against Anopheles funestus: experimental hut trials in north-eastern Tanzania
The effectiveness of long-lasting insecticidal nets (LLIN), the primary method for preventing malaria in Africa, is compromised by the evolution and spread of pyrethroid resistance. Further gains require new insecticides with novel modes of action. Chlorfenapyr is a pyrrole insecticide that disrupts mitochondrial function and confers no cross-resistance to neurotoxic insecticides. Interceptor® G2 LN (IG2) is an insecticide-mixture LLIN, which combines wash-resistant formulations of chlorfenapyr and the pyrethroid alpha-cypermethrin. The objective was to determine IG2 efficacy under controlled household-like conditions for personal protection and control of wild, pyrethroid-resistant Anopheles funestus mosquitoes. Experimental hut trials tested IG2 efficacy against two positive controls—a chlorfenapyr-treated net and a standard alpha-cypermethrin LLIN, Interceptor LN (IG1)—consistent with World Health Organization (WHO) evaluation guidelines. Mosquito mortality, blood-feeding inhibition, personal protection, repellency and insecticide-induced exiting were recorded after zero and 20 washing cycles. The trial was repeated and analysed using multivariate and meta-analysis. In the two trials held in NE Tanzania, An. funestus mortality was 2.27 (risk ratio 95% CI 1.13–4.56) times greater with unwashed Interceptor G2 than with unwashed Interceptor LN (p = 0.012). There was no significant loss in mortality with IG2 between 0 and 20 washes (1.04, 95% CI 0.83–1.30, p = 0.73). Comparison with the chlorfenapyr-treated net indicated that most mortality was induced by the chlorfenapyr component of IG2 (0.96, CI 0.74–1.23), while comparison with Interceptor LN indicated that blood-feeding was inhibited by the pyrethroid component of IG2 (IG2: 0.70, CI 0.44–1.11 vs IG1: 0.61, CI 0.39–0.97). Both insecticide components contributed to exiting from the huts, but the contributions were heterogeneous between trials (heterogeneity Q = 36, P = 0.02). WHO susceptibility tests with pyrethroid papers recorded 44% survival in An. funestus. The high mortality recorded by IG2 against pyrethroid-resistant An. funestus provides the first field evidence of high efficacy against this primary, anthropophilic malaria vector.
Background
Long-lasting insecticidal nets (LLINs) are essential for malaria transmission control in sub-Saharan Africa [1]. The halving of the malaria burden over the last 15 years is largely attributed to increasing coverage of pyrethroid LLIN, which culminated in universal free distribution across all age groups in Africa [2]. Concurrent with this public health achievement and cultural shift in sleeping behaviour has been the evolution and spread of pyrethroid resistance across Africa in the two primary vector mosquito species complexes.
Pyrethroids, owing to their efficacy, safety and low cost, were once the only insecticides approved for use on LLINs [3]. Since 2015, further reduction in the annual malaria burden has stalled, and pyrethroids are no longer deemed sufficient on their own [1]. The evolution of severe resistance was anticipated, and when the first signs of field failure were reported in 2007 [4], steps had already been taken to identify alternatives [5,6]. The first active ingredient (AI) to be developed by the pesticide industry to enhance pyrethroid efficacy on nets was the synergist PBO [7,8]. This supplemental compound, long used in domestic fly sprays to enhance pyrethroid toxicity, can neutralize metabolic mechanisms responsible for resistance to pyrethroids. Several brands of pyrethroid-PBO LLIN are currently being scaled up in countries where monooxygenase resistance mechanisms are contributing to impairment or loss of malaria control [9][10][11]. Pyrethroid-PBO LLIN is no panacea; it cannot neutralize all pyrethroid resistance mechanisms that have evolved, and this is no time for complacency. What is needed is an array of alternative insecticides that can complement the pyrethroids on Dual-AI LLIN. This is no trivial task as alternative insecticides for nets need to be safe to humans, toxic to mosquitoes, wash-tolerant on nets and exhibit no cross-resistance to pyrethroids. One such insecticide, which is showing promise, is the pyrrole chlorfenapyr [12]. After 15 years of development and evaluation in laboratory and small-scale experimental hut trials against anopheline mosquitoes [13][14][15][16][17][18], the first cluster randomized trials (CRT) of a LLIN that combines chlorfenapyr with pyrethroid in a wash-tolerant formulation are currently underway and are due to report in 2021 in Tanzania, East Africa and in 2022 in Benin, West Africa. Epidemiological evidence of effectiveness against malaria in CRT is a prerequisite before the World Health Organization (WHO) will grant recommendation of any new class of LLIN for malaria control. Both CRTs are targeting the Anopheles gambiae complex: An. gambiae sensu stricto (s.s.) in NW Tanzania and Anopheles coluzzii in Benin. However, a third primary vector has re-emerged, Anopheles funestus [19], and this species is becoming the predominant vector along the eastern seaboard of Tanzania after a hiatus of several years when LLIN were first taken to scale in mass distribution campaigns and control of the then pyrethroid-susceptible An. funestus and An. gambiae was achieved [20,21]. The return of both An. funestus and An. gambiae s.s. is in pyrethroid-resistant form.
Anopheles funestus s.s. and An. gambiae s.s., in addition to being pyrethroid resistant, are naturally highly anthropophagic and endophilic. These are the primary vector species to target with new-generation insecticides like chlorfenapyr. Unlike pyrethroids and other conventional public health insecticides, which are neurotoxic, chlorfenapyr disrupts the oxidative pathways that enable proton transfer, conversion of ADP to ATP and cellular respiration in mitochondria [15,25]. With its non-neurological mode of action, chlorfenapyr shows no cross-resistance to insecticide classes normally used for vector control and hence is a leading candidate for targeting vector species resistant to standard neurotoxic insecticides [13,17]. When evaluated on hand-treated mosquito nets against wild mosquitoes in experimental huts, chlorfenapyr showed improved control of mosquitoes resistant to WHO-approved insecticides [14,26].
Keywords: Long-lasting insecticidal nets, Interceptor G2, Chlorfenapyr, Insecticide resistance, Anopheles funestus, Experimental huts, Tanzania
Interceptor G2 LN (IG2) is a Dual-AI LLIN developed by the manufacturer BASF SE which is designed to provide protection against pyrethroid-resistant mosquitoes by means of a mixture of chlorfenapyr and alpha-cypermethrin in a long-lasting wash-resistant formulation. The first experimental hut trials of IG2, undertaken in Benin, Burkina Faso and Côte d'Ivoire in West Africa, targeted members of the An. gambiae complex: An. coluzzii, An. gambiae s.s. and Anopheles arabiensis [15,27,28]. The present paper reports on sequential hut trials in NE Tanzania on the East African seaboard designed to assess the efficacy of Interceptor G2 LN against the primary East African vectors An. funestus s.s. and Anopheles gambiae s.s. IG2 was tested unwashed and after 20 standardized washes as a proxy for an ageing net, consistent with WHO guidelines for evaluating LLIN. Two other net types served as positive controls: the pyrethroid-only Interceptor LN (IG1) and a net hand-treated with a chlorfenapyr SC formulation. While it was anticipated that pyrethroid resistant An. funestus s.s. and An. gambiae s.s. would both be present, on these two occasions only An. funestus s.s. was present in significant densities.
Study site and experimental huts
Two experimental hut studies were conducted in Muheza district, Tanga region, at the field station in Zeneti (5°13′ S, 38°39′ E, 193 m altitude), where An. gambiae s.s. and An. funestus s.s. are the major malaria vectors [20,22]. Polymerase chain reaction sibling species analysis of 500 An. funestus collected from Zeneti between 2016 and 2017 showed that all were An. funestus s.s. In World Health Organization insecticide susceptibility tests using permethrin papers conducted on F1 adult mosquitoes from Zeneti in the year before the hut trials, mortality was 56% among An. gambiae s.s. and 62% among An. funestus. In intensity bottle bioassays, An. gambiae s.s. showed 30-fold resistance to permethrin relative to the susceptible Kisumu strain [30]. There was no resistance to carbamates or organophosphates.
The WHO Phase II evaluation of Interceptor G2 was conducted in 6 experimental huts of the East African design [31]. The operating principle of the huts is described in WHO LLIN evaluation guidelines [32]. The hut design allows host-seeking mosquitoes, attracted by the human host sleeping inside, unfettered access through two open eave gaps, 5 cm deep and 3 m wide, between wall and roof on two sides of the hut, and captures surviving mosquitoes exiting into window traps fitted on two of the walls or into verandah traps accessed through eave gaps above the walls. Other features include a ceiling lined with hessian sackcloth similar to thatch, a corrugated iron roof, a concrete plinth and a water-filled moat to deny entry to scavenging ants. The eaves of the two unscreened verandahs were baffled inwardly to funnel host-seeking mosquitoes into the hut and to deter exiting through the same eave gaps. Two screened and closed veranda traps located on the other two sides of the hut, and two baffled window traps, capture any mosquito that exits the rooms via the two open eaves or windows. With this modification to the traditional verandah hut design there was no need to make any correction for escaping mosquitoes because all escapees are recorded [31].
Experimental hut trial design
Two experimental hut trials were undertaken. The first trial was conducted over 54 collection nights between November and December 2015; the second trial was conducted for 36 nights between May and July 2016. The following six treatment arms were included: (i) untreated net; (ii) Interceptor LN (IG1), unwashed; (iii) IG1 washed 20 times; (iv) Interceptor G2 LN (IG2), unwashed; (v) IG2 washed 20 times; and (vi) a net hand-treated with chlorfenapyr (CTN). Washing of LLINs was done according to WHO Phase II protocols [32]. The interval between washes was 1 day, which is the established regeneration time for Interceptor G2 and Interceptor LN [8]. Each net was cut with six holes of 4 cm diameter to simulate wear and tear. For the washed nets, washing was done in 10 L of soap solution (2 g/L of Savon de Marseille). Nets were agitated for 3 min by stirring with a pole, then allowed to soak for 4 min, and then stirred again for 3 min. The nets were rinsed twice using the same procedure with clean tap water. All nets were 100-denier. Three nets were used per treatment arm.
Treatments were rotated between huts each week (3 nets tested 3 times over 9 days, or 2 times over 6 days), with sleepers rotated between huts and treatments each night, using a randomized Latin square design to adjust for variation in personal attractiveness to mosquitoes or hut positional effects. Each morning mosquitoes were collected and held for 72 h in cups with sugar solution to record any delayed mortality. All dead and surviving mosquitoes were retained on silica gel for molecular identification [33] and for genotyping of L1014S or L1014F kdr alleles using TaqMan PCR [34].
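A randomized Latin square of this kind can be generated programmatically. The sketch below is illustrative only (it is not the authors' randomization code, and the 6x6 size and treatment labels are our assumptions based on the six arms and six huts described above); it builds a cyclic Latin square and then randomly permutes rows, columns and symbols, which preserves the Latin square property.

```python
import random

def randomized_latin_square(n, seed=None):
    """Return an n x n Latin square with rows, columns and symbols randomly permuted."""
    rng = random.Random(seed)
    base = [[(i + j) % n for j in range(n)] for i in range(n)]  # cyclic Latin square
    rows = rng.sample(range(n), n)
    cols = rng.sample(range(n), n)
    syms = rng.sample(range(n), n)
    return [[syms[base[r][c]] for c in cols] for r in rows]

# Hypothetical labels for the six treatment arms (see text above).
ARMS = ["untreated", "IG1 unwashed", "IG1 x20", "IG2 unwashed", "IG2 x20", "CFP CTN"]

square = randomized_latin_square(6, seed=2015)
for week, row in enumerate(square, start=1):
    # Each row is one rotation period; each column is one hut.
    print(f"week {week}: " + ", ".join(f"hut {h + 1}={ARMS[t]}" for h, t in enumerate(row)))
```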
The outcomes of the hut trials were:
(i) Deterrence: the proportional reduction of mosquito entry into huts with insecticide-treated nets relative to huts with untreated nets.
(ii) Mortality: the proportion of mosquitoes killed by a treatment relative to the total number entering huts with that treatment.
(iii) Overall killing effect: the number of mosquitoes killed by a treatment relative to the number dying in the untreated control, derived from the formula killing effect (%) = 100 (Kt - Ku)/Tu, where Kt is the number killed in the huts with treated nets, Ku is the number dying in the huts with untreated nets, and Tu is the total entering the huts with untreated nets.
(iv) Blood-feeding inhibition: the proportional reduction in blood-feeding in huts with treated nets relative to the proportion blood-feeding in huts with untreated nets.
(v) Personal protection: the reduction in the number of mosquitoes blood-feeding in huts with treated nets relative to the number blood-feeding in huts with untreated nets, derived from the formula personal protection (%) = 100 (Bu - Bt)/Bu, where Bu is the total number of blood-fed mosquitoes in the huts with untreated nets and Bt is the total number of blood-fed mosquitoes in huts with treated nets.
(vi) Insecticide-induced exiting: the proportional increase in exiting from huts with insecticide-treated nets relative to the proportion exiting from huts with untreated nets.
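A minimal sketch of how these outcome definitions translate into code is given below; the function and variable names are ours (not from the paper), and the deterrence and blood-feeding-inhibition formulas are our reading of the verbal definitions above.

```python
def deterrence(total_treated, total_control):
    """Percentage reduction in hut entry relative to huts with untreated nets."""
    return 100.0 * (total_control - total_treated) / total_control

def mortality(killed_treated, total_treated):
    """Percentage of mosquitoes entering a treated hut that were killed."""
    return 100.0 * killed_treated / total_treated

def killing_effect(killed_treated, killed_control, total_control):
    """Killing effect (%) = 100 * (Kt - Ku) / Tu, as defined in the outcomes list."""
    return 100.0 * (killed_treated - killed_control) / total_control

def blood_feeding_inhibition(fed_treated, total_treated, fed_control, total_control):
    """Proportional reduction of the blood-feeding rate versus untreated-net huts."""
    rate_t = fed_treated / total_treated
    rate_c = fed_control / total_control
    return 100.0 * (1.0 - rate_t / rate_c)

def personal_protection(fed_treated, fed_control):
    """Personal protection (%) = 100 * (Bu - Bt) / Bu, as defined in the outcomes list."""
    return 100.0 * (fed_control - fed_treated) / fed_control
```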
Chemical analysis
Netting samples were cut from each net before and after washing and after completion of the trial for determination of insecticide content. Determination of alpha-cypermethrin and chlorfenapyr content was performed at BASF (1st trial) and Walloon Agricultural Research Centre (CRA-W) (2nd trial) using a draft CIPAC method jointly developed by CRA-W and BASF based on CIPAC 454/LN/M/3.1. The method involves extraction of alpha-cypermethrin and chlorfenapyr by ultrasonication at ambient temperature for 30 min in heptane in the presence of dicyclohexyl phthalate as an internal standard, by adding citric acid, and determination by gas chromatography with flame ionization detection (GC-FID). The insecticide concentration of each sample (g/kg) was converted to mg/m2 before presentation.
Mosquito strains
Anopheles gambiae s.s. Kisumu, a laboratory insecticide susceptible strain, originally from Kenya. Anopheles gambiae s.s. Zeneti, a pyrethroid resistant strain of An. gambiae s.s. from Zeneti village containing the L1014S pyrethroid resistance knockdown allele (kdr east) [29] and showing 30-fold resistance to permethrin relative to susceptible An. gambiae Kisumu.
WHO cone bioassays
These were conducted on standardised washed and unwashed nets to estimate the wash fastness of each net formulation. Five pieces were cut from each net and two replicates of five susceptible or resistant An. gambiae mosquitoes were exposed for 3 min. Mortality was scored at 24 h, 48 h and 72 h post-exposure.
Tunnel tests
These were conducted on standardised washed and unwashed pieces of Interceptor G2 LN netting after 0 and 20 washes. A total of 100 susceptible and resistant mosquitoes were tested in tunnel tests in replicates of 50 mosquitoes per test in accordance with WHO guidelines [32]. The tunnels were divided into two sections by a netting frame punctured with 9 holes slotted across the tunnel. In one section an anaesthetized guinea pig was housed unconstrained in a cage to attract mosquitoes from the release section overnight. Test conditions were 25 ± 2 °C and 80 ± 10% RH. Mosquito mortality was recorded after 24 h and 72 h holding periods.
Statistical analysis
Data were entered into an Excel database and transferred to Stata 11 (Stata Corp LP, College Station, TX, USA) for processing and analysis. Cone bioassays and tunnel test data were analysed using logistic regression for grouped data adjusting for clustering within replicate tests.
Proportional outcomes in the experimental hut trial (mortality, blood-feeding, exiting) related to each treatment were assessed using logistic regression for grouped data adjusting for daily collected mosquitoes. In addition to the fixed effect of each treatment, each model included random effects to account for variation between the hut position and sleeper attractiveness. Comparison between numeric outcomes of treatments (personal protection, killing effect, deterrence) was analysed using negative binomial regression with adjustment for variation in the same covariates described above.
Risk ratios of mortality, blood-feeding and exiting rates from the two trials were pooled by meta-analysis using a random-effects model in the Stata® statistical analysis software package version 16 (Stata Corporation, College Station, Texas 77845, USA, 2019). Overall heterogeneity across trials was calculated using Cochran's Q test, with a P value of less than 0.05 indicating statistical heterogeneity, and heterogeneity was quantified using the I² statistic [35,36].
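For readers who want to reproduce this kind of pooling outside Stata, the sketch below implements a standard DerSimonian-Laird random-effects meta-analysis of risk ratios, together with Cochran's Q and the I² statistic; it is a generic illustration with made-up input numbers, not the trial data or the authors' actual analysis script.

```python
import math

def pool_risk_ratios(rrs, lcls, ucls, z=1.96):
    """DerSimonian-Laird random-effects pooling of risk ratios given 95% CI limits."""
    y = [math.log(rr) for rr in rrs]                                # log risk ratios
    se = [(math.log(u) - math.log(l)) / (2 * z) for l, u in zip(lcls, ucls)]
    w = [1.0 / s ** 2 for s in se]                                  # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))       # Cochran's Q
    df = len(y) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0                 # between-trial variance
    w_re = [1.0 / (s ** 2 + tau2) for s in se]                      # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0
    return {
        "RR": math.exp(y_re),
        "CI": (math.exp(y_re - z * se_re), math.exp(y_re + z * se_re)),
        "Q": q, "I2": i2, "tau2": tau2,
    }

# Illustrative input only: two hypothetical trial-level risk ratios with 95% CIs.
print(pool_risk_ratios(rrs=[2.0, 2.6], lcls=[1.2, 1.4], ucls=[3.3, 4.8]))
```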
Resistance status
WHO susceptibility tests using permethrin- and alpha-cypermethrin-treated papers were conducted against F1 progeny of mosquitoes collected from huts containing untreated nets before and during the trial. Mortality recorded using 0.75% permethrin papers was 46.7% for An. gambiae and 56.7% for An. funestus in the first trial (2015) and 43% and 52.6%, respectively, in the second (2016), indicating resistance to pyrethroids in both species. Mortality using 0.05% alpha-cypermethrin papers was 52.7% for An. gambiae during the first trial. Alpha-cypermethrin papers were not available during the 2nd trial, but other alpha-cyano pyrethroids such as 0.05% deltamethrin and 0.05% lambda-cyhalothrin gave 73.8% and 50.6% mortality, respectively. Concurrent mortality using the same insecticide test papers against susceptible An. gambiae Kisumu was 100% in each case. Insecticide resistance intensity testing showed Zeneti field An. gambiae to have over 30-fold resistance to the pyrethroid permethrin compared to susceptible An. gambiae Kisumu.
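As a quick reference for interpreting such tube-test results, the snippet below applies the commonly cited WHO cut-offs (at least 98% mortality = susceptible, 90–97% = possible resistance requiring confirmation, below 90% = confirmed resistance); these cut-off values are background knowledge and are not stated in this paper.

```python
def who_resistance_status(percent_mortality):
    """Classify WHO susceptibility-test mortality using commonly cited cut-offs."""
    if percent_mortality >= 98.0:
        return "susceptible"
    if percent_mortality >= 90.0:
        return "possible resistance (confirmation required)"
    return "confirmed resistance"

# Values reported above for 0.75% permethrin papers:
for label, pct in [("An. gambiae 2015", 46.7), ("An. funestus 2015", 56.7),
                   ("An. gambiae 2016", 43.0), ("An. funestus 2016", 52.6)]:
    print(label, "->", who_resistance_status(pct))
```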
Mosquito entry into experimental huts
The average number of mosquitoes entering and exiting the huts is shown in Table 1. The geometric mean number of An. funestus collected during the first trial ranged from 0.6 to 1.5 per hut per night. During the second trial the geometric mean number of An. funestus ranged from 1.3 to 1.8 per hut per night. In both trials significantly fewer An. funestus were collected from the huts with the chlorfenapyr CTN compared to the huts with the untreated nets. No consistent deterrent effect was observed with IG1 (alpha-cypermethrin alone) or IG2 compared to untreated nets.
Mortality and overall killing effect
The overall percentage mortality by treatment arm is shown in Fig. 1. Because chlorfenapyr shows the property of delayed mortality, which reaches a zenith 72 h after mosquitoes enter into the huts with chlorfenapyr treated nets, both 24 h and 72 h mortality are presented in Table 2. Percentage mortality corrected for untreated net control is also shown.
In the first trial, control-corrected mortality of An. funestus after 24 h was 5-6% in the huts with the unwashed IG1 and in the huts with the IG1 washed 20 times (Table 2). Mortality in these treatment arms was significantly different from the mortality in the huts with the unwashed IG2 (42%), the IG2 washed 20 times (44%) and the chlorfenapyr CTN (37%). After 72 h, control corrected mortality was significantly higher than after 24 h across most of these treatments (Table 2). Mortality was significantly higher in the huts with the IG2 unwashed and washed 20 times compared with the IG1 unwashed and washed 20 times treatments.
In the second trial, the trend was slightly different. Control-corrected mortality significantly increased once again between 24 and 72 h with the unwashed IG2 (from 22 to 46%), the IG2 washed 20 times (from 6 to 41%) and the chlorfenapyr CTN (from 18 to 36%) (Table 2). But unlike the first trial, control-corrected mortality showed no significant change between 24 and 72 h with the unwashed IG1 (1.9% to 3.8%) or with the IG1 washed 20 times. Natural mortality of An. funestus after 72 h in the huts with the untreated nets in the first trial was significantly lower (21%) than the overall mortality in huts with the IG1 unwashed (37%) or IG1 washed 20 times (34%). In the second trial, natural mortality after 72 h in huts with the untreated nets was lower (13%) than in the first trial (21%), but on this occasion the untreated nets showed no difference in mortality compared to the IG1 unwashed (16%) or IG1 washed 20 times (18%), which also stayed low. A further difference between the two trials: in the first, both IG2 and IG1 showed significantly delayed mortality between 24 and 72 h; in the second trial, only IG2 showed significantly delayed mortality between 24 and 72 h, and not IG1.
The 'overall killing effect' by the IG1 and IG2 interventions were consistent with percentage mortality of the IG1 and IG2 treatments observed in the huts. In the first and second trials, IG1 killed up to 16% and 0% of An. funestus, respectively, and IG2 killed up to 49% and 38%, respectively.
Meta-analysis of mortality
In the meta-analyses of mortality between the two trials, the comparison of relative risk between the unwashed IG2 and the untreated net was 3.36 (CI 2.3, 4.9) (P = 0.001). The comparison of mortality relative risk between the chlorfenapyr CTN and untreated net, 3.24 (CI 2.4, 4.2) (P = 0.001), was therefore quite similar to that of the unwashed IG2 and untreated net. The comparison of relative risk between the unwashed IG1 and the untreated net was rather less (1.60, CI 1.1-2.3) (P = 0.01), indicating a smaller effect size of alpha-cypermethrin on mortality. The effect of the comparison between IG2 and IG1 was 2.27 (1.1, 4.6) (P = 0.012), confirming the greater contribution of chlorfenapyr than of alpha-cypermethrin to IG2 mortality. This was further confirmed by the comparison of chlorfenapyr CTN to IG2: the risk ratio was a non-significant 0.96 (0.7, 1.23) (P = 0.231), implying that chlorfenapyr was making most of the contribution to mortality in IG2 and not alpha-cypermethrin. The similarity of relative risk between unwashed IG2 and IG2 after 20 washes (1.04, CI 0.8-1.3) (P = 0.73) indicated no loss of mortality effect in IG2 between 0 and 20 washes (Fig. 2a).
Blood feeding rates and personal protection
In the first trial, the percentage blood-feeding of An. funestus was significantly greater in the huts with the untreated net than in the huts with IG1 and IG2. There were no significant differences in blood-feeding rates between the huts with the IG1 or the IG2, with or without washing (Table 3). Neither was there significant difference in percentage blood-feeding between untreated net and chlorfenapyr CTN nor evidence of blood-feeding inhibition due to chlorfenapyr presence (percentage blood-feeding was greater in the huts with the chlorfenapyr CTN).
In the second trial, while the percentage blood-feeding may have seemed greater in the huts with the untreated net than in the huts with the unwashed IG1 or IG1 washed 20 times, the differences were not significant. Once again, no significant differences were evident between any of the IG1 and IG2 treatments. In the second trial, the difference between the untreated net and the chlorfenapyr CTN was also non-significant. Seven of the eight treatments that did show some degree of blood-feeding inhibition contained an alpha-cypermethrin component, whether in IG1 or when twinned with chlorfenapyr in IG2.
In the first trial, personal protection in huts with IG1 and IG2 was significantly greater than in huts with the untreated nets. The chlorfenapyr net also showed significantly greater personal protection compared to untreated nets. In the second trial, while the numbers of An. funestus that were blood-fed were also lower in huts with the insecticide-treated nets, neither the IG1, IG2 nor the chlorfenapyr treatments showed a significant reduction in the number blood-fed compared with the numbers blood-fed in huts with the untreated net. From these results it is not possible to conclude definitively that chlorfenapyr has no role in personal protection in huts with the chlorfenapyr-treated net, but as regards personal protection in IG2, it would seem that the alpha-cypermethrin component has the major role, mediated through reduced blood-feeding just as in IG1.
Meta-analysis of percentage blood feeding
In the meta-analyses of blood-feeding between the two trials, the comparison of relative risk between the unwashed IG2 versus the untreated net was 0.70 (CI 0.44, 1.11) (P = 0.133). The comparison of relative risk between the unwashed IG1 versus the untreated net was quite similar (0.61, CI 0.39, 0.97) (P = 0.035) to that of IG2 above (Fig. 2b). The comparison of relative risk between the chlorfenapyr CTN versus the untreated net was 0.97 (CI 0.39-2.44) (P = 0.95). Considering these results in reverse order: chlorfenapyr treatment seems to have no effect on blood-feeding compared to no treatment. Alpha-cypermethrin was the sole AI contributing to reduced blood-feeding in the comparison of IG1 to untreated net. The inference is that the contributing active ingredient to reduced blood-feeding in IG2 versus untreated net is the alpha-cypermethrin rather than the chlorfenapyr. Further, the meta-analysis of relative risk of the comparison of IG2 versus chlorfenapyr CTN was 0.74 (0.3-2.0) (P = 0.67). This relative risk, being in the same direction as the relative risk between IG1 versus untreated net (0.61, 0.39-0.97), may support the interpretation that chlorfenapyr has little or no role in blood-feeding in IG2, nor does it antagonize the positive effect alpha-cypermethrin has on reducing blood-feeding in IG2 (Fig. 2b).
Exiting rates
In the first trial, mosquito exiting rates were significantly higher in the huts with IG1, IG2 and chlorfenapyr CTN treatments compared to the huts with untreated nets (Table 1). In the second trial, the exiting rates from huts with IG1, IG2 and chlorfenapyr CTN were not significantly different from exiting rates from huts with the untreated net nor from one another (Table 1).
Meta-analysis of enhanced exiting
In the meta-analysis these differences between the first and second trials led to heterogeneity in several of the comparisons of relative risk for exiting rates between treatments. No comparison between IG2 and any other treatment (untreated net, alpha-cypermethrin net, chlorfenapyr net) was significantly different from unity (Fig. 2c).
Anopheles gambiae sensu lato
Abundance of An. gambiae was very low in trial 1 with only 42 mosquitoes collected from the six treatments over 54 nights. However, differences in mortality were observed at 72 h with significantly higher mortality observed in huts with unwashed IG2 and IG2 washed 20 times (14/16) compared to IG1 (4/11) or untreated nets (1/10) (Supplementary file), which is consistent with the An. funestus dataset trends. Insufficient An. gambiae were collected during trial 2 for formal analysis.
Chemical analysis
The mean alpha-cypermethrin content in unwashed IG2 for trial 2 (the WHO trial) was 2.81 g/kg (Table 4). The nets complied with the target dose of 2.4 g/kg ± 25% for 100-denier yarn. The mean chlorfenapyr content in unwashed IG2 for trial 2 was 5.22 g/kg. The nets complied with the target dose of 4.8 g/kg ± 25%. The within-net variation showed an acceptable homogeneity of active ingredient within the nets. After 20 washes the IG2 alpha-cypermethrin content for trial 2 was 1.65 g/kg, corresponding to an overall alpha-cypermethrin retention of 59%. The chlorfenapyr content was 1.66 g/kg after 20 washes, corresponding to an overall chlorfenapyr retention of 32% for trial 2. Netting samples were not kept back pre-washing in trial 1 for chemical analysis and therefore retention of chlorfenapyr and alpha-cypermethrin in IG2 after washing could not be accurately estimated. However, chemical analyses were conducted after the nets had been washed and tested in the huts and the data were consistent with trial 2 post-trial retention estimates (see Table 4). The mean alpha-cypermethrin content in unwashed IG1 from trial 2 was 5.55 g/kg. The alpha-cypermethrin content after 20 washes was 1.59 g/kg, corresponding to an alpha-cypermethrin retention of 30% in IG1.
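The retention figures quoted above follow directly from the measured contents; the short sketch below reproduces them and also illustrates the g/kg to mg/m2 conversion mentioned in the chemical-analysis section, where the fabric areal density (grams of netting per square metre) is an assumed illustrative value, not a number taken from the paper.

```python
def retention_percent(content_after, content_before):
    """Percentage of active ingredient retained after washing (contents in g/kg)."""
    return 100.0 * content_after / content_before

def g_per_kg_to_mg_per_m2(content_g_per_kg, areal_density_g_per_m2):
    """Convert a fabric concentration (g/kg) to a surface dose (mg/m2)."""
    # g/kg equals mg/g, so multiplying by the fabric weight in g/m2 gives mg/m2.
    return content_g_per_kg * areal_density_g_per_m2

# Trial-2 values quoted in the text:
print(round(retention_percent(1.65, 2.81)))  # alpha-cypermethrin in IG2 -> ~59%
print(round(retention_percent(1.66, 5.22)))  # chlorfenapyr in IG2       -> ~32%
print(round(retention_percent(1.59, 5.55)))  # alpha-cypermethrin in IG1 -> ~29% (quoted as 30%)

# Illustrative conversion only; 40 g/m2 is an assumed areal density for 100-denier netting.
print(g_per_kg_to_mg_per_m2(2.81, 40.0))     # ~112 mg/m2
```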
Supporting bioassay tests on Interceptor and Interceptor G2 nets used in the hut trials
The purpose of the supplementary bioassays was to sample netting from the IG2 and IG1 used in the experimental hut trials to 1) test bio-efficacy against pyrethroid-resistant (Zeneti) and susceptible (Kisumu) strains in mosquito bioassays, 2) confirm the bio-efficacy of the alpha-cypermethrin and chlorfenapyr components after multiple washing, and 3) examine the capacity of tunnel tests to predict the performance of IG2 netting under simulated hut conditions to control An. gambiae s.s. Standard WHO cone bioassay tests on nets with 3 min exposure of the susceptible strain and a 72 h holding period induced mortality of 96% and 100% on the unwashed IG1 and the IG1 washed 20 times. With the chlorfenapyr CTN, mortality was 90%, 95% and 100% after 24 h, 48 h and 72 h. For the unwashed IG2, mortality was 100% after 24 h. For IG2 washed 20 times, mortality was 62%, 72% and 86% after 24 h, 48 h and 72 h intervals (Fig. 3a). In further supplementary 3 min cone tests using the susceptible strain, mortality was 100% on unwashed IG1 and IG2, and on the IG1 and IG2 washed 20 times mortality was reduced to 82% and 86%, respectively. With the Zeneti pyrethroid-resistant strain, cone mortality was reduced to 16% and 40% with unwashed IG1 and IG2, respectively, and to 16% and 20%, respectively, after 20 washes (Fig. 3b).
Supplementary tunnel tests were conducted using susceptible and resistant strains tested on unwashed IG2 and IG2 washed 20 times (Fig. 3c). With untreated netting, 100% of the susceptible and 86% of the resistant mosquitoes penetrated the holes into the baited chamber, 100% of the susceptible and 69% of the resistant mosquitoes blood-fed, and 2% of the susceptible and 2% of the resistant mosquitoes died. With unwashed IG2 netting, fewer of the susceptible (89%) and resistant (38%) mosquitoes penetrated the holes, and even fewer susceptible (70%) and resistant (0%) mosquitoes blood-fed. However, 98% of the susceptible and 36% of the resistant mosquitoes were killed by the unwashed IG2 in their attempts to feed. With the IG2 washed 20 times, a smaller percentage of the susceptible (65%) and resistant (20%) mosquitoes penetrated the holes (surprisingly), fewer susceptible (18%) and resistant (0%) blood-fed, and yet 100% of susceptible and 26% of resistant mosquitoes were killed by the IG2 washed 20 times. The newly colonised Zeneti strain was evidently less well adapted to the tunnel test, penetrating holed netting and responding/feeding on guinea pigs less well than did the long-established Kisumu.
Comparison of supplementary bioassay tests with hut trial results
Comparing the laboratory cone and tunnel bioassay results against the pyrethroid-resistant An. gambiae s.s. strain and the experimental hut results against the wild pyrethroid-resistant An. funestus population, both types of bioassay predicted the response in the hut to the pyrethroid-only IG1: mortality was 16% in the cone and 13% in the hut against the unwashed IG1, and 16% in the cone and 11% in the hut against the IG1 washed 20 times (averaged control-corrected mortality). When tested against the unwashed IG2, mortality was 40% in the cone, 36% in the tunnel and 51% in the hut; when tested with the IG2 washed 20 times, mortality was 20% in the cone, 26% in the tunnel and 46% in the hut.
With the unwashed and washed IG2, percentage passage and percentage blood-feeding in the tunnel test were significantly lower with the newly colonised resistant Zeneti strain as compared to the long-established susceptible Kisumu strain. While up to 70% of Kisumu blood-fed after penetrating the IG2 netting, none (0%) of the Zeneti strain blood-fed through IG2. And while high mortality of Kisumu (up to 70%) was recorded with IG2, low mortality of the resistant Zeneti strain was recorded against unwashed and washed IG2 (36% and 26%, respectively). This very much reflected the incomplete adaptation of the Zeneti strain to the tunnel test, possibly an avoidance or irritation of the treated net, or 'reluctance' to feed on guinea pigs. However, for those Zeneti strain mosquitoes that did penetrate the netting, mortality inflicted by unwashed and 20-times-washed IG2 was high, 58% and 80% respectively, and more closely resembled mortality in experimental huts.
This series of bioassay tests demonstrates that the chlorfenapyr component of IG2 LN makes the major contribution to controlling pyrethroid resistant An. gambiae and An. funestus. The tunnel tests were more predictive of efficacy in experimental huts whilst cone bioassays were less predictive.
Discussion
Novel alternative insecticides which can complement the pyrethroids on LLIN and improve the control of pyrethroid-resistant vectors are urgently needed to sustain progress against malaria. The objective of the present study was to determine the efficacy and wash-fastness of the chlorfenapyr-alpha-cypermethrin mixture net, Interceptor G2 LN, unwashed and after 20 washes, against the primary pyrethroid-resistant vectors An. funestus and An. gambiae s.s. under household-like conditions compared to the standard pyrethroid-only net Interceptor LN (IG1). Previously, this very team had participated in the development and evaluation of IG1 against An. funestus and An. gambiae s.s. 10-14 years ago, when these species were pyrethroid susceptible in NE Tanzania [8,20]. Latterly this team's participation was extended to development and evaluation of the new-generation long-lasting net IG2 against the An. gambiae sibling species An. coluzzii in Benin, W Africa, and An. arabiensis in Kilimanjaro, Tanzania, where the species had become pyrethroid resistant [13][14][15][16][17]. Two trials were more recently extended to Muheza, NE Tanzania, aimed at evaluating IG2 against pyrethroid-resistant An. gambiae s.s. and An. funestus. Only An. funestus was caught in significant numbers. In the meta-analysis of the two trials, the mortality induced by IG2 against An. funestus was 3.4 times higher than with untreated nets and 2.3 times higher than with IG1. The comparison of mosquito mortality between the unwashed IG2 and IG2 washed 20 times produced a relative risk of 1.04 (CI 0.83-1.30), indicating no loss of efficacy of IG2 over 20 washes. This means IG2 exceeds by a factor of 2.3 the mortality criterion required by WHO PQT to grant the product LLIN status [32]. The comparison of chlorfenapyr CTN with IG2 confirmed that the chlorfenapyr component of IG2 was the main contributor to mosquito mortality and net efficacy. However, it was also confirmed that the pyrethroid continues to have a valuable role with respect to blood-feeding inhibition, repellency and personal protection. The pyrethroid contributed 39% protection against blood-feeding of pyrethroid-resistant An. funestus in IG1 and 30% protection in IG2 compared to untreated nets. This was not far short of the 32% blood-feeding inhibition shown by IG1 against pyrethroid-susceptible An. funestus in Zeneti hut trials over 10 years ago [20].
More important than the demonstration of equivalence of blood-feeding inhibition in resistant An. Other recent experimental hut trials in West Africa in which IG2 has generated high mortality include An. coluzzii in Benin (71%, 65%), in Burkina Faso (76%, 75%) and Côte d'Ivoire (90%, 82%) when unwashed and washed 20 times, respectively. This is comparable mortality to that achieved with IG1 and other pyrethroid-only nets in the 1990s and new millennium when standard ITN and LLIN were first demonstrating malaria control and personal protection [37]. Considering the impact of ITN and LLIN then, it is reasonable to anticipate that IG2 and other Dual-AI will achieve comparative control of pyrethroid-resistant mosquitoes as standard LLIN once did against susceptible mosquitoes.
It is certainly the case that high intensity resistance means that standard LLIN are no longer preventing malaria as they once did. In countries and regions bordering Lake Victoria, for example, standard LLIN no longer appear to be reducing malaria despite maintenance of high coverage [9,38,39]. A cluster randomised trial of standard pyrethroid LLIN conducted in the region of high resistance, Kagera, on the western shore of Lake Victoria, Tanzania, could only demonstrate stasis in 2018 after introduction of new pyrethroid-only LLIN [9] but in adjacent clusters which were randomised to receive pyrethroid-PBO synergist LLIN there was a significant reduction in entomological inoculation rate and malaria prevalence [10].
The only putative insecticide mixture LLIN on the horizon, apart from the pyrethroid-chlorfenapyr net IG2, is a net treated with pyrethroid and pyriproxyfen, which is a mosquito sterilant and insect growth regulator. In a stepped-wedge cluster randomised trial conducted in Burkina Faso, a 12% reduction in the malaria incidence rate was observed in the intervention arm compared to the control, a standard pyrethroid-only LLIN [40,41]. As a mixture of two adulticides, IG2 would appear to hold more promise. Owing to the diversity of novel AI and modes of action being tested on LLINs, the WHO is no longer willing to accept entomological evidence as generated in experimental hut trials as adequate evidence for recommendation of a novel LLIN class. Since 2017, the WHO has required all new classes of LLIN to be subject to cluster randomized trials (CRT) with malaria control outcomes before they can gain approval or recommendation for wide-scale use as new methods of malaria control [42]. Chlorfenapyr is currently the only novel adulticide being evaluated on LLIN in a CRT. Such a trial takes at least 2 years to complete. This means that chlorfenapyr is a very precious AI, squandered at our peril. If chlorfenapyr fails due to evolution of resistance, there will be only PBO and pyriproxyfen left in the armoury for use on nets. Fortunately, chlorfenapyr is novel chemistry and there is no sign of resistance so far, but resistance will evolve just as it always does. What must be done now is to identify ways to preserve this AI even as it is used to good effect. There is a temptation to use it as an IRS insecticide too. In hut trials, it appears less effective applied as an IRS adulticide, and the WHO proposes cluster randomized trial evidence of malaria effect [43,44]. Blanket IRS coverage may accelerate resistance selection, as was demonstrated after 7 years of pyrethroid IRS in Kagera region, which led to premature loss of pyrethroid effectiveness in LLIN just as LLIN were being scaled up [9,10]. What is needed is a far-sighted resistance management strategy which prioritizes PBO, chlorfenapyr, and the few AI that can be used safely on nets, and reduces their use in other applications, like IRS, if there are good alternatives that can be used or rotated to reduce selection pressure on chlorfenapyr in IG2.
Conclusion
Novel alternative insecticides that can complement the pyrethroids and improve the control of pyrethroid-resistant malaria vectors are urgently required for sustaining LLIN as a means of malaria control. The mortality of pyrethroid-resistant An. funestus induced by unwashed and 20-times-washed Interceptor G2 appears to meet the entomological requirements set by the WHO for efficacy and wash-resistance. Thus far, there is no epidemiological evidence to back up the entomological evidence, nor any knowledge of how long Interceptor G2 LN or its chlorfenapyr component will remain effective under field conditions. Therefore, large-scale cluster randomized trials of Interceptor G2 with epidemiological end-points are an essential next step. A CRT in NW Tanzania against An. funestus and An. gambiae is due to report in mid-2021 for recommendation as a new class of LLIN product to WHO. | 2023-01-17T14:45:56.701Z | 2021-04-09T00:00:00.000 | {
"year": 2021,
"sha1": "2698b54de1e5281f4538a1a4023ab410df1e71b4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12936-021-03716-z",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "2698b54de1e5281f4538a1a4023ab410df1e71b4",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
17115820 | pes2o/s2orc | v3-fos-license | Preventing "a bridge too far": promoting earlier identification of dislodged dental appliances during the perioperative period.
The presence of fixed partial dentures presents a unique threat to the perioperative safety of patients that require orotracheal intubation or placement of instruments into the gastrointestinal (GI) tract. There are many chances for the displacement of a fixed partial denture: instrumentation of the airway for intubation, or introduction of temporary devices, such as gastroscopes or transesophageal echo probes. If dislodged, the fixed partial dentures can enter the hypopharynx, esophagus or lungs and cause perforations with their sharp tines. Oral or esophageal perforation can lead to potentially fatal mediastinitis. We describe a case of a patient with a fixed partial denture who underwent cardiac surgery with intubation and transesophageal echocardiography (TEE). His partial denture was intact after the procedure. After extubation, he reported that his teeth were missing. Multiple procedures were required to remove his dislodged partial dentures. In sign-out reports, verbal descriptions of the patient’s partial dentures were not adequate in this case. A picture of the patient’s denture and oral pharynx pre-operatively would have provided a more accurate template for the post-operative team to refer to when caring for the patient. This may have avoided the multiple potentially risky procedures the patient had to undergo. We describe a suggested protocol utilizing a pre-operative photo to reduce the incidence of unrecognized partial denture dislodgement in the perioperative period. Because the population is aging, this will become a more frequent issue confronting practitioners. This protocol could mitigate this complication.
Introduction
The presence of fixed partial dentures presents a unique problem to the perioperative safety of patients that require orotracheal intubation or placement of instruments into the gastrointestinal (GI) tract. There are many opportunities for the displacement of a fixed partial denture. During the perioperative period, when patients have manipulation of their oropharynx to accommodate the placement of an endotracheal tube and/or transesophageal echocardiography (TEE) probe, the fixed partial denture can become dislodged. Likewise, in the post-operative period, if the patient bites or chews on the endotracheal tube, the fixed partial denture can be displaced.
The fixed partial dentures can enter the hypopharynx, esophagus or lungs and cause perforations with their sharp tines. Esophageal perforations can result in mediastinitis with a mortality of 48% [1]. The retrieval process can be traumatic even with endoscopic retrieval: as the denture is pulled out of the GI tract or airway, the tines can rake the mucosal surface, and cause more perforations. In the worst case, the fragile dental appliance can fragment, resulting in multiple smaller, sharp objects which can easily migrate distally. In cases where endoscopy is unsuccessful, thoracotomy is needed to retrieve the fixed partial denture. We introduce and recommend a new safety protocol to reduce the morbidity of a dislodged fixed partial denture.
Case Report
A 70-year-old man underwent urgent cardiac surgery for coronary artery bypass grafting. Pre-operative assessment showed that his teeth were in poor condition, with the presence of gingivitis. Pre-operative examination of his oral cavity showed a stable fixed partial denture. During the surgery, he had easy, atraumatic orotracheal intubation in one attempt. After intubation, a TEE probe was easily placed in his esophagus and used during the entire case. The TEE probe was moved within the esophagus by routine traction forces applied at the mouth to manipulate the scope and slide it in and out.
The patient remained intubated and sedated at the end of the case and was transported to the cardiac ICU. At that time, the patient had his fixed partial denture in place. A chest X-ray was taken at the time of arrival to the ICU, and there was no evidence of a dislodged denture. During the first 12 h in the ICU, the patient had serial chest X-rays to evaluate his lungs and placement of the endotracheal tube and central lines. He was not unduly agitated. After the patient was extubated, he told the ICU nurses that he was missing his teeth. The patient did not experience dyspnea, coughing or dysphagia. Review of serial chest X-rays confirmed that his fixed partial denture had migrated into his hypopharynx; an unsuccessful attempt was made to retrieve the denture in the hypopharynx and the denture migrated into the esophagus. The gastroenterology service was consulted to remove the denture, but they were unable to retrieve the denture with their endoscope. ENT was subsequently consulted, and they were able to retrieve the denture with a rigid esophagoscope under general anesthesia in the operating room.
Discussion
The incidence of ingestion of dental appliances after orotracheal intubation is unknown, but it can be compared to information collected on the ingestion of foreign bodies. Some reports state that 1,500 people die annually from the ingestion of foreign bodies [2]. Complications of foreign body ingestion include gut perforation, sepsis, peritonitis, esophagitis, hemorrhage, and impaction of the GI tract. In an unconscious or sedated patient with an unprotected airway, aspiration into the trachea, and commonly the right bronchus, can occur. If not recognized, the fragment(s) can lead to an abscess and pneumonia. Mediastinitis is already a risk in uncomplicated coronary artery bypass surgery [3,4]. Perforation in the oral pharynx can lead to cervical necrotizing fasciitis, which can be fatal [5,6]. As a dislodged device passes distally, esophageal perforation can occur, and also lead to mediastinitis [7]. Esophageal perforation is especially problematic, as mediastinitis has been reported even after botulinum toxin esophageal injection for spasm [8].
Although the term fixed partial denture is used to describe these dental appliances, they can be dislodged even if there is no manipulation of the oropharynx. Also, if the patient has poor dentition, it is more likely that the fixed partial dentures can be dislodged from their abutments. The fixed partial dentures contain sharp tines that can cause perforations of the GI tract at any point during their migration out of their dental abutments. A safety protocol to document and emphasize the presence of fixed partial dentures may help reduce the incidence of morbidity related to fixed partial denture dislodgement. In the pre-operative holding area, a digital camera with a printer can be made available to document the oral exam of every patient with fixed partial dentures. A picture of the fixed partial dentures in situ can be attached to the chart and given as part of patient information during patient verification and the sign-out report. The practice of using digital photography to document oral appliances is well known to the dental field; in fact, many employ consumer-level digital cameras over more specialized intraoral cameras as the former are now capable of high-resolution capture in macro modes [2]. The old saying of "a picture is worth a thousand words" is apropos here: being able to show the ICU team a picture of the patient's dental appearance pre-operatively is worth more than a verbal description. In our case, despite an adequate description of the patient's denture to the ICU team, when it subsequently became dislodged, it was only realized when the patient complained. Fixed partial dentures are very hard to describe verbally because of their various shapes, so a verbal description is likely to be inadequate.
The type of procedure should lower the threshold for removing unstable fixed partial dentures. When the patient is undergoing procedures that involve passing instruments repeatedly through the mouth, such as the movement of a TEE during cardiac anesthesia, or the movement of an endoscope for an EGD, the fixed partial denture should be removed if it is slightly unstable. Even if the fixed partial denture is stable, it should be checked periodically during the case for loosening. Furthermore, in cases where the head is not easily accessible to the anesthesiologist, the patient should not be allowed to have the fixed partial denture in his mouth if it is not completely secure. In contrast, if there are no instruments being dynamically placed in the mouth, the clinician can reduce the number of inspections of the fixed partial denture.
In the post-operative period, a detailed report, with pictures, should be given about the location of fixed partial dentures. Furthermore, serial exams of the oropharynx should be done to confirm the location and status of fixed partial dentures. If there is evidence of loosening or displacement of the fixed partial dentures, the post-operative care team should remove the denture or consult dentistry to remove the denture. Special attention should be given to patients who remain intubated postoperatively. In the post-operative care unit, the patient must have adequate sedation so they do not grind their denture on the endotracheal tube and displace their denture. In addition, like the routine monitoring of vital signs, the stability of fixed partial dentures should be verified periodically.
Fixed partial dentures have an increasing presence in our aging population. The fixed partial denture has sharp tines which anchor the denture to surrounding teeth. These "fixed" partial dentures have the potential to come loose during general anesthesia when an endotracheal tube is introduced through the mouth to secure the airway. If the patient has a TEE placed in his mouth during a case, it is vital to periodically check on the stability of the denture as a TEE is moved in and out of the patient's mouth. In addition, when a patient remains intubated at the end of a case, adequate sedation must be provided so that the patient does not bite on his endotracheal tube. With increased surveillance of the fixed partial denture, we can reduce the dislodgement of fixed partial dentures, and avoid the morbidity of the denture traumatizing the GI tract (Fig. 1). | 2016-05-12T22:15:10.714Z | 2014-11-19T00:00:00.000 | {
"year": 2015,
"sha1": "0f6fcd06e171cd08e557ef7a97b61d45aa96f243",
"oa_license": "CCBY",
"oa_url": "https://www.jocmr.org/index.php/JOCMR/article/download/1981/973",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0f6fcd06e171cd08e557ef7a97b61d45aa96f243",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238583010 | pes2o/s2orc | v3-fos-license | Phonon-induced magnetoresistivity of Weyl semimetal nanowires
We study longitudinal magnetotransport in disorder-free cylindrical Weyl semimetal nanowires. Our theory includes a magnetic flux $\Phi$ piercing the nanowire and captures the finite curvature of the Fermi arc in the surface Brillouin zone through a boundary angle $\alpha$. Electron backscattering by acoustic phonons via the deformation potential causes a finite resistivity which we evaluate by means of the semiclassical Boltzmann approach. We find that low-energy transport is dominated by surface states, where transport observables are highly sensitive to the angle $\alpha$ and to Aharonov-Bohm phases due to $\Phi$. A generic subband dispersion relation allows for either one or two pairs of Fermi points. In the latter case, intra-node backscattering is possible and implies a parametrically larger resistivity than for a single Fermi point pair. As a consequence, large and abrupt resistivity changes take place across the transition points separating parameter regions with a different number of Fermi point pairs in a given subband.
I. INTRODUCTION
Weyl semimetal (WSM) materials represent one of the most intensely studied topics in current condensed matter physics; for recent reviews, see Refs. [1][2][3][4][5][6]. WSM materials have pairs of Weyl nodes in the Brillouin zone which act as sources of Berry curvature, with topological Fermi arc surface states connecting the surface projections of different Weyl nodes. Experimental evidence for Fermi arcs has already been accumulated for several WSM materials by means of surface probe techniques [2,4], and experimental studies of other interesting phenomena such as the chiral anomaly [5] or nonlocal Weyl orbits [6] are well advanced. Nonetheless, a satisfactory understanding of the transport properties of WSM materials is often difficult to reach due to the intricate interplay between topological protection and backscattering mechanisms. In addition, it is important to include electromagnetic fields and finite size effects in specific device setups. To give just one example, while measurements of the magnetoresistivity could in principle reveal the chiral anomaly [7], the precise relation between transport observations and the chiral anomaly remains under intense debate [5].
In this paper, we present a theory of magnetotransport in disorder-free WSM nanowires, taking into account electron backscattering by acoustic phonons. Since this device geometry is experimentally realizable and at the same time analytically tractable, the interplay between topological Fermi arcs, backscattering effects, electromagnetic fields, and finite-size effects can here be analyzed in a comprehensive manner. The band structure and the noninteracting transport properties of clean WSM nanowires have been studied in Refs. [8][9][10][11][12][13]. In particular, for cylindrical wires, the authors of Ref. [12] have shown that the contribution of Fermi arcs to the conductance often outweighs the effect of bulk states. This conclusion also applies for large values of the nanowire radius, see Refs. [14,15] for related studies. One of the goals of this work is to quantify phonon-induced backscattering effects on the magnetoresistivity of WSM nanowires, in particular in parameter regions where transport is dominated by surface states.
The importance of phonons in WSMs has been established by recent experiments [16][17][18][19]. Phonon effects can be identified, for instance, through the characteristic temperature dependence of phonon-induced contributions to transport observables. Theoretical studies of electron-phonon coupling effects have so far mainly focused on optical phonons and/or phenomena unrelated to transport, see, e.g., Ref. [20]. Phonon-induced backscattering effects on transport in WSMs have been studied for the slab geometry [21] but (to the best of our knowledge) not for nanowires. We note that the phonon-induced resistivity of conventional one-dimensional (1D) quantum wires with parabolic (or linear) dispersion was studied by many authors [22][23][24][25][26][27][28][29]. However, the dispersion relations of 1D subbands in WSM nanowires turn out to be more complex. For instance, a given 1D subband may allow for more than one pair of Fermi momenta. In such cases, new scattering processes appear which in turn directly affect the dependence of the resistivity on key parameters such as temperature, Fermi energy, and magnetic field.
The consequences of this enriched complexity will here be studied for cylindrical WSM nanowires. We employ a two-band model describing WSMs with broken time reversal symmetry and just two bulk Weyl nodes [14,[30][31][32][33], where a boundary condition ensures that the current density perpendicular to the cylinder surface vanishes. This boundary condition is parametrized by a boundary angle α [11,34], where the commonly used infinite mass boundary conditions are recovered for α = 0. For a planar surface with α = 0, the Fermi arc curves in the surface Brillouin zone are straight lines. For α = 0, however, one finds that Fermi arcs acquire curvature. By including the phenomenological parameter α, we therefore can also address the case of WSM materials with curved Fermi arcs.
We use the well-known phonon modes predicted by isotropic elastic continuum theory with stress-free boundary conditions in the wire geometry [35], and we assume that the deformation potential provides the dominant electron-phonon coupling. Including a constant magnetic field along the wire axis, we then compute the resistivity from Boltzmann theory [36,37]. For a complementary study in the context of topological insulator nanowires, see Ref. [38]. In addition, we will discuss the two-terminal conductance of clean WSM nanowires in the zero-temperature limit, where phonon effects are frozen out. It is interesting to compare WSM nanowires and topological insulator nanowires [39,40]. Even though only the latter have gapped bulk states, we show below that surface states in both types of nanowires show a similar response to a magnetic flux threading the wire. With some modifications along the lines of Ref. [13], our theory can also be adapted to Dirac semimetal nanowires. Nanowires made of the Dirac semimetal material Cd3As2 have recently been synthesized; for transport experiments, see Refs. [41][42][43][44][45]. We note that the first transport experiments have recently been reported for WSM nanowires as well [46,47].
The paper is structured as follows. In Sec. II, we derive and discuss the electronic band structure. Assuming that the deformation potential produces the dominant electron-phonon coupling, the phonon-induced resistivity is computed within the semiclassical Boltzmann approach as explained in Sec. III. Our results for transport observables are then discussed in Sec. IV. The paper concludes with a brief summary and an outlook in Sec. V. Details about our calculations can be found in several Appendices, and we often put ℏ = e = c = k_B = 1.
II. ELECTRONIC BAND STRUCTURE
In this section, we address the band structure of WSM nanowires. In Sec. II A, we describe a two-band model for magnetic WSMs and derive the spectral equation for cylindrical wires. We then discuss the band structure in Sec. II B, in particular its dependence on magnetic flux and on the boundary angle α.
A. Model
We start from a well-known inversion-symmetric two-band model for the single-particle electron states of a magnetic WSM [14,[30][31][32][33]. This model describes the simplest case with just two Weyl points located at momenta k = ±b ê_z in the Brillouin zone, where the unit vector ê_z is along the z-direction. We will study a cylindrical nanowire geometry with radius R and wire axis ê_z by imposing a boundary condition at the cylinder surface. In addition, we include the effects of a constant magnetic field B = B ê_z along the wire axis, with B > 0. We note that for a magnetic field perpendicular to the wire axis, transport observables are strongly suppressed; see Ref. [10] for a detailed study. Electronic states are then described by the low-energy model of Eq. (2.1) [14,[30][31][32][33], with the bulk Fermi velocity v and Pauli matrices σ_x,y,z acting in a combined spin-orbital space. Clearly, the momentum k along ê_z is a good quantum number, and the effective mass function m_k is given by Eq. (2.2). Throughout we focus on energies |E| ≪ vb/2 such that the two Weyl nodes at k = ±b can be clearly distinguished. The magnetic field is given by Eq. (2.3), where we use the symmetric gauge, A = (1/2) B(−y, x, 0). In units of the flux quantum Φ_0 = hc/e, the magnetic flux through the cross-section of the nanowire is encoded by the dimensionless flux parameter Φ of Eq. (2.4), with the magnetic length l_B = √(ℏc/eB). For a nanowire of radius R = 25 nm, one finds Φ ≈ 1 for a magnetic field B ≈ 2 T. We note that the magnetic Zeeman term has been neglected in Eq. (2.1). As shown in Ref. [48], even though the g factor can be large in typical WSM materials, the Zeeman coupling is expected to cause only small quantitative changes in the band structure. The orbital effects of the magnetic field, on the other hand, cause qualitative differences.
Before turning to the derivation of the spectrum, let us summarize the relevant energy scales. First, the scale vb/2 corresponds to the mass gap at k = 0, see Eq. (2.2). Second, transverse quantization introduces the finite-size scale v/R. Third, the magnetic energy scale is v/l_B. We are interested in relatively thin wires and consider low energies, |E| ≪ vb/2. The number of bands in this energy range can be roughly estimated by ∼ vb/(v/R) = bR. Throughout this paper, we consider the case bR ≫ 1; in concrete examples, we set bR = 10. Taking a typical value b ∼ 0.5 nm⁻¹ in WSM materials [1,2], this choice corresponds to a nanowire radius R ∼ 20 nm. The ratio between the magnetic scale v/l_B and the finite-size scale v/R remains a free parameter, determined by Φ.
We proceed by employing polar coordinates, (x, y) = r(cos φ, sin φ), with unit vectors ê_r and ê_φ. Below we will also use the dimensionless radial variable ξ = Φ(r/R)² = r²/(2l_B²). From Eq. (2.1) one then finds that the angular momentum operator J_z = −i∂_φ + σ_z/2 with half-integer eigenvalues j is conserved. Spinor eigenfunctions thus take the separable form in Eq. (2.5), where the wire length L appears for normalization. The real-valued radial eigenfunctions Y_±(ξ) are combined to form the radial spinors Y(ξ) in Eq. (2.6), where the normalization condition has been adapted to the cylindrical geometry. Using Eqs. (2.5) and (2.6), H_0 Ψ = EΨ reduces to the radial equation (2.7) with the dimensionless quantities defined in Eq. (2.8). We require regularity of Y(ξ) at the origin ξ = 0. Then the general solution of Eq. (2.7) can be expressed in terms of the confluent hypergeometric function M(a, b; ξ) [49]. Using the notation of Eq. (2.9), which involves the Heaviside step function Θ, and keeping the dependence on k and E implicit, we obtain (up to normalization) the radial solutions quoted in Eq. (2.10), with separate expressions for j > 0 and j < 0. The finite cylinder radius R now enters through a boundary condition at the surface r = R, i.e., for ξ = Φ. Following Refs. [11,34], this boundary condition is written in the form of Eq. (2.11). We consider the +1 eigenvalue in Eq. (2.11) for −π/2 < α ≤ π/2 in what follows. The boundary condition (2.11) imposes that on the surface of the wire the pseudospin direction lies in the tangent plane, at an angle α with respect to the circumferential direction ê_φ. Importantly, this condition preserves angular momentum conservation and ensures a vanishing local current density through the surface. This last condition is the same one would impose on a conventional semiconducting nanowire, but the form of the effective Hamiltonian in a WSM allows for one free parameter, the boundary angle α. This is a non-universal parameter which in general will depend on both the WSM material and the precise surface structure.
Using Eq. (2.5) to express Ψ in terms of radial functions, Eq. (2.11) is equivalently written in the form of Eq. (2.12). The choice α = 0 implements infinite mass boundary conditions [10,12], defined by a ξ-dependent mass given by m_k in Eq. (2.2) for ξ < Φ but m_k → ∞ for ξ > Φ.
Figure 2. Energy bands E_{k,j,p} vs momentum k for α = 0 and several Φ. All other parameters and conventions are as in Fig. 1.
Figure 3. Energy dispersion E_{k,j,p} vs k for α = π/4 and several Φ. All other parameters and conventions are as in Fig. 1.
B. Band structure
The solutions admitted by the boundary condition (2.12) determine the energy spectrum of the nanowire, which consists of 1D subbands labeled by the angular momentum j and a radial band index p. By inversion symmetry, the respective subband dispersion ε k ≡ E k,j,p is always symmetric, ε −k = ε k . The qualitative features of the spectrum depend on the interplay of the three dimensionless parameters bR, Φ, and α characterizing our system.
In general, the spectral condition (2.12) has to be solved numerically, but in several limiting cases, analytical progress is possible. In particular, an approximate solution for the dispersion of Fermi arc surface states will be given below. The full spectrum can be obtained in closed form for the boundary angle α = π/2, see App. A, and is illustrated in Fig. 1 for several values of the magnetic flux parameter Φ. For all angular momenta j > 0, we obtain degenerate Fermi arc surface states with the Φ-independent dispersion relation ε k = m k . However, the point α = π/2 is quite special since for α < π/2, we will see below that the Fermi arc degeneracy is lifted and the arc dispersion depends on Φ. To illustrate the typical band structure found for α < π/2, results obtained by numerical solution of Eq. (2.12) are shown for α = 0 in Fig. 2, and for α = π/4 in Fig. 3. The radial probability density distribution is shown for selected states in Fig. 4.
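Since the spectral condition (2.12) generally has to be solved numerically, the sketch below illustrates one standard workflow: scan an energy window, bracket sign changes of the boundary-condition function, and refine each root. The function spectral_condition used here is a purely hypothetical surrogate, not Eq. (2.12) itself, whose construction would require the full confluent-hypergeometric radial solutions.

```python
import numpy as np
from scipy.optimize import brentq

def spectral_condition(E: float, k: float, j: float) -> float:
    """Placeholder for the left-hand side of the boundary condition, Eq. (2.12).

    In the real calculation this would be built from the confluent
    hypergeometric radial solutions; here a smooth surrogate is used
    only to illustrate the root-finding workflow.
    """
    return np.sin(3.0 * E) - 0.2 * k + 0.1 * j  # hypothetical surrogate

def subband_energies(k: float, j: float, E_window=(-1.0, 1.0), n_grid=400):
    """Scan an energy window, bracket sign changes, refine each root with brentq."""
    E_grid = np.linspace(*E_window, n_grid)
    vals = np.array([spectral_condition(E, k, j) for E in E_grid])
    roots = []
    for i in range(n_grid - 1):
        if vals[i] == 0.0:
            roots.append(E_grid[i])
        elif vals[i] * vals[i + 1] < 0.0:
            roots.append(brentq(spectral_condition, E_grid[i], E_grid[i + 1], args=(k, j)))
    return roots

print(subband_energies(k=0.5, j=0.5))
```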
In order to better understand the band structure, we next discuss surface states. As we show in App. B, the radial Dirac-Weyl equation (2.7) admits solutions where the radial spinor wave function is localized at the surface, Y(r) ∝ e^{−κ(R−r)} Y(R). The inverse decay length must satisfy κR ≫ 1 to describe a proper surface state and follows from Eq. (2.15), where the surface state dispersion is given by Eq. (2.14). Equations (2.14) and (2.15) describe Fermi arc states in WSM nanowires in the presence of a magnetic flux threading the wire. This flux enters only through the shift j → j + Φ, just as for the surface states in topological insulator nanowires [38,39]. In the absence of a magnetic field and for very large R, Eq. (2.14) reproduces the known Fermi arc dispersion for a planar surface [33]. The approximations leading to Eqs. (2.14) and (2.15), see App. B, hold under the condition (2.16). We observe that for κR ≫ 1, Eq. (2.16) is always fulfilled except for nearly half-integer values of Φ, where the subband with the angular momentum j closest to −Φ can violate Eq. (2.16).
A comparison to the numerical solution of Eq. (2.12) shows that under the above conditions, the dispersion of Fermi arc states in cylindrical WSM nanowires is well approximated by Eq. (2.14), see App. B. For α = 0, the spectrum in Fig. 2 exhibits a sequence of almost flat Fermi arc states for −b < k < b, with energy spacing given by the finite-size scale v/R. This numerical result is in accordance with Eq. (2.14). For finite α, the bands disperse. This case is illustrated for α = π/4 in Fig. 3, where the Fermi arc dispersion again agrees with Eq. (2.14). Apart from an increase in radial probability density as the surface is approached, see Fig. 4, surface states can therefore also be identified by a strong sensitivity of the dispersion to the boundary angle α.
Next we turn to bulk states, where the probability density is large away from the surface. For R → ∞, Landau states follow by standard steps from the expressions in Sec. II A. Using the magnetic length l_B = √(c/eB) and the index n = 0, 1, 2, ..., their dispersion is given by Eq. (2.17). The states with j < 0 and n = 0 are chiral zero modes [1]. For a finite radius R, these bulk dispersions are obtained as long as l_B ≪ R and the corresponding wave functions are centered within the nanowire, far from the surface. For a given Landau level, upon decreasing j, the states have increasing weight near the surface and eventually become chiral edge states. In general, surface states can thus represent Fermi arc or chiral edge states. By monitoring the magnetic field dependence, the character of a given surface state can be revealed, as only Fermi arcs remain well-defined surface states for B → 0.
We finally note that in the finite-size geometry considered here, there is not a sharp distinction between surface bands and bulk bands. The character of the states (bulk vs surface) within a given subband depends on k. This is illustrated in Fig. 4, where we show the radial profile of the probability amplitude for states with energy E = −0.15vb in bands with j = ±1/2 as an example. The probability density mainly accumulates near the surface for the state with j = 1/2. However, for the two states in the j = −1/2 subband, which correspond to opposite sides of the extremum in the dispersion at k ≈ b, we observe that one is a bulk state and the other a surface state. Specifically, in Fig. 4, the j = −1/2 state with k = 1.08b has a large probability density near the center of the nanowire (bulk state), while the state with k = 0.62b is peaked near its boundary (surface state).
Figure 4. Probability density |Ψ_{k,j}|² vs radial coordinate ξ/Φ = (r/R)² for three eigenstates with energy E = −0.15vb, using α = π/4, bR = 10, and Φ = 2, see central panel in Fig. 3. The case k = 0.32b and j = 1/2 corresponds to a Fermi arc state. For the j = −1/2 subband, we find a bulk state at k = 1.08b but a surface state at k = 0.62b.
III. PHONON-INDUCED RESISTIVITY: BOLTZMANN THEORY
In this section, we derive the phonon-induced resistivity in WSM nanowires with the band structure described in Sec. II. Our model for including electron-phonon scattering effects is summarized in Sec. III A. We compute the longitudinal magnetoresistivity, ρ = ρ(T, µ, Φ, α), in the linear response regime from semiclassical Boltzmann theory [36,37], see Sec. III B. We separately consider the resistivity contributions from bands with a single pair of Fermi points, see Sec. III C, and from bands with two pairs of Fermi points, see Sec. III D.
A. Electron-phonon coupling
We first describe the effects of a deformation potential coupling between phonons and electrons at low energy scales, where we include only acoustic phonon modes that are able to generate such a coupling. Experiments on WSM nanowires are often carried out on nanowires deposited on a substrate (see, e.g., [41-43]), and we here focus on phonon modes which remain gapless even in the presence of a substrate. Since the flexural (bending) modes with finite angular momentum are expected to be gapped, in what follows we only take into account the longitudinal acoustic phonon mode with zero angular momentum and dispersion ω_q = c_L|q|, where the sound velocity c_L is typically small compared with the Fermi velocity v and the phonon momentum q is defined along ê_z. Using typical parameters for c_L and v in the WSM material TaAs [50] for an order of magnitude estimate, we find c_L/v ∼ 0.01. The phonon momenta q responsible for low-temperature backscattering processes then satisfy qR ≪ 1 and correspond to effectively 1D phonon modes.
We assume an isotropic elastic continuum model with stress-free boundary conditions at the cylinder surface [35]. The resulting phonon modes are well known. In contrast to most previous works, where phonon backscattering in 1D wires has been examined for three-dimensional phonon modes, we focus on the 1D phonon mode corresponding to longitudinal acoustic phonons with zero angular momentum. With the bosonic annihilation operators b_q, the bulk mass density ρ_M, and Poisson's ratio ν (where 0 < ν < 1/2), the displacement field operator is then given by Eq. (3.1) [35,38]. Assuming that the deformation potential is the dominant coupling mechanism, the electron-phonon interaction takes the form of Eq. (3.2), where the coupling constant g_0 has dimension of energy and ρ_e(r) is the electron density operator. Unfortunately, it is hard to get reliable theoretical predictions for the value of g_0 since this coupling constant is strongly affected by screening processes. A standard Thomas-Fermi argument relates the screened value of g_0 to the inverse of the bulk density of states. Since the latter vanishes for chemical potential µ → 0, we expect large couplings for |µ| ≪ vb. Recent experimental results suggest that the electron-phonon coupling is of the order of 10 meV but varies substantially in a small energy range [51]. In any case, the value of g_0 affects the phonon-induced resistivity only via the overall resistivity scale ρ_0 discussed below.
We then express the electronic density ρ_e(r) in terms of the normalized radial eigenstates Y_{k,j,p}(ξ) in Eq. (2.6), with fermion annihilation operators c_{k,j,p}. Using Eq. (3.1) and taking the limit L → ∞, we obtain the electron-phonon coupling in the form of Eq. (3.3). Since we include only longitudinal acoustic phonons with zero angular momentum, the electron-phonon interaction (3.3) only couples electronic states with the same angular momentum j. In principle, scattering processes between different radial eigenmodes with the same j are possible. However, we here focus on parameter regions where at most a single radial band for given j crosses the Fermi level. This simplification is justified for relatively thin nanowires at low energies, |µ| ≪ vb/2. (We have explicitly verified this point by monitoring the band structure for all results presented in this work.) We note that in order to describe the resistivity in the ultimate bulk limit bR → ∞, arbitrary scattering processes involving different radial modes with the same j become relevant. This problem is, however, beyond the scope of this work.
B. Boltzmann theory
For a translationally invariant nanowire in a weak constant electric field E ê_z, Ohm's law states that a steady-state charge current density J ê_z with J = σE will flow. In the Boltzmann approach, one uses transition rates obtained from Fermi's golden rule to compute the linear conductivity σ [36]; the resistivity then follows as ρ = 1/σ. On this perturbative level, electron-phonon scattering processes generated by H_ep always scatter an initial electronic state with angular momentum j to a final state with the same angular momentum. Ohm's law then implies that the conductivity contributions σ_j = 1/ρ_j from different angular momentum channels simply add up, see Eq. (3.4), and we only have to tackle the problem for fixed angular momentum j. However, in cases where processes beyond Fermi's golden rule become important, Eq. (3.4) represents an approximation. We obtain the resistivity contribution ρ_j by solving a linearized Boltzmann equation for the 1D subband with angular momentum j. We use the notation ε_k = E_{k,j,p} = ε_{−k} and Y_k = Y_{k,j,p}, and as discussed in Sec. III A, we focus on parameter regions with a single radial band for given j. The steady-state distribution function is then written as in Eq. (3.5), where δn_k is the nonequilibrium correction to the Fermi equilibrium distribution and β = 1/T. We follow standard practice and parametrize δn_k by a function g(ε_k) [36], see Eq. (3.6). With ω_q = c_L|q| and following the notation of Ref. [37], the linearized Boltzmann equation can be written as in Eq. (3.7), with the symmetric kernel given in Eq. (3.8). Here W(k′, k) denotes the transition probability for scattering from an initial state with an electron with momentum k to a final state with an electron with momentum k′ under emission of a phonon with momentum q = k − k′. Microreversibility dictates that the same probability also describes the phonon absorption process [36,37], where the initial state contains an electron with momentum k′ and a phonon with momentum q = k − k′, and the final state has an electron with momentum k. We thus have W(k, k′) = W(k′, k).
Once the solution to Eq. (3.7) has been determined, the resistivity follows from Eq. (3.12). The linearized Boltzmann equation (3.7) can be solved by a constant function g(ε) = g. Following Ref. [37], we find the explicit solution given in Eq. (3.13). Below we separately consider subbands with one or two local extrema (dubbed "valleys" or "nodes"). Both single-valley and two-valley subbands appear in the spectrum of WSM nanowires, see Sec. II B. Single-valley subbands have a local extremum at k = 0 and closely resemble the dispersion encountered in conventional 1D quantum wires with a single pair of Fermi points, k = ±k_F. Two-valley subbands instead have local extrema near k ≈ ±b, giving rise to a regular or inverted mexican hat shape of the dispersion. In that case, the number of Fermi point pairs (one or two) depends on the chemical potential.
C. One pair of Fermi points
We first consider the case characterized by a single pair of Fermi points at k = ±k_F (with k_F > 0), where the Fermi velocity is given by v_F = |∂_k ε_{k=k_F}|. We consider low temperatures and assume that typical phonon energies are much smaller than the relevant electron energies ε_k and ε_{k′} in Eq. (3.13), i.e., the latter energies are very close to the Fermi energy µ = ε_{±k_F}. The integration over momenta in Eq. (3.13) is then limited to a small region around the Fermi momenta, and we can linearize the dispersion for k ≈ ±k_F. The linearization breaks down near the band bottom (or when approaching the transition to a regime with two pairs of Fermi points in a two-valley subband), where the respective resistivity contribution may formally diverge. However, as long as other bands with finite resistivity remain present, no contribution to the total resistivity (3.4) arises from such a divergence.
As detailed in App. C, from Eq. (3.13) we then find C ≃ v_F/π and the expression for A given in Eq. (3.14), where we use the function F(X) defined in Eq. (3.15). The Bloch-Grüneisen temperature is defined by T_BG = 2c_L k_F. To give a typical order of magnitude, for k_F ∼ b and TaAs parameters, we find T_BG ∼ 10 K. Since only phonons with momentum q ∼ 2k_F can efficiently backscatter electrons, phonons with energy ∼ T_BG are required in such 2k_F processes. From Eq. (3.12), we then find the resistivity contribution ρ_j. With the overall resistivity scale ρ_0 defined in Eq. (3.18), we thus arrive at Eq. (3.19). We emphasize that both v_F and k_F, and therefore also T_BG, depend on the angular momentum j. These quantities can be obtained numerically from the band structure discussed in Sec. II. Equation (3.19) describes the phonon-induced resistivity for a 1D electron channel with a single pair of Fermi points and agrees with previous results [27,28,38]. In particular, we obtain a linear dependence ρ_j ∝ T for T ≫ T_BG. However, for T ≪ T_BG, Eq. (3.19) predicts an exponentially small resistivity, ρ_j ∝ e^{−T_BG/T}, since the probability for having thermal phonons with the energy required for 2k_F scattering processes is exponentially small.
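To reproduce the order-of-magnitude estimate T_BG ∼ 10 K quoted above, one can evaluate T_BG = 2ħc_L k_F/k_B with k_F ∼ b. In the sketch below, only the ratio c_L/v ∼ 0.01 and b ∼ 0.5 nm⁻¹ are taken from the text; the specific value of v is an assumed TaAs-like input for illustration.

```python
hbar = 1.054571817e-34  # reduced Planck constant (J s)
kB = 1.380649e-23       # Boltzmann constant (J/K)

# Assumed order-of-magnitude material parameters (illustrative only):
v = 3.0e5        # bulk Fermi velocity in m/s (assumed)
cL = 0.01 * v    # sound velocity, using c_L/v ~ 0.01 as quoted in the text
b = 0.5e9        # Weyl node separation scale in 1/m (b ~ 0.5 nm^-1)

kF = b           # take k_F ~ b as in the text's estimate
T_BG = 2 * hbar * cL * kF / kB
print(f"T_BG ~ {T_BG:.1f} K")   # of order 10 K
```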
D. Two pairs of Fermi points
Next we turn to the resistivity contribution generated by a two-valley band with the Fermi level adjusted to realize two pairs of Fermi points k = ±k γ=± , with Fermi momenta k + > k − > 0 and Fermi velocities v γ = |∂ k ε k=kγ |. Note that the group velocities for k ∼ k + and k ∼ k − have opposite sign. Three different scattering channels are now important, see Fig. 5. In particular, we distinguish the following processes: 1. In analogy to 2k F scattering, see Sec. III C, we have inter-node backscattering ("inter-bs") processes, where an electron scatters between k ∼ k γ and k ∼ −k γ (with γ = ±). The momentum exchange 2k γ has to be supplied by phonons.
2. For a two-valley band, the dispersion has two local extrema inherited from the Weyl nodes at k = ±b. As a consequence, for appropriate values of the chemical potential, intra-node backscattering ("intra-bs") processes become possible, where scattering takes place between k ∼ sk + and k ∼ sk − with s = ±. Since the momentum transfer k + − k − is typically small against the other relevant momentum transfers, the contributions due to intra-bs processes are particularly important at low temperatures.
3. Finally, inter-node forward scattering ("inter-fs") processes couple states with the same sign of the velocity, i.e., k ∼ sk_+ and k ∼ −sk_−. Even though right movers scatter to right movers again, and similarly for left movers, resistivity contributions arise because of the velocity change for v_+ ≠ v_−. We note that forward scattering processes near a single Fermi point are always negligible, see App. C.
Repeating the analysis of Sec. III C for two pairs of Fermi points, see App. C for details, the solution of the Boltzmann equation follows from Eq. (3.13) with C ≃ (v_+ + v_−)/π and A ≃ A_inter−bs + A_intra−bs + A_inter−fs. (3.20) The inter-bs contribution is given by Eq. (3.21), cf. Eq. (3.14), with F(X) in Eq. (3.15) and the Bloch-Grüneisen scales T^(±)_inter−bs = 2c_L k_±. Intra-bs processes imply the contribution in Eq. (3.22), which involves the overlap matrix element (3.10) and the Bloch-Grüneisen scale T_intra−bs = c_L(k_+ − k_−). Finally, inter-fs contributions are given by Eq. (3.23) with T_inter−fs = c_L(k_+ + k_−). We here used I_{k_+,−k_−} = I_{k_+,k_−}, which holds because the radial spinor eigenfunctions Y_k(ξ) only depend on |k|.
Collecting all terms, the resistivity contribution ρ_j follows as ρ_j = ρ_inter−bs + ρ_intra−bs + ρ_inter−fs. (3.24) With the reference scale ρ_0 in Eq. (3.18), we obtain the explicit expression in Eq. (3.25). From Eq. (3.24), the contributions from different backscattering channels simply add up and Matthiessen's rule [36] seems to be valid. However, Matthiessen's rule is not valid for the two different inter-bs processes related to 2k_+ and 2k_− backscattering, which cannot be treated separately because of the factor 1/(v_+ + v_−)² in ρ_inter−bs. We stress that in Eq. (3.25), the quantities k_± and v_±, and thus also the overlap integral I_{k_+,k_−} and the various Bloch-Grüneisen temperatures, depend on the specific subband under consideration, in particular on the angular momentum j.
In general, the scattering channel with the smallest of the above Bloch-Grüneisen scales (denoted by T_bBG) dominates the low-temperature resistivity. In particular, ρ_j ∝ T for T ≫ T_bBG while ρ_j ∝ e^{−T_bBG/T} for T ≪ T_bBG. In many cases of interest, T_bBG can be well below the inter-bs scale T_BG. The low-temperature resistivity is thus dominated by those subbands which allow for intra-bs processes.
IV. TRANSPORT OBSERVABLES
In this section, we describe our results for transport observables. In Sec. IV A, we consider the two-terminal conductance for an ideal WSM nanowire in the zerotemperature limit, where phonons are frozen out. The conductance is then directly determined by the total number of transport channels at the Fermi level. In Sec. IV B, we present results for the phonon-induced resistivity as obtained from the Boltzmann theory in Sec. III.
A. Conductance of ideal WSM nanowires
We first consider the two-terminal linear magnetoconductance of a WSM nanowire without disorder and in the absence of electron-phonon interactions, assuming perfectly adiabatic contacts between the nanowire and the attached source and drain electrodes. This problem can be described by the Landauer-Büttiker scattering approach [52], which implies that the two-terminal conductance G_0 is given by [10,12,13] G_0(µ, Φ, α) = N e²/h, (4.1) where N = N(µ, Φ, α) is the number of transport channels at the Fermi level, which coincides with the number of positive Fermi momenta. The conductance in Eq. (4.1) then follows directly from the band structure in Sec. II. We note that G_0 has been studied before for WSM nanowires with boundary conditions corresponding to α = 0 [10,12,13]. Our results are consistent with those works and extend them to arbitrary values of α.
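A minimal sketch of how Eq. (4.1) can be evaluated from a computed band structure is shown below: for each subband one counts the positive Fermi momenta at the chemical potential and sums over subbands. The toy subband dispersions used here are illustrative stand-ins for the numerically obtained E_{k,j,p}, not the actual spectrum of Sec. II.

```python
import numpy as np

def count_positive_fermi_momenta(eps_on_grid, k_grid, mu):
    """Count sign changes of eps(k) - mu on the k > 0 half-axis for one subband."""
    mask = k_grid > 0
    f = eps_on_grid[mask] - mu
    return int(np.sum(f[:-1] * f[1:] < 0))

def conductance_quanta(subbands, k_grid, mu):
    """N = total number of positive Fermi momenta; G0 = N e^2/h."""
    return sum(count_positive_fermi_momenta(eps, k_grid, mu) for eps in subbands)

# Toy subbands as stand-ins for the computed dispersions (assumed shapes):
k = np.linspace(-2.0, 2.0, 2001)
subbands = [0.1 + 0.5 * k**2,            # single-valley band
            -0.2 + 0.3 * (k**2 - 1)**2]  # two-valley ("mexican hat") band
print(conductance_quanta(subbands, k, mu=0.0))  # number of channels N at the Fermi level
```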
We illustrate the dependence of G 0 on the magnetic field in Fig. 6, both for chemical potential µ = 0 and various α (left panel), and for α = π/4 and several values of µ (center panel). The number N , and thus G 0 , jumps in discrete units upon changing Φ. The addition (or removal) of one pair of Fermi points to (from) the Fermi surface implies conductance steps of size ∆G 0 = ±e 2 /h from Eq. (4.1). We also see steps with ∆G 0 = ±2e 2 /h, where a two-valley band with two pairs of Fermi points is added or removed.
The flux dependence shown in Fig. 6 reveals that conductance steps occur with a typical spacing of order ∆Φ ≈ 1. To rationalize this observation, we recall that the Fermi arc dispersion depends on the Aharonov-Bohm phase through the shift j → j +Φ, see Eq. (2.14). Changing Φ → Φ + 1 shifts the sequence of surface subbands by one unit. In a surface-dominated regime, conductance variations thus have the (approximate) period ∆Φ ≈ 1. Similar features have been experimentally observed in Dirac semimetal wires [41,42].
From the left panel of Fig. 6, we observe that the boundary angle α has a major impact on the conductance. This strong sensitivity of G 0 on a boundary parameter is consistent with the fact that for the parameters in Fig. 6, we mainly have surface states at the Fermi level. In our model, the phenomenological parameter α encodes the surface feature of the WSM material. This sensitivity thus indicates that the surface structure of the material can strongly influence the conductance.
The rich band structure exemplified in Fig. 3 also implies that the two-terminal conductance is not a monotonic function of the magnetic flux. In an infinite WSM, a negative magnetoresistance is expected when E ∥ B, as a direct consequence of the chiral nature of the lowest Landau levels. In our cylindrical geometry, the spectrum is qualitatively very different from the bulk case, hence one may expect a different behavior. Indeed, as seen in the left panel of Fig. 6 for 0 ≤ α < π/2, the magnetoconductance shows a non-monotonic behavior with a minimum at Φ ≈ Φ_min(α), even for the clean case under consideration, and strongly depends on the surface parameter α. This non-monotonicity of the magnetoresistance is a manifestation of the predominance of the surface over the bulk transport in this geometry. Interestingly, the value of Φ_min can be determined by an approximate fit of G_0(Φ) to a third-order polynomial function. For the conductance curves shown in the left panel of Fig. 6, we observe that Φ_min is linked to the boundary angle by the empirical relation α ≈ 0.28Φ_min − 0.01Φ_min². By determining the position of the magnetoconductance minimum, one can thus infer information about α from transport measurements, at least in the parameter regime under study here.
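The procedure described above, fitting G_0(Φ) to a third-order polynomial, locating its minimum Φ_min, and applying the quoted empirical relation for α, can be sketched as follows. The synthetic G_0(Φ) data and the fitting details below are assumptions for illustration only.

```python
import numpy as np

# Synthetic stand-in for measured or computed G0(Phi) near its minimum:
Phi = np.linspace(0.0, 4.0, 41)
G0 = 3.0 - 1.2 * Phi + 0.4 * Phi**2 + 0.05 * np.random.default_rng(0).normal(size=Phi.size)

# Fit a third-order polynomial and locate the minimum of the fitted curve.
coeffs = np.polyfit(Phi, G0, deg=3)
p = np.poly1d(coeffs)
crit = p.deriv().roots
crit = crit[np.isreal(crit)].real
Phi_min = min((x for x in crit if 0 <= x <= Phi.max()), key=p)

# Empirical relation quoted in the text (valid in the regime studied there):
alpha_est = 0.28 * Phi_min - 0.01 * Phi_min**2
print(Phi_min, alpha_est)
```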
In analogy to the stepwise dependence on the flux, we also find conductance steps when varying µ at fixed magnetic flux, as shown in the right panel of Fig. 6 for several values of α. For α = 0, this parameter region was identified in Ref. [12], via the conductance steps, as the regime in which surface states dominate transport. Our results confirm this scenario. At the same time, we observe that a finite value of the boundary angle α can dramatically change the low-temperature transport properties. In fact, only for special values of α do we obtain insulating behavior at zero magnetic field and T ≪ v/R. For generic α, the two-terminal conductance is finite and can even become large. This observation again highlights the importance of non-universal surface physics in this geometry.
Finally, we note that even though we have a finite two-terminal conductance G_0, the local resistivity ρ vanishes in the absence of phonon-induced (or other) backscattering processes.
B. Phonon-induced resistivity
We here discuss our results for the phonon-induced longitudinal magnetoresistivity (3.4) obtained in Sec. III using the semiclassical Boltzmann approach. We start by illustrating the α-dependence of ρ for fixed chemical potential µ = 0 and temperature T = 0.1c L b in Fig. 7. While it is not possible to experimentally change the boundary angle α in a given device, Fig. 7 shows that the resistivity strongly depends on α. Typically, with increasing α, 1D subbands with different j fall below the Fermi level one by one. As a consequence, the number N increases and the resistivity tends to become smaller according to Eq. (3.4). Once a new subband becomes just accessible, the corresponding resistivity contribution will become very large because of the smallness of the Fermi velocity and of the Fermi momentum in this limit. From Eq. (3.4), we see that such a contribution makes little difference as long as other subbands with finite ρ j are present. The dependence of ρ on α (or other parameters) thus remains smooth even when N changes, with an important exception discussed below.
For the parameters corresponding to the left panel in Fig. 7, where Φ = 1/2, only j > 0 bands with a single pair of Fermi points contribute. The expected smooth decrease of ρ(α) with increasing α is observed. In particular, for small α, there are no bands at the Fermi level and thus ρ → ∞. On the other hand, for α → π/2, the resistivity becomes extremely small since N increases to very large values. The right panel of Fig. 7 shows that for Φ = 2, the α-dependence of the resistivity is more complex. In a finite window around α ≈ π/8, N vanishes and ρ → ∞. For α ≳ π/8, only j > 0 bands with a single pair of Fermi points are present, and ρ(α) shows a smooth decrease again. For α ≲ π/8, we have contributions from subbands with j = −1/2 and j = −3/2. At a critical value of α slightly above π/16, a transition from one to two pairs of Fermi points takes place within the two-valley subband with j = −1/2. As detailed below and in App. D, such a transition causes an abrupt and very large resistivity increase, as seen in Fig. 7. This prominent feature arises because intra-node backscattering processes become possible only when more than one pair of Fermi points is present, see Sec. III D. Such processes dominate the resistivity at low temperatures.
Next, Fig. 8 shows the magnetic field dependence of the resistivity. Let us first discuss the case µ = 0 (left panel). We again see that ρ(Φ) is a smooth curve except for an abrupt resistivity drop near Φ ≈ 6. Recalling the logarithmic scales, the resistivity increase is very steep for small Φ. Again, the jump-like behavior at Φ ≈ 6 takes place at the transition point from two to one pairs of Fermi points within the two-valley subband with j = −1/2. For large Φ, we observe that ρ(Φ) also shows variations governed by the Aharonov-Bohm scale ∆Φ ∼ 1, see Sec. IV A. For µ = 0.1vb (right panel in Fig. 8), we find similar features.
We now turn to Fig. 9, which shows the µ-dependence of ρ. While for Φ = 2 (left panel), no abrupt resistivity changes occur in the shown chemical potential range, such behavior is found for Φ = 4 (right panel) near µ = µ_c ≈ −0.136vb. We can trace this resistivity change to the two-valley subband with j = −1/2. For µ < µ_c, this band contributes a single pair of Fermi points. For µ > µ_c, on the other hand, we get two pairs of Fermi points. At the transition, µ ≈ µ_c, the resistivity exhibits a sharp increase. We discuss this mechanism in some detail in App. D for a simple toy model dispersion. For µ → µ_c from above, the Bloch-Grüneisen temperature for intra-bs processes sets the relevant scale, T_bBG = T_intra−bs = c_L(k_+ − k_−), see Sec. III D. When approaching the transition from the other side, however, only inter-bs processes can take place, with T_BG = 2c_L k_+. As a consequence, the resistivity is much larger for µ > µ_c. We note that the linearized band structure used in Sec. III D is not applicable for µ → µ_c. However, while the precise µ-dependence of ρ is expected to be continuous when going beyond the linearized band structure, the large low-temperature resistivity changes predicted here should be robust.
Finally, we briefly turn to the temperature dependence of ρ, which is shown for µ = 0 and different Φ in Fig. 10.
For T ≳ T_b = 2c_L b, we find a universal ρ ∝ T dependence, but for T → 0, the resistivity becomes exponentially small since all phonon backscattering mechanisms are frozen out in that limit.
V. CONCLUSIONS
In this work, we have discussed magnetotransport in a cylindrical WSM nanowire. Our analysis includes the effects of a magnetic flux threading the wire (via the Aharonov-Bohm flux Φ) and the consequences of a finite curvature of the Fermi arc (via the boundary angle α). We have presented detailed results for the band structure, in particular how the dispersion of Fermi arc states depends on Φ and α. The magnetic flux is here effectively captured by the replacement j → j + Φ, where j is the half-integer angular momentum of the Fermi arc state. Importantly, we have taken into account the electron-phonon interaction via the deformation potential. We have focused on phonon modes with zero angular momentum, since for nanowires deposited on a substrate, phonon modes with finite angular momentum are expected to be gapped.
Our analysis shows that the phonon-induced resistivity contains rich information about the underlying physics of the WSM material. The resistivity strongly depends on the boundary angle α and on the magnetic flux parameter Φ. We find that large and abrupt changes of the resistivity arise because of the mexican hat shape of the dispersion for two-valley subbands, where a change of the chemical potential can induce a transition between one vs two pairs of Fermi points. Since in the case of two pairs of Fermi points intra-node backscattering processes with small momentum transfer are possible, a much larger low-temperature resistivity is obtained than for the case with a single pair of Fermi points, where such processes are not available.
Comparing our results for WSM nanowires to the case of conventional quantum wires [22-29], we find a noteworthy difference. Even though it is difficult to quantify the impact of the chiral anomaly on the phonon-induced magnetoresistivity in this finite-size wire geometry, the observed strong sensitivity of the resistivity to a boundary condition parameter is in marked contrast to the conventional setting and can be rationalized by the crucial role of Fermi-arc surface states.
Our work also points to several topics of interest for future studies: (i) For freely suspended WSM nanowires, phonon modes with finite angular momentum have to be included. In particular, flexural modes with l = ±1 will be the energetically lowest modes [35]. One then has to account for scattering processes connecting subbands with different angular momenta. (ii) Similarly, at higher energy scales and/or very large nanowire radius, the restriction to a single radial band for given angular momentum j has to be lifted even when keeping only l = 0 phonon modes. One may then encounter more than two pairs of Fermi points at fixed angular momentum j, and many additional scattering processes beyond those considered in Sec. III become possible. (iii) The above two points are important also for the proper description of nonequilibrium transport beyond the linear response regime considered here. (iv) In the present work, we have studied type-I WSM materials. In type-II WSM materials, one has (over-)tilted Dirac-Weyl cones with interesting analogies to black hole physics [53]. In such a setting, phonons may give spectacular effects, cf. Ref. [54]. (v) At very low temperatures, disorder effects will dominate the resistivity in real samples. While the zero-field resistivity of disordered WSM nanowires (without phonon effects) has been studied in Ref. [14], the magnetoresistivity has not been analyzed in a systematic way so far. (vi) In this work, we have neglected the Zeeman effect due to the magnetic field. While one expects such effects to be subleading [48], for a precise comparison to future experimental results, it may be necessary to include them into the theoretical description. (vii) An interesting generalization of our work could study WSM materials with more than two Weyl nodes. For instance, if the material enjoys time-reversal symmetry at zero magnetic field, there will be at least four Weyl nodes. In the presence of phonons and in a magnetic field, one then expects a multitude of possible scattering processes. (viii) Our theory assumes angular momentum conservation. Indeed, we consider a cylindrical wire geometry, where the magnetic field is aligned both with the wire axis and with the direction of the separation between Weyl nodes in reciprocal space. A weak violation of these conditions could be handled by perturbation theory, but for stronger deviations, one has to resort to a generalization of our theory and a corresponding numerical study. (ix) Finally, apart from the real magnetic field, it may be of interest to study the consequences of pseudo-magnetic fields generated by straining the sample [55].
To conclude, we hope that our paper will stimulate future work along these or other directions.

Appendix B: Fermi arc surface states

In particular, one needs |j + Φ|/R ≫ |(j − Φ)x|/R², which in turn implies the condition (2.16).
We next compare the approximate dispersion relation Eq. (2.14) to the numerically exact band structure. In Fig. 11, we show the dispersion of Fermi arc states with j = ±1/2 for bR = 10 and several values of Φ and α. We find a fair agreement between numerical and analytical results. In accordance with Eq. (2.16), the deviations are more pronounced for j < 0 and Φ ≠ 0, but even for j = −Φ = −1/2, Eq. (2.14) provides a rather good approximation. Since the penetration length κ⁻¹ becomes very large near the arc ends, the analytical expression in Eq. (2.14), which assumes κR ≫ 1, becomes less accurate in these limits, in accordance with Fig. 11.
Appendix C: Solution of the Boltzmann equation
We present here the derivation of Eqs. (3.14) and (3.20) for one and two pairs of Fermi points, respectively. Following Ref. [37], we begin by rewriting the coefficient A in Eq. (3.13) in the form of Eq. (C1), with the auxiliary function F(ε, ε′, ω) defined in Eq. (C2). At low temperatures, the momentum integrations in Eq. (C2) can be restricted to the vicinity of the Fermi points.
Let us first consider the case of a single pair of Fermi momenta, see Sec. III C. Writing k = sk_F + δk and k′ = s′k_F + δk′ with s, s′ = ± and |δk|, |δk′| ≪ k_F, we first linearize the dispersion relation, ε_{±k_F+δk} − µ ≈ ±v_F δk. We then have backscattering contributions to Eq. (C2) when k and k′ are near opposite Fermi points (s = −s′), and forward scattering contributions when k and k′ are near the same Fermi point (s = s′). The forward scattering terms are strongly suppressed by the factor (v_k − v_{k′})² ∝ (δk − δk′)² in Eq. (C2), and they are always neglected in what follows. With v_k ≈ sv_F, the backscattering contributions follow by approximating W(k, k′) ≈ W(k_F, −k_F) = W(−k_F, k_F) ≡ W_bs. Since the k-dependence of the radial eigenfunctions Y_k(ξ) arises only through m_k, which is an even function of k, we have I_{k,−k} = I_{k,k}, and the normalization in Eq. (3.10) implies I_{k,k} = 1. Thus, with W_bs = 4πZ v_{k_F}² from Eq. (3.9), we obtain the backscattering contribution to F given in Eq. (C3). Using in Eq. (C1) the auxiliary thermal integral identity (C4) of Ref. [37], whose right-hand side is proportional to ω²/sinh²(βω/2), we finally arrive at Eq. (3.14). The above approximations also imply C ≃ v_F/π from Eq. (3.13).
Next we turn to a two-valley band with the Fermi level adjusted to allow for two pairs of Fermi momenta at k = ±k_γ with γ = ±, see Sec. III D and Fig. 5. The symmetry ε_k = ε_{−k} then implies that the group velocity at k ∼ sk_γ is given by v_{s,γ} = sγ v_γ (where s = ±), with the positive Fermi velocities v_+ and v_−. Linearizing the dispersion relation for k ≈ sk_γ, contributions to Eq. (C2) from the three types of scattering processes illustrated in Fig. 5 arise. We find F(ε, ε′, ω) ≈ F_inter−bs + F_intra−bs + F_inter−fs, where, in analogy to the 2k_F backscattering result (C3), inter-node backscattering processes give a contribution F_inter−bs ∝ Zv² Σ_{γ=±} δ(ω − 2c_L k_γ).
Intra-node backscattering processes produce the term F_intra−bs, which involves the overlap matrix element I_{k,k′} of Eq. (3.10), and inter-node forward scattering contributions give the term F_inter−fs in Eq. (C8). Inserting the above results into Eq. (C1), we arrive at Eq. (3.20).
Appendix D: Abrupt resistivity changes
To demystify the jump-like behavior of the resistivity reported in Sec. IV B, we consider a toy model for a two-valley subband (with v = b = 1) and analyze how the resistivity depends on the chemical potential µ < 0. For µ > µ_c = −1, there are N = 2 pairs of Fermi points, ±k_±, with k_± = √(1 ± |µ|) and respective Fermi velocities v_± = 2√(1 ± |µ|). On the other hand, for µ < µ_c, there is only a single pair (N = 1), ±k_F, with k_F = k_+ and v_F = v_+. Therefore, according to Eq. (3.25), for µ > µ_c, the dominant resistivity contribution comes from intra-bs processes with Bloch-Grüneisen temperature T_intra−bs = c_L(k_+ − k_−). For µ < µ_c, instead, only inter-bs processes are possible and the relevant Bloch-Grüneisen temperature is T_inter−bs = 2c_L k_+. The resistivity is thus parametrically larger on the N = 2 side since intra-bs processes are then possible, which are not available on the N = 1 side. This gives rise to a large jump of the resistivity when µ crosses the critical value µ = µ_c, as illustrated in Fig. 12.
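A small numerical illustration of this toy model is given below. The functional form of the dispersion is an assumption, ε_k = −|k² − 1| in units v = b = 1, chosen so that it reproduces the Fermi momenta and velocities quoted above; the code then evaluates the relevant Bloch-Grüneisen scales on both sides of µ_c.

```python
import numpy as np

cL = 0.01  # sound velocity in units of v, using c_L/v ~ 0.01

def fermi_points(mu: float):
    """Fermi points of the toy two-valley band, assuming eps_k = -|k^2 - 1| (v = b = 1).

    This functional form is an assumption consistent with the Fermi momenta and
    velocities used in Appendix D; mu < 0 is required.
    """
    if mu >= 0:
        raise ValueError("toy model is analyzed for mu < 0")
    k_plus = np.sqrt(1 + abs(mu))
    v_plus = 2 * k_plus
    if mu > -1:                       # two pairs of Fermi points (N = 2)
        k_minus = np.sqrt(1 - abs(mu))
        return (k_plus, k_minus), (v_plus, 2 * k_minus)
    return (k_plus,), (v_plus,)       # single pair (N = 1) for mu < mu_c = -1

for mu in (-0.5, -1.5):
    ks, vs = fermi_points(mu)
    if len(ks) == 2:
        print(mu, "intra-bs scale:", cL * (ks[0] - ks[1]), "inter-bs scale:", 2 * cL * ks[0])
    else:
        print(mu, "only inter-bs scale:", 2 * cL * ks[0])
```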
We then conclude that the abrupt resistivity changes observed in Sec. IV B originate from transitions between one and two pairs of Fermi points within a two-valley band. | 2021-10-12T01:34:09.941Z | 2021-10-11T00:00:00.000 | {
"year": 2021,
"sha1": "e1d8cf590b301b938b4c3bb07eea9b269ef2963b",
"oa_license": null,
"oa_url": "https://openaccess.city.ac.uk/id/eprint/27127/1/2110.05149.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e1d8cf590b301b938b4c3bb07eea9b269ef2963b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
54812372 | pes2o/s2orc | v3-fos-license | Effects of molasses level in a concentrate mixture on performances of crossbred heifer calves fed a basal diet of maize stover
This study was conducted to evaluate the growth performance and feed intake of crossbred (Boran x Holstein Friesian) female calves fed different levels of molasses in a concentrate mixture, using 24 yearling calves with an average initial weight of 142.4±20.7 kg, and lasted for 90 days. The calves were assigned to treatments containing 0 (T1), 15 (T2), 30 (T3) and 50% (T4) molasses, which replaced wheat bran in the concentrate mixture, in a randomized complete block design with six blocks of four animals. The dry matter (DM) degradability was determined by incubating 3 g of feed samples in fistulated steers fed natural pasture hay ad libitum supplemented with 2 kg of concentrate. The total DM and organic matter (OM) intake for the T2 and T3 diets were higher (P< 0.05) than for calves fed the T1 and T4 diets. The stover DM and OM intake for the T2 and T3 diets were higher (p<0.05) than for the other treatments. The highest (p<0.05) crude protein intake was observed in calves fed the T3 diet. Metabolizable energy (ME) intake was higher (P<0.05) for calves fed the T2 and T3 diets. Calves fed the T2, T3 and T4 diets had higher average daily gain than those fed the T1 diet. The DM degradability after 4, 8, 24, 48, and 96 h of incubation was higher (P < 0.05) for T4 than for T1. Based on the intake of DM, OM and ME and the growth performance, 15 and 30% molasses could be used as a replacement for wheat bran in the ration of heifers fed maize stover with good performance.
INTRODUCTION
The livestock industry is an important and integral part of the agriculture sector in Ethiopia. Livestock farming is vital for the supply of meat and milk and serves as a source of additional income both for smallholder farmers and livestock owners (Ehui et al., 2002). Livestock are fed with diverse feed resources in Ethiopia. The major feed resources are crop residues and grass hay, which contain poorly digestible nutrients. To ensure better body condition of the animals in such situations, it is advisable that additional sources of readily fermentable carbohydrate and nitrogen be included in the diet of ruminants, thereby improving the utilization of crop residues, which is mainly attained through the supply of energy and nitrogen to rumen microbes (Osuji et al., 1995). Results of studies by Van Soest (1988) and Zhang et al. (1995) have indicated that crop residues are low in available nutrients, have long lag times and show slow microbial degradation.
Supplementation of ruminant animals fed on low quality roughages with carbohydrate and protein feeds such as molasses-urea could be used to improve the digestibility and bioavailability of nutrients (Dass et al., 1996). Previous efforts regarding crossbred calves focused on improving the feeding regime of pre-weaned calves to increase performance (Tadesse and Yohannes, 2003; Tadesse et al., 2004). In addition, post-weaning calf management is crucial, because it determines the overall ability of an animal to attain an early and long productive and reproductive life. It is also important to ensure the recommended body weight at maturity. However, it is difficult to achieve such recommendations in areas where animals depend on crop residues. For example, maize (Zea mays) stover is the major feed source in Adami Tullu district (Tesfaye et al., 2001), which makes it difficult to meet the standard recommendations. Maize stover is too poor in quality to allow sufficient nutrient intake to support a potential rate of weight gain in growing calves. One of the problems with regard to the utilization of crop residues is that farmers cannot afford to supplement with high quality concentrate feeds due to their high price. Therefore, it is important to use supplementary feeds which are available and affordable to farmers. One such ingredient is molasses, which is a relatively cheap source of energy and can replace conventional concentrate feeds like wheat bran to improve the feeding value of crop residues for ruminants. There is little information available with regard to crossbred calves from post-weaning until the age of maturity that evaluates the feeding of wheat bran and molasses in different proportions. Therefore, the current study was conducted with the objective of evaluating the growth performance of crossbred heifer calves fed different levels of molasses in a concentrate mixture.
Description of the study area
The study was conducted at Adami Tullu Agricultural Research Center, which is located 167 km south of Addis Ababa, Ethiopia, at an altitude of 1650 meters above sea level. The center is situated at a latitude of 7°9'N and a longitude of 38°7'E. The soil type is a fine sandy loam with sand, silt and clay in the proportions of 34, 48 and 18%, respectively. The average pH is 7.88 (ATARC, 1998).
Experimental animals and management
A total of 24 Borana x Holstein Friesian crossbred (25:75%) heifer calves born at Adami Tullu Agricultural Research Center, aged between 11 and 14 months and with an average weight of 142.4 ± 20.7 kg, were used for the experiment. Animals were drenched with 1200 g of a broad-spectrum anthelmintic (Albendazole) as recommended by the manufacturer and sprayed with acaricide before the commencement of the experiment. The calves were individually stall fed in a loose-housing barn with a corrugated iron roof and concrete floor.
Experimental feeds and treatment diets
The stover was manually harvested and chopped to 2 to 5 cm length using a tractor-operated chopper. Ingredients of the concentrate mixtures, in quantities assumed to be sufficient for the experimental period, were procured and thereafter stored carefully to protect them against rodents and to avoid any contamination. Wheat bran (WB), cotton seed cake (CSC), sugar cane molasses, urea, table salt and Bole (a mineral supplement) were the ingredients used to formulate the ration. The treatment rations were formulated to contain 0, 15, 30 and 50% molasses as a substitute for wheat bran. To make the treatment diets iso-nitrogenous, urea was included at 0, 1, 2 and 3% for T1, T2, T3 and T4, respectively. The ration formulation was assumed to meet the total protein (552 g CP/head/day) and metabolisable energy (40.86 MJ ME/head/day) requirements for maintenance and an expected growth rate of 750 g/head/day as recommended by Kearl (1982).
Experimental design and treatment groups
The experimental animals were blocked into six groups based on initial body weight and randomly assigned to one of the four treatment diets as indicated in Table 1.
A preliminary period of 14 days was given to allow adjustment of the growing animals to the diet. This was followed by a 90-day feeding period. The animals were fed maize stover ad libitum on an individual basis. Daily DM intake of the calves was calculated to be 3% of their live weight, and the amount of concentrate was calculated to be 40% of the daily DM intake. The daily amount of concentrate mixture was divided into two equal portions and provided individually at 8 AM and 4 PM. The design and size of the watering trough in each feeding pen were not adequate to provide water freely. Hence, animals were allowed to drink water twice a day from a watering trough in the feedlot constructed outside the pen. The amount of concentrate offered during the experiment was adjusted to the animals' body weight change on a weekly basis.
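As an illustration of the feeding protocol described above, the following short Python sketch computes the daily allotment for a calf of the average initial weight; the numbers follow directly from the 3% of live weight and 40% concentrate rules stated in the text.

```python
def daily_allotment(live_weight_kg: float):
    """Daily feed allotment following the experimental protocol:
    total DM offered = 3% of live weight, concentrate = 40% of that DM,
    split into two equal feeds (8 AM and 4 PM)."""
    total_dm = 0.03 * live_weight_kg
    concentrate = 0.40 * total_dm
    per_feed = concentrate / 2
    return total_dm, concentrate, per_feed

print(daily_allotment(142.4))  # average initial weight of the calves (kg)
```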
Voluntary feed intake and in vitro dry matter digestibility
The amounts of feed offered and refused by the animals were measured daily to calculate intake. Based on the OM intake and the in vitro organic matter digestibility (IVOMD) of the diets, total digestible organic matter intake was determined on an individual basis. The metabolisable energy contents of the feeds were estimated from in vitro organic matter digestibility as described by McDonald et al. (2002): ME (MJ/kg DM) = 0.016 x DOMD, where DOMD = digestible organic matter in the dry matter.
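The ME estimation can be illustrated with a short calculation. The sketch below assumes that DOMD (g digestible OM per kg DM) is obtained as the OM fraction of the DM multiplied by IVOMD, and applies the 0.016 coefficient quoted from McDonald et al. (2002); the example OM and IVOMD values are illustrative, not the study's measured values.

```python
def me_from_digestibility(om_fraction: float, ivomd_fraction: float) -> float:
    """Estimate ME (MJ/kg DM) as 0.016 * DOMD, with DOMD in g digestible OM per kg DM.

    DOMD is taken here as (OM fraction of DM) * (in vitro OM digestibility);
    the 0.016 coefficient is the one quoted from McDonald et al. (2002).
    """
    domd_g_per_kg = om_fraction * ivomd_fraction * 1000.0
    return 0.016 * domd_g_per_kg

# Example: a concentrate with 92% OM in the DM and an IVOMD of 75% (illustrative values)
print(me_from_digestibility(0.92, 0.75))  # ~11.0 MJ/kg DM
```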
In vitro dry matter digestibility was determined by the two-stage method developed by Tilley and Terry (1963). Rumen fluid was collected from three rumen-fistulated steers before the morning feeding. The steers were fed natural pasture hay ad libitum supplemented with 2 kg of concentrate once per day.
Live weight measurement
Live weights of the animals were recorded weekly in the morning before the daily meal. The animals were weighed on two consecutive days at the beginning and end of the experiment, and the averages of the two weighings were taken as the initial and final weights, respectively. The daily live weight gains were calculated as the difference between final and initial live weight divided by the number of experimental days.
In sacco DM degradability
The basal diet and concentrate feed samples were milled to pass through a 2 mm sieve for the in sacco DM degradability study. The DM degradability was determined by incubating 3 g of dry feed samples in fistulated steers fed natural pasture hay ad libitum supplemented with 2 kg of concentrate (55% wheat bran, 44% noug (Guizotia abyssinica) seed cake and 1% salt) once per day in the morning. To determine the DM degradability, the samples were incubated in the rumen for 4, 8, 24, 48, 72 and 96 h. After each incubation period, the bags were removed and hand washed under running tap water until the water became clear. To determine undegraded DM, two bags were dried at 65°C for 48 h. Washing loss was similarly determined by washing duplicate feed samples that were not incubated in the rumen. The duplicate bags were dried in the same way to determine the DM contents of the feed samples.
The degradability constants were determined using the exponential equation P = a + b(1 - e^(-ct)) as described by Ørskov and McDonald (1979), using the Neway Excel program (Chen, 1995), where P = DM degradability at time t. The degradation characteristics of the feed were defined as A = washing loss (readily soluble fraction); B = (a + b) - A, representing the insoluble but fermentable fraction; and c = the rate of degradation of B (Ørskov and Ryle, 1990). Potential degradation (PD) of DM was estimated as (A + B), while effective degradability (ED) was calculated according to Dhanoa (1988) using the formula ED = A + [Bc/(c + k)] at a rumen outflow rate (k) of 0.03 h^-1.
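The degradability model fitting can be sketched as follows: the exponential model P = a + b(1 - e^(-ct)) is fitted to in sacco disappearance data, and effective degradability is computed with the washing-loss convention described above at the outflow rate k = 0.03 h^-1. The incubation data and washing loss used below are illustrative values, not the study's measurements (the study used the Neway Excel program rather than Python).

```python
import numpy as np
from scipy.optimize import curve_fit

def degradation(t, a, b, c):
    """Orskov & McDonald model: P(t) = a + b * (1 - exp(-c t))."""
    return a + b * (1.0 - np.exp(-c * t))

# Illustrative in sacco DM disappearance data (%), not the study's actual values:
t_hours = np.array([4, 8, 24, 48, 72, 96], dtype=float)
dm_loss = np.array([22.0, 30.0, 48.0, 60.0, 65.0, 67.0])

(a, b, c), _ = curve_fit(degradation, t_hours, dm_loss, p0=[20.0, 50.0, 0.05])

# Effective degradability with A = washing loss, B = (a + b) - A, as defined in the text.
washing_loss = 25.0          # illustrative washing loss (%)
k_passage = 0.03             # rumen outflow rate per hour, as in the text
A = washing_loss
B = (a + b) - A
ED = A + B * c / (c + k_passage)
print(a, b, c, ED)
```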
Chemical analysis
Nutrient composition of the feeds offered and the refusals was analyzed for DM and ash according to AOAC (1990). Nitrogen (N) content was determined by the Kjeldahl method and CP was calculated as N x 6.25. Neutral detergent fiber (NDF), acid detergent fiber (ADF) and acid detergent lignin (ADL) were analyzed according to the method developed by Van Soest and Robertson (1985).
Statistical analysis
Data on live weight change, feed intake and in vitro digestibility were subjected to analysis of variance (ANOVA) using the general linear model (GLM) procedure of the Statistical Analysis System (SAS, 2001). Treatment means were separated by the least significant difference (LSD) test.
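For readers without access to SAS, an analogous randomized complete block ANOVA can be sketched in Python with statsmodels, modelling a response on treatment with block as a blocking factor. The data frame below is synthetic and purely illustrative, and LSD mean separation would require an additional pairwise comparison step.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Illustrative layout: 4 treatments x 6 blocks, one ADG value (g/day) per calf (synthetic data).
rng = np.random.default_rng(1)
data = pd.DataFrame({
    "treatment": np.repeat(["T1", "T2", "T3", "T4"], 6),
    "block": np.tile([f"B{i}" for i in range(1, 7)], 4),
    "adg": rng.normal(loc=[550]*6 + [650]*6 + [680]*6 + [640]*6, scale=40),
})

# Randomized complete block design: ADG modelled on treatment with block as a blocking factor.
model = smf.ols("adg ~ C(treatment) + C(block)", data=data).fit()
print(anova_lm(model, typ=2))
```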
Chemical composition of the experimental feeds
The chemical composition of the experimental feeds used in the study is presented in Table 2. The ash contents of the ingredients used in the study varied from 5.6% in wheat bran to 18.4% in molasses. The CP contents of the feeds ranged from 4.2% in molasses to 29.2% in cotton seed cake. The low CP content of molasses observed in the present study is in agreement with the work of Nega et al. (2006) and Zewdie (2010), who reported 3.5 and 3.99% CP, respectively. The CP content of maize stover obtained in the present study was comparable to the value of 5.6% CP reported by Nega et al. (2006) in the same district, but higher than the values (3.7, 2.66 and 3.5% CP) reported by Adunga et al. (1998), Yitaye (1999) and Zewdie (2010), respectively.

The ME values of the feeds in the study ranged from 8.2 MJ/kg DM in maize stover to 14.5 MJ/kg DM in molasses. The ME content of maize stover obtained in the present study is consistent with the values of 8.8 and 8.9 MJ/kg DM reported by Tesfaye and Musimba (2006) and Nega et al. (2006), respectively, but higher than the value (6.6 MJ/kg DM) reported by Zewdie (2010). The ME values for wheat bran, cotton seed cake and molasses observed in this study are similar to the values reported by Nega et al. (2006) and Zewdie (2010).
The fiber fractions (NDF, ADF) and lignin values of the concentrate feeds showed a decreasing trend from T1 to T4 across the treatments. The NDF fraction was highest for T1 (50% wheat bran) and lowest for T4 (0% wheat bran), in which wheat bran was totally replaced. The decrease in fiber fractions of the concentrate feeds across the treatments can be related to the decreasing level of wheat bran. The NDF content in the present study ranged from 20.1% in T4 to 31.8% in T1. The observations by Miessner et al. (1991) indicated that higher values of NDF (above 55 to 60%) may affect the efficiency of the rumen environment and thus lead to a decrease in feed intake. The ADF content of the concentrate mixtures as assayed in the study ranged between 13 and 17%, which is less than the range of 19 to 21% recommended as ideal in ruminant diets (NRC, 1989).
In vitro organic matter digestibility (IVOMD) and ME contents of the concentrate mixtures used in this trial are indicated in Table 2. The IVOMD as assessed in the current study ranged between 72.79 and 78.58%. The IVOMD observed in T1 was lower compared with the other treatment groups. The concentrate mixtures which contained molasses had higher IVOMD, which may be related to the lower cell wall fractions observed as the level of molasses increased in the concentrate mixture.
Feed intake
The mean dry matter (DM) and nutrient intake of crossbred heifer calves fed different levels of molasses are presented in Table 3. The total DM and organic matter (OM) intake for the T2 and T3 diets were higher (P < 0.05) than for heifers fed the T1 and T4 diets. The stover DM and OM intake for the T2 and T3 diets were higher (p < 0.05) than for the other treatments. The DM intake from maize stover contributed 58.71 to 64.81% of the total DM intake of the diet.
In the present study, the level of molasses in T2 and T3 improved (p < 0.05) stover DM and OM intake. The lower feed intake in T1 may be related to the lower DM degradability, as feed intake is related to the digestibility of feeds, which in turn affects the rate of feed passage and intake. Adding molasses to the concentrate mixture by replacing wheat bran, as in T2 and T3, increased DM intake in the current study, which is in agreement with the observations of Broderick and Radloff (2004), who reported increased total DM intake due to the replacement of corn grain with liquid molasses in the feeding of dairy cows. The higher DM intake observed in T2 and T3 may be related to the moderate levels of soluble carbohydrate from molasses, which increase the amount of readily fermentable energy in the diet. This is in agreement with the reports of Sean et al. (2005). Studies (Rooke et al., 1987; Khalili and Huhtanen, 1991) indicated an improvement in the utilization of non-protein nitrogen (NPN) in the rumen, which in turn could increase the outflow rate of the feed consumed and result in an increased feed intake from maize stover. The increase in DM intake at 5.38 and 10.56% molasses in the diets is in agreement with the observations of Petit and Veira (1994) and Petit et al. (1994), who obtained an increase in total DM intake of silage-based diets when molasses was incorporated at 7.5 and 15% of the total feed DM.
The increase in OM intake for T2 and T3 is in agreement with the findings of Lawer-Neville et al. (2006), who reported that steers fed 10% dietary concentrate separator by-product (desugared molasses) consumed more OM than non-supplemented steers fed either corn stover or alfalfa-based diets. Molasses is usually used as a supplement for low-quality forages to stimulate intake (McLennan et al., 1981) and improve animal performance (Stephenson and Bird, 1992).
The present study indicates that the IVOMD of the concentrate mixtures improved with the inclusion of molasses. Higher (p < 0.05) stover DOM intake was observed for the T2 and T3 diets, which may be related to the inclusion of molasses, which has more favorable effects on the ruminal environment, especially for fiber digestion, compared with starch; this is in agreement with the observations of Broderick and Radloff (2004) and Broderick et al. (2008).
The highest (p < 0.05) CP intake was observed in calves fed the T3 diet compared to the other treatments. The ME intake was significantly higher for calves fed T2 and T3. The higher CP intake in T3 and the higher ME intake observed in T3 and T2 were expected due to the higher feed DM intake observed in both treatments. The ME intake across the treatments was sufficient to meet the daily ME requirement (40.88 MJ/head/day) for crossbred heifers with 150 kg live weight and a daily weight gain of 750 g/head (Kearl, 1982). The ME intakes of 44.83 and 45.71 MJ/head/day in T2 and T3 were higher than the recommended ME requirement. As recommended by Kearl (1982), the CP requirement of the experimental animals was 552 g/head/day for 750 g weight gain.
Growth rate and feed utilization
The body weight changes of crossbred heifer calves fed different levels of molasses in a concentrate mixture and their feed utilization efficiency are given in Table 4. Supplementation with different levels of molasses had a significant effect (P < 0.05) on the average daily gain (ADG) of crossbred calves compared with calves fed the concentrate mixture without molasses. Calves fed the T2, T3 and T4 diets had higher (P < 0.05) ADG when compared to that of T1.
The growth performance of an animal is directly related to the protein and energy obtained from a given ration. Hence, the higher ADG observed in T2 and T3 can be explained by the higher daily CP and ME intake. The lower daily gain in T4 compared to T3 may be related to the higher molasses content (216 g per kg of diet) in T4, which is above the optimum level reported by Cullison and Lowery (1987), who indicated the optimum level of molasses to be from 100 to 150 g per kilogram of diet. The same authors also indicated that a further increase may upset rumen microbial activity and reduce the feeding value of the basal diet. In addition to the supplementation of molasses to stover, the availability of readily fermentable carbohydrate in the molasses together with the presence of protein sources from cottonseed cake in the rumen might have increased the synthesis of microbial protein or the yield of undegradable protein in the lower gut, as suggested by Osuji et al. (1995). Thus, a higher DM intake and the additional ME supply due to the increased DOM intake may explain the significant increase in daily live weight gain found in response to supplementation of molasses.
Calves in T3 and T4 consumed a lower amount of OM per kg of weight gain compared to calves in T1 and T2. Calves fed on higher levels of molasses in the concentrate had a higher feed conversion ratio when compared to the calves fed concentrate without molasses. The current result agrees with the reports of Jemal et al. (2004) and Chala et al. (2005), who reported increased feed conversion efficiency for Horro rams and Horro steers, respectively, as the quantity of molasses increased as a substitute for maize grain.
In sacco dry matter degradability
The results of the in sacco DM degradability and the degradability characteristics of the feeds are presented in Table 5. The DM degradability after 4, 8, 24, 48, and 96 h of incubation was higher (P<0.05) for T4 than T1. After 96 h of incubation, the highest (p<0.05) DM degradability was observed for T4 and the lowest for T1. The DM degradability of the concentrate mixtures in the present study was higher (P<0.05) than that of maize stover, and the result is consistent with the reports of Alemu et al. (1991), who indicated that since agro-industrial by-products are rich in energy and/or protein and low in fiber, they have high digestibility when compared with fibrous feeds. The lower DM degradability observed in T1 and maize stover could be attributed to the higher NDF content, as in sacco DM degradability is negatively correlated with the level of NDF in a feedstuff (Vitti et al., 1999). The present observations are in accordance with the report of Broderick (2003), who reported that NDF concentration had a negative relationship with DM degradability. Generally, there is a negative correlation between the NDF concentration of forage and intake, due to the long residence of forage in the rumen for further mastication and fermentation by microorganisms (Jung and Allen, 1995), which results in low degradability/digestibility.
The highest (p<0.05) rapidly soluble DM fraction (A) was observed in T4 and the lowest in T1 and maize stover. The washing loss increased significantly (P<0.05) as the level of molasses increased in the concentrate mixture. The effective degradability was higher (P<0.05) in T4 than in T1 and maize stover. The in sacco degradability of a feed can be affected by many factors, such as sample particle size and the procedure and methods followed in washing, and it is also strongly affected by the chemical composition and the method used to process the feed (Olivera, 1998). Generally, the higher washing loss in T4 indicates the presence of a larger soluble fraction in the feed, while the lower washing loss in T1 and stover reflects their smaller soluble fractions. The higher degradation rate in T4 can be related to the higher proportion of molasses, which had a significantly higher degradation rate than the other feeds used in the present study.
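The degradability characteristics reported in Table 5 (washing loss A, slowly degradable fraction B, degradation rate c, and effective degradability ED at an outflow rate of 0.03 h-1) are conventionally linked by the exponential disappearance model p(t) = A + B(1 - e^(-ct)) and ED = A + Bc/(c + k). The short sketch below only illustrates how ED could be computed from fitted parameters; the numerical values used are placeholders and are not the values measured in this study.

```python
import math

def disappearance(t_hours, A, B, c):
    """Exponential model: DM disappearance (%) after t hours of incubation."""
    return A + B * (1.0 - math.exp(-c * t_hours))

def effective_degradability(A, B, c, k=0.03):
    """Effective degradability (%) at rumen outflow rate k (per hour)."""
    return A + (B * c) / (c + k)

# Placeholder parameter values (NOT the values measured in this study).
example_feeds = {
    "T1 concentrate": {"A": 30.0, "B": 55.0, "c": 0.05},
    "T4 concentrate": {"A": 40.0, "B": 50.0, "c": 0.08},
    "maize stover":   {"A": 20.0, "B": 45.0, "c": 0.03},
}

for name, p in example_feeds.items():
    ed = effective_degradability(p["A"], p["B"], p["c"], k=0.03)
    p96 = disappearance(96, p["A"], p["B"], p["c"])
    print(f"{name}: 96-h degradability ~ {p96:.1f}%, ED(k=0.03/h) ~ {ed:.1f}%")
```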
Conclusion
From this study, it can be concluded that supplementing molasses at 15 and 30% as a substitute for wheat bran in a concentrate mixture improved the intake of DM, OM, CP and ME and the ADG of female crossbred heifer calves. Increasing the level of molasses in the concentrate mixture improved the in vitro digestibility of the feed and the feed conversion efficiency of the calves. Therefore, there is an opportunity to increase the utilization of crop residues by substituting wheat bran with molasses, and the substitution of molasses for high-priced energy feeds can be considered under farmers' conditions in future studies.
Table 2 .
Dry matter, chemical composition (% of DM), in vitro organic matter digestibility and energy value of feed as used in the experiment .
in other parts of Ethiopia, respectively. The CP content of wheat bran as assessed in the present study was similar to the reports of Getinet (1998) and Tesefaye et al. (2001), who obtained 16.3 and 17.19% CP, respectively. Maize stover had the highest NDF and ADF contents of all ingredients used in the present study, which is comparable to the reports of Adunga et al. (1998), who obtained 78.9% NDF and 39.9% ADF.
Table 3 .
Mean DM and nutrient intakes of crossbred heifer calves fed different levels of molasses in concentrate mixture.
Means with different superscripts in the same row are significantly different (p<0.05). DM, dry matter; OM, organic matter; DOM, digestible organic matter; CP, crude protein; ME, metabolisable energy; *treatment descriptions are as indicated in Table 2.
Table 4 .
Live weights, average daily gain, and feed conversion efficiency of calves fed different levels of molasses in concentrate mixture.
ADG, Average daily weight gain; FCR, feed conversion ratio.
Table 5 .
In sacco DM degradability (%) and degradability characteristics of the experimental feeds.
Means with different superscript in the same row are significant (p<0.05).A, Washing loss; B, degradability of water insoluble; C, rate of degradation of water insoluble fraction; A + B, potential degradability; ED, effective degradability at out flow rate of 0.03 h -1 | 2018-12-11T04:43:32.757Z | 2013-01-31T00:00:00.000 | {
"year": 2013,
"sha1": "8577514cd3c32a4b9c556fbfd94aded6b48d01eb",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/JCAB/article-full-text-pdf/695D35A13980.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "8577514cd3c32a4b9c556fbfd94aded6b48d01eb",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
119662187 | pes2o/s2orc | v3-fos-license | A game theoretic approach to a network allocation problem
In this paper, we consider a network allocation problem motivated by peer-to-peer cloud storage models. The setting is that of a network of units (e.g. computers) that collaborate and offer each other space for the back up of the data of each unit. We formulate the problem as an optimization problem, we cast it into a game theoretic setting and we then propose a decentralized allocation algorithm based on the log-linear learning rule. Our main technical result is to prove the convergence of the algorithm to the optimal allocation. We also present some simulations that show the feasibility of our solution and corroborate the theoretical results.
Introduction
Recently, cooperative storage cloud models based on peer-to-peer architectures have been proposed as valid alternatives to traditional centralized cloud storage services. The idea is quite simple: instead of using dedicated servers for the storage of data, the participants themselves offer space available on their connected devices to host data from other users. In this way, each participant has two distinct roles: that of a unit that needs external storage to securely back up its data, and that of a resource available for the backup of data of other users. This approach has in principle a number of relevant advantages with respect to traditional cloud storage models. First, it eliminates the need for a significant dedicated hardware investment, so that the service should be available at an (order of magnitude) lower cost. Second, it overcomes the typical problems related to the use of a single external provider, such as security threats (including man-in-the-middle attacks and malware) or fragility with respect to technical failures.
In the wake of the successful peer-to-peer file sharing model of applications like BitTorrent and its lookalikes, the same philosophy may well be leveraged for a different but very similar application service like storage. Indeed, a slew of fledgling and somewhat successful startups are entering this market niche. Among the most noteworthy examples are the platforms Sia http://sia.tech and Storj https://storj.io (see [18] and [19] for some information).
Clearly, a completely decentralized peer-to-peer model must account for some challenging technical difficulties that are absent in a centralized cloud model. Firstly, security and privacy must be carefully implemented by ensuring end-to-end encryption resistant to attackers, as well as suitable coding to ensure recovery from the failure of some units. In addition, the model must account for the latency, performance, and downtime of average user devices. Although the above technical issues are challenging, they can be addressed with the right tools and architectures available with current state-of-the-art technology, and they will not be considered in this paper.
A core part of such a cooperative storage model is the mechanism by which users are made to interact, collaborate and share their storage commodity with each other. In the existing platforms, such as Sia and Storj, this part is worked out at the level of a central server to which all units are connected. To the best of our knowledge, the design of such a mechanism in a decentralized fashion has not yet been theoretically addressed and studied in the literature. Our contribution goes in this direction.
In this paper we present, accompanied by a rigorous mathematical analysis, a cooperative, fully distributed algorithm through which a network of units (e.g., computers) can collaborate and offer each other space for the backup of the data of each unit. In this model, there is no need for central supervision, and it can easily incorporate features that we want the system to possess depending on the application, for instance: enforcing structure on the way the data of each unit is treated (aggregated or rather disaggregated in the backup process), avoiding congestion phenomena in the use of the resources, or differentiating among resources on the basis of their reliability. A different version of the algorithm, without analytical results, was presented in [5].
We formulate the problem as an optimal network allocation problem where a population of units is connected through a graph and each of them possesses a number of items that need to be allocated among the neighboring units. Each unit in turn offers a certain amount of storage space where neighboring units can allocate their items. The optimal allocation is the one maximizing a given functional that depends on the allocation status of each unit and that incorporates the desired features we want the solution to possess.
In order to solve the optimization problem in a scalable, decentralized fashion, we cast the allocation problem into a game theoretic framework and design the algorithm using a learning dynamics. The use of game theory to solve distributed optimization problems and, more generally, in the design and control of large-scale networked engineering systems is becoming increasingly popular [7,9,11,10]. The basic idea is that of modeling system units as rational agents whose behavior consists in a selfish search for their maximum utility. The goal is to design both the agents' utility functions and a learning adaptation rule in such a way that the resulting global behavior is (close to) the desired one, the maximum in the specific case of optimization problems. There are typically two challenges to face. First, design agent utility functions that only use information present at the level of the single units and that lead to a game whose Nash equilibria contain the desired configurations. Second, design the learning mechanism through which the system converges to a desired Nash equilibrium.
For optimization purposes, an interesting strategy [10,13] is to design utility functions so as to yield a potential game whose potential coincides with the reward functional of the problem, and then consider the log-linear learning dynamics (also known as noisy best response) [2,11,12]. Under certain assumptions, this rule is known to lead to a time-reversible ergodic Markov chain whose invariant probability distribution is a Gibbs measure with energy function described by the potential and which (for a small noise parameter) has its peak on the maxima of the potential. In this paper we follow this road.
For a very general family of functionals having an additive separable form, namely functionals that can be expressed as sums of terms depending on the various units, we define a game by setting the utility function of each unit as simply the sum of those addends in the functional involving the unit itself and its neighbors, while the action set of a unit consists of the vectors describing the allocation among its various neighbors. The game so defined is easily shown to be potential, with potential given by the original functional. The game, however, possesses a key critical feature: because of the hard storage constraints of the various resources, units are not free to choose their actions as they want, but they are constrained by the choices made by other units. For instance, if a unit is saturating the space available in a certain resource, other units connected to the same resource will not be able to use it. In cooperative cloud storage models where resources are common users, this hard storage constraint is a very natural assumption and cannot be relaxed. This property is non-classical in game theory and has remarkable consequences on the structure of Nash equilibria and on the behavior of the best response dynamics, which is not guaranteed in general to approximate the optimum. Indeed, constrained equilibrium problems and convergence algorithms are widely studied in the literature, where they are known as Generalized Nash Equilibrium Problems ([3,4] and references therein).
The main technical contribution of this paper is to show that, despite these hard constraints and under mild technical conditions, a family of dynamics built around the log-linear learning rule converges to the desired solution. More precisely, by a careful analysis of the connectivity properties of the transition graph associated with the Markov process, we obtain two results: (i) if there is enough space for the allocation to take place, under the proposed algorithms all units will complete their allocation in finite time with probability one; and (ii) under a slightly stronger assumption, the allocation configuration converges, when the noise parameter approaches 0, to a maximum of the original functional. To the best of our knowledge, this analysis is new in game theory.
We want to remark that the types of functionals considered are typically nonconvex (even when relaxed to continuous variables), so that many algorithms for distributed optimization may fail to converge to the global maximum. In addition, the proposed algorithm presents a number of interesting features. The algorithm is decentralized and adapted to any predefined graph. For the cloud application we have in mind, the choice of the graph topology can be seen as a design parameter that allows one to control the computational complexity at the level of the units. It is asynchronous and it is robust with respect to temporary disconnection of units. Moreover, it is intrinsically open-ended: if new data or new units enter the system, a new run of the algorithm will automatically permit the allocation of the new data and, possibly, the redistribution of the data stored by the old units to take advantage of the new available space.
The remaining part of this section is dedicated to some literature review. In Section 2 we formally define the network allocation problem and recall some basic facts proven in [5] (in particular, a necessary and sufficient condition for the allocation problem to be solvable). We then introduce a family of functionals and define the optimal allocation problem. Section 3 is devoted to cast the problem to a potential game theoretic framework [16,14] and to propose a distributed algorithm that is an instance of a noisy best response dynamics. The main technical part of the paper is Section 4 where the fundamental results Theorem 6 and Corollary 12 are stated and proven. Theorem 6 ensures that the algorithm reaches a complete allocation with probability one, if a complete allocation is indeed possible. Corollary 12 studies the asymptotic behavior of the algorithm and explicitly exhibits the invariant probability distribution. Consequence of Corollary 12 is that in the double limit when time goes to infinity and the noise parameter goes to 0, the algorithm converges to a Nash equilibrium that is, in particular, a global maximum of the potential function. This guarantees that the solution will indeed be close to the global welfare of the community. Finally, Section 5 is devoted to the presentation of a set of simulations that show a practical implementation of the algorithm. Though we work out relatively simple examples, our simulations show the good properties of the algorithm, its scalability properties in terms of speed and complexity, and illustrate the effect of the parameters of the utility functions in the solution reached by the algorithm. A conclusion section ends the paper.
Related Work
Our problem fits into the wide class of distributed resource allocation problems. Among the many applications where such problems arise in a similar form to the one proposed in this paper, we can cite cloud computing [8], network routing [17], vehicle target assignment [1], content distribution [6], graph coloring [15]. The game theoretic approach to allocation problems and the consequent design of distributed algorithms has been systematically addressed in [13,10,11,12] where general techniques for the choice of the utility functions and of the dynamic learning rule have been proposed.
The model proposed in this paper, and the algorithm based on noisy best response dynamics, are inspired by this literature. A key aspect of our model that makes it different from the models treated in the above literature is the fact that resources have hard storage limitations. This is a natural feature of the distributed cloud storage problem considered in this paper and one that, to our knowledge, had not been previously analyzed.
The cooperative storage model
Consider a set X of units that play the double role of users who have to allocate externally a backup of their data, as well as resources where data from other units can be allocated. Generically, an element of X will be called a unit, while the terms user and resource will be used when the unit is considered in the two possible roles of, respectively, a source or a recipient of data. We assume units to be connected through a directed graph G = (X, E) where a link (x, y) ∈ E means that unit x is allowed to store data in unit y. We denote by N_x and N⁻_y, respectively, the out- and the in-neighborhood of a node. Note the important difference in interpretation in our context: N_x represents the set of resources available to unit x, while N⁻_y is the set of units having access to resource y.
We imagine the data possessed by the units to be quantized into atoms of the same size. Each unit x is characterized by two non-negative integers: • α_x is the number of data atoms that unit x needs to back up into its neighbors, • β_x is the number of data atoms that unit x can accept and store from its neighbors.
The numbers {α_x} and {β_x} will be assembled into two vectors denoted, respectively, α and β. Given the triple (G, α, β), we define a partial allocation state as any matrix W ∈ N^(X×X) that satisfies the following conditions: (P1) W_xy ≥ 0 for all x, y, and W_xy = 0 if (x, y) ∉ E; (P2) W_x := Σ_{y∈X} W_xy ≤ α_x for all x ∈ X.
(P3) W_y := Σ_{x∈X} W_xy ≤ β_y for all y ∈ X.
We interpret W_xy as the number of pieces of data that x has allocated in y under W. Property (P1) enforces the graph constraint: x can allocate in y iff (x, y) ∈ E. Property (P2) says that a unit cannot allocate more data than it owns, and, finally, (P3) describes the storage constraint at the level of units considered as resources. Whenever W satisfies (P2) with equality for all x ∈ X, we say that W is an allocation state. The set of partial allocation states and the set of allocation states are denoted, respectively, with the symbols W_p and W. We will say that the allocation problem is solvable if a state allocation W ∈ W exists.
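To make the definitions concrete, the following sketch checks properties (P1)-(P3) for a candidate matrix W given a triple (G, α, β). The dictionary-based encoding of the graph and of W, and all function names, are our own illustrative choices, not code from the paper.

```python
def is_partial_allocation(W, E, alpha, beta):
    """Check (P1)-(P3) for a candidate matrix W (dict of dicts: W[x][y] = atoms x stores in y)."""
    units = list(alpha)
    # (P1): non-negative entries, supported on the edge set E
    for x in units:
        for y, w in W.get(x, {}).items():
            if w < 0 or (w > 0 and (x, y) not in E):
                return False
    # (P2): a unit cannot allocate more atoms than it owns
    for x in units:
        if sum(W.get(x, {}).values()) > alpha[x]:
            return False
    # (P3): column sums respect the storage capacities
    for y in units:
        if sum(W.get(x, {}).get(y, 0) for x in units) > beta[y]:
            return False
    return True

def is_allocation(W, E, alpha, beta):
    """An allocation state satisfies (P2) with equality for every unit."""
    return (is_partial_allocation(W, E, alpha, beta)
            and all(sum(W.get(x, {}).values()) == alpha[x] for x in alpha))

# Tiny example: three units on a directed cycle, each storing all its data in its successor.
alpha = {1: 2, 2: 2, 3: 2}
beta = {1: 2, 2: 2, 3: 2}
E = {(1, 2), (2, 3), (3, 1)}
W = {1: {2: 2}, 2: {3: 2}, 3: {1: 2}}
print(is_allocation(W, E, alpha, beta))  # True
```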
Existence of allocations
The following result gives a necessary and sufficient condition for the existence of allocations. The proof, which follows from Hall's theorem, can be found in [5].
Theorem 1. Given (G, α, β), there exists a state allocation iff the following condition is satisfied: Σ_{x∈D} α_x ≤ Σ_{y∈N(D)} β_y for every subset D ⊆ X, where N(D) := ∪_{x∈D} N_x. (1)
We first analyze the existence of allocations in the simple case of a complete network.
In a complete network (N_x = X \ {x} for every x), we have N(D) = X for all D such that |D| ≥ 2. Therefore, in this case, condition (1) reduces to requiring that α_x ≤ Σ_{y≠x} β_y for every x ∈ X and that Σ_{x∈X} α_x ≤ Σ_{y∈X} β_y (condition (2)). We now focus on the special but interesting case when all units have the same amount of data to be stored and the same space available, namely, α_x = a, β_x = b for every x ∈ X. In this case, condition (2), which characterizes the existence of allocations for the complete graph, simply reduces to a ≤ b. In this case, among the possible allocation states there are those where each unit uses only one resource: given any permutation σ : X → X without fixed points, we can consider the state W_σ with (W_σ)_xy = a if y = σ(x) and (W_σ)_xy = 0 otherwise. (3) In general, an allocation state such as W_σ in (3) of the example above, where each unit uses just one resource and each resource is only used by one unit, is called a matching allocation state. The existence of matching allocation states is guaranteed for more general graphs than the complete ones.
Proof (ii) ⇒ (i) is trivial. Notice that (iii) is, in this case, equivalent to condition (1). Therefore (i) ⇒ (iii) follows from Theorem 1. What remains to be shown is that (iii) ⇒ (ii). To this end, notice that when (iii) is verified and we consider the bipartite graph G̃ = ( Hall's theorem guarantees the existence of a matching in G̃ complete on the first set, namely a permutation σ : X → X such that (x, σ(x)) ∈ Ẽ for every x ∈ X. The corresponding state allocation W_σ defined as in (3) is a matching allocation state.
We can now extend the result contained in Example 1.
Corollary 3. Suppose that G = (X , E) is any undirected regular graph and that α x = a, β x = b for every x ∈ X with a ≤ b. Then, there exists a (matching) allocation state.
Proof Let s be the degree of each node in the graph. Fix any subset D ⊆ X. If E_D is the set of directed edges starting from a node in D, we have that s|D| = |E_D| ≤ s|N(D)|, since each node of N(D) can receive at most s of these edges. This implies that |D| ≤ |N(D)|. We conclude using Proposition 2.
A matching allocation state is not necessarily the desirable one. In certain applications, security issues may rather require to fragment the data of each unit as much as possible. Suppose we are under the same assumptions as in the previous result, namely G = (X, E) is an undirected regular graph with degree s and α_x = a, β_x = b for every x ∈ X with a ≤ b. If moreover s divides a, we can also consider the 'diffused' allocation state given by W = (a/s) A, (4) where A is the adjacency matrix of G. Notice that all these matrices W can also be interpreted as valid allocation states for the case when the underlying graph is complete. For graphs that are not regular, simple characterizations of the existence of allocations in general do not exist. However, sufficient conditions can be obtained, as the result below shows; its proof follows along the same lines as the proof of Corollary 3.
Proposition 4. Let G be any graph with minimal out-degree d_min and maximal in-degree d⁻_max, and let α_x = a, β_x = b for every x ∈ X with a ≤ b·d_min/d⁻_max. Then, there exists an allocation state.
The above result cannot be improved: indeed, in a star graph with α_x = a, β_x = b for every x ∈ X, it is immediate to see that the condition a ≤ b·d_min/d⁻_max is necessary for an allocation to exist.
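The two extreme allocation states discussed above can be built explicitly. The sketch below constructs, for a graph with uniform α_x = a and β_x = b ≥ a, a matching allocation from a fixed-point-free permutation σ as in (3) and, when the common degree s divides a, the diffused allocation W = (a/s)A of (4). The helper functions and graph encoding are illustrative assumptions, not code from the paper.

```python
def matching_allocation(sigma, a):
    """W_sigma: unit x puts all of its a atoms in the single resource sigma[x] (sigma has no fixed points)."""
    assert all(sigma[x] != x for x in sigma), "sigma must have no fixed points"
    return {x: {sigma[x]: a} for x in sigma}

def diffused_allocation(neighbors, a):
    """Diffused state on an s-regular graph when s divides a: a/s atoms in every neighbor."""
    s = len(next(iter(neighbors.values())))
    assert all(len(ns) == s for ns in neighbors.values()) and a % s == 0
    return {x: {y: a // s for y in ns} for x, ns in neighbors.items()}

# 4 units on a complete graph (s = 3), a = 6, b >= 6.
neighbors = {x: [y for y in range(4) if y != x] for x in range(4)}
sigma = {0: 1, 1: 2, 2: 3, 3: 0}           # a cyclic permutation, no fixed points
print(matching_allocation(sigma, 6))        # each unit uses exactly one resource
print(diffused_allocation(neighbors, 6))    # each unit spreads 2 atoms on each neighbor
```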
The optimal allocation problem
On the set of allocation states, we define a reward functional measuring qualitative and realistic features that we desire the solution to possess, i.e., congestion and aggregation. Functionals considered in this paper have a separable structure, which is a standard assumption in allocation problems [13]. We start with some notation. Given a (partial) allocation state W ∈ W_p, we denote by (W_x·) and (W_·y), respectively, the row vector of W with label x and the column vector of W with label y. We consider functionals Ψ : W_p → R of the type Ψ(W) = Σ_{x∈X} f(W_x·) + Σ_{y∈X} g_y(W_·y), (5) consisting of two parts: one that takes into account the way each unit is succeeding in allocating its data, and another that is typically a congestion term and considers the amount of data present in the various resources. Our goal is to maximize the functional Ψ over the set of allocation states W. The reason for defining Ψ on the larger set of partial allocation states W_p will be clearer later when we present the game theoretic setup and the algorithm. Examples and simulations in this paper will focus on the following cases: the functional Ψ(W) = C_all Σ_x Σ_y W_xy + C_agg Σ_x Σ_y (W_xy)² − Σ_y C^con_y (W_y)² (6) and the variant (7) obtained by replacing the congestion term as described below. We now explain the meaning of the various terms: • the term C_all Σ_x Σ_y W_xy, where C_all > 0 is sufficiently large, has the effect of pushing the optimum to be an allocation state (a configuration where all units have stored their entire set of data); • the term C_agg Σ_x Σ_{y∈X} (W_xy)² has a different significance depending on the sign of C_agg. If C_agg > 0, it plays the role of an aggregation term: it pushes units not to use many different resources for their allocation. If instead C_agg < 0, the term has the opposite effect, as it pushes towards fragmentation of the data.
• the term − Σ_y C^con_y (W_y)² is a classical congestion term: the constants −C^con_y < 0 for all y measure the reliability of the various resources and push the use of more reliable resources. An alternative choice for the resource congestion term is the following. Put |W_·y|_H = |{x ∈ X : W_xy > 0}|, the number of units that are using resource y, and consider g_y(W_·y) = −C^con_y |W_·y|_H; substituting this congestion term in (6) yields the functional (7). This might be useful in contexts where it is necessary to control the number of units accessing the same resource, to avoid communication burden. The functionals (6) and (7) reflect the features that we wanted to enforce: congestion and aggregation. The reason for the latter feature comes from the fact that an excessive fragmentation of the stored data will cause a blow-up in the number of communications among the units, both in the storage and recovery phases. This feature should be considered against another feature, the diversification of backups, which is not addressed in this paper. In real applications, units will need to store multiple copies of their data in order to cope with security and failure phenomena. In that case, these multiple copies will need to be stored in different units. On the other hand, the congestion term is represented by a classical cost function that each user possibly experiences, for instance, as a delay in the storage/recovery actions.
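As a concrete reference, the sketch below evaluates the reward functional in the two specific forms just described: the quadratic congestion term and the alternative term counting the number of users per resource. It follows the verbal description of the terms; since the displayed equations (5)-(7) are not reproduced in this text, the exact arrangement of constants should be read as our interpretation rather than a verbatim transcription.

```python
def psi_quadratic(W, units, C_all, C_agg, C_con):
    """Form (6) as described: C_all * total allocated + C_agg * sum of squared entries
    minus sum_y C_con[y] * (data stored in y)^2."""
    total = sum(w for row in W.values() for w in row.values())
    aggregation = sum(w * w for row in W.values() for w in row.values())
    congestion = sum(C_con[y] * sum(W.get(x, {}).get(y, 0) for x in units) ** 2 for y in units)
    return C_all * total + C_agg * aggregation - congestion

def psi_hamming(W, units, C_all, C_agg, C_con):
    """Alternative congestion term (7): penalize the number of units using each resource."""
    total = sum(w for row in W.values() for w in row.values())
    aggregation = sum(w * w for row in W.values() for w in row.values())
    users_per_resource = {y: sum(1 for x in units if W.get(x, {}).get(y, 0) > 0) for y in units}
    return C_all * total + C_agg * aggregation - sum(C_con[y] * users_per_resource[y] for y in units)

units = [1, 2, 3]
W = {1: {2: 1, 3: 1}, 2: {1: 1, 3: 1}, 3: {1: 1, 2: 1}}   # diffused state with a = 2
C_con = {y: 1.0 for y in units}
print(psi_quadratic(W, units, C_all=5.0, C_agg=-1.0, C_con=C_con))
print(psi_hamming(W, units, C_all=5.0, C_agg=-1.0, C_con=C_con))
```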
The above desired features may be contradictory in general, and we want to have tunable parameters to make the algorithm converge towards a desired compromise solution. The choice of these functionals has been made on the basis of simple realistic considerations and on the fact that, as exploited below, this leads to a potential game. In principle, different terms in the utility function can be introduced in order to make units take into consideration other desired features (e.g., multiple backups).
While our theory and algorithms will be formulated for a generic Ψ as defined in (5), the example proposed and the numerical simulations will be restricted to the specific cases we have described.
Below we present a couple of examples of explicit computation of the maxima of Ψ. We assume Ψ to be of the form described in (5) and (6) with C^con_y = C^con for every y ∈ X. We also assume that α Example 2. Suppose that G = (X, E) is any undirected regular graph and assume that α_x = a, β_x = b for every x ∈ X with a ≤ b. Take Ψ to be of the form described in (5) and (6) with C^con_y = C^con > 0 for every y ∈ X. There are two cases: • C_agg > 0. In this case, the maxima of Ψ coincide with the matching allocation states. Indeed, notice that any matching allocation state W (whose existence is guaranteed by Proposition 2) separately maximizes, for each x, the two expressions Σ_{y∈X} W_xy and Σ_{y∈X} (W_xy)². Moreover, considering that W_y = a for every resource y, it simultaneously minimizes the congestion expression Σ_{y∈X} (W_y)². The fact that these are the only possible maxima is evident from these considerations.
• C_agg < 0. If the degree s of G divides a, arguing as above, we see that the unique maximum is given by the diffused allocation state (4). When s does not divide a, such a simple solution does not exist. In this case, maxima can be characterized as follows. Put a = sk + r = (s − r)k + r(k + 1) (with r < s) and consider a regular subgraph G̃ of degree r. An optimal allocation is obtained by letting units allocate k + 1 atoms of their data in each of their neighbors in G̃ and k atoms of their data in each of the remaining neighbors.
The game theoretic set-up and the algorithm
In this paper we recast the optimization problem into a game theoretic context and we then use learning dynamics to derive decentralized algorithms adapted to the given graph topology that solve the allocation problem and maximize the functional Ψ.
Assume that a functional Ψ as in (5) has been fixed. We associate a game to Ψ according to the ideas developed in [1,13] where there can be found other possible utility and potential functions.
The set of actions A_x of a unit x is given by all possible row vectors (W_x·) such that Σ_y W_xy ≤ α_x. In this way the product set of actions Π_x A_x can be made to coincide with the space of non-negative matrices W ∈ R^(X×X) such that Σ_y W_xy ≤ α_x for every x ∈ X. Such a W in general is not a partial allocation. Indeed, such a W will automatically only possess properties (P1) and (P2). We have that W ∈ W_p if the extra condition (P3), Σ_x W_xy ≤ β_y for every y ∈ X, is satisfied. This is a key non-classical feature of the game associated to our model: the storage limitations make the available actions of a unit depend on the choices made by the other ones. Now, for each unit x, we define its utility function U_x as in (8). Note that, in order to compute U_x(W), unit x needs to know, besides the state of its own data allocation {W_x·}, the congestion state g_y(W_·y) of the neighboring resources.
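The displayed definition (8) is not reproduced in this text. A natural choice consistent with the description above (unit x needs only its own row W_x· and the congestion values of its neighboring resources) is to let U_x collect exactly the addends of Ψ that involve x. The sketch below implements this reading for the quadratic form (6); it is our assumption about the utility's form, not necessarily the paper's exact equation (8).

```python
def utility(x_bar, W, neighbors, C_all, C_agg, C_con, units):
    """Addends of Psi involving unit x_bar: its own row terms plus the congestion of its neighboring resources."""
    row = W.get(x_bar, {})
    own = sum(C_all * w + C_agg * w * w for w in row.values())
    congestion = sum(
        C_con[y] * sum(W.get(x, {}).get(y, 0) for x in units) ** 2
        for y in neighbors[x_bar]
    )
    return own - congestion

units = [1, 2, 3]
neighbors = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
C_con = {y: 1.0 for y in units}
W = {1: {2: 2}, 2: {3: 2}, 3: {1: 2}}
print(utility(1, W, neighbors, C_all=5.0, C_agg=1.0, C_con=C_con, units=units))
```

With this choice, when only unit x changes its own row, the change in U_x equals the change in Ψ, which is exactly the potential property discussed next.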
We now recall some basic facts of game theory. A Nash equilibrium is any allocation state W ∈ W_p such that, for every agent x̄ ∈ X and for every W′ ∈ W_p such that W_xy = W′_xy for every x ≠ x̄ and for every y, it holds that U_x̄(W) ≥ U_x̄(W′). If W, W′ ∈ W_p are two allocation states such that W_xy = W′_xy for every x ≠ x̄ and for every y, it is straightforward to see that the following equality holds: U_x̄(W) − U_x̄(W′) = Ψ(W) − Ψ(W′). (10) This says, in the language of game theory [14], that the game is potential with potential function given by Ψ itself. A simple classical result says that maxima of the potential are Nash equilibria for the game. In general the game will possess extra Nash equilibria. The choice (8) is not the only one that leads to a potential game with potential Ψ. Other possibilities can be constructed following [1,13]. As far as our theory is concerned, the specific form of the utility functions is not important as long as it leads to a potential game with potential Ψ. On the utility functions, (8) or its possible alternatives, we impose a monotonicity condition that essentially says that no unit will ever have an advantage in removing data already allocated.
Precisely, we assume that, for every W, W′ ∈ W_p and x̄ ∈ X such that W_xy = W′_xy for every x ≠ x̄ and for every y, condition (11) holds. This condition is not strictly necessary for our results (as our algorithm actually will not allow units to remove data); it is however a meaningful assumption, and simulations show that it helps to speed up the algorithm. We now focus on the case when Ψ is of the form given by (6) with C^con_y = C^con for every y ∈ X. In this case, a simple check shows that the monotonicity condition (11) is guaranteed if we impose condition (12), where ||v||_∞ = max_i v_i is the infinity norm of a vector. We conclude this section by computing the Nash equilibria in a couple of simple examples and discussing the relation with the maxima of Ψ.
Example 3. Suppose that G is the complete graph with three units and that α_x = a = 2 and β_x = b ≥ 2 for x = 1, 2, 3. Consider Ψ to be of the form (6) with C^con_y = C^con for every y ∈ X and assume that condition (12) holds. Consider the following allocation states: the two matching allocation states W 1 and W 2 associated with the two fixed-point-free permutations of {1, 2, 3} (each unit stores its 2 atoms in a single other unit), and the diffused allocation state W 3 in which each unit stores 1 atom in each of the other two units. We know from the considerations in Example 2 that in the case when C_agg > 0, the matching allocation states W 1 and W 2 are the (only) two maxima of Ψ and thus Nash equilibria. Instead, if C_agg < 0, the diffused allocation state W 3 is the only maximum of Ψ and is in this case a Nash equilibrium.
Notice now that if b < 3, the only three possible allocation states are W i for i = 1, 2, 3. Since any two of these matrices differ in more than one row and condition (12) yields (11), we deduce that all three of them are in this case Nash equilibria, independently on the sign of C agg .
Suppose now that b ≥ 3. Explicit simple computations show that, if C agg ≤ C con , W 3 is a Nash equilibrium and if C agg ≥ −6C con , W 1 and W 2 are Nash equilibria. In summary, if b < 3 or if b ≥ 3 and −6C con ≤ C agg ≤ C con , the three matrices W i for i = 1, 2, 3 are Nash equilibria.
The next example shows that partial allocations may also be Nash equilibria. The one on the right is a maximum of Ψ; the one on the left is instead a partial allocation.
The algorithm
The allocation algorithm we are proposing is fully distributed and asynchronous and is only based on communications between units, taking place along the links of the graph G = (X, E). It is based on the ideas of learning dynamics where, randomly, units activate and modify their action (allocation state) in order to increase their utility. The most popular of these dynamics is the so-called best response, where units at every step choose the action maximizing their utility. This dynamics is proven to converge almost surely, in finite time, to a Nash equilibrium. In the presence of Nash equilibria that are not maxima of the potential (as is the case here), best response dynamics is not guaranteed to converge to a maximum. This is simply because Nash equilibria are always equilibrium points for the dynamics. A popular variation of the best response is the so-called noisy best response (also known as log-linear learning), where the maximization of utility is relaxed to a random choice dictated by a Gibbs probability distribution. We now illustrate the details of our algorithm. For the sake of proposing a realistic model, we imagine that units may temporarily be shut down or in any case disconnected from the network. We model this by assuming that, at every instant of time, a unit is either in functional state on or off: units in functional state off are not available for communication and for any action, including storage and data retrieval. A unit which is currently in state on can activate and either newly allocate or move some data among the available resources (i.e., those neighbors that still have space available and that are on at that time). The functional state of the network at a certain time will be denoted by ξ ∈ {0, 1}^X: ξ_x = 1 means that the unit x is on. The times when units modify their functional state (off to on or on to off) and the times when units in functional state on activate are modeled as a family of independent Poisson clocks whose rates will be denoted (for unit x), respectively, ν^on_x, ν^off_x, and ν^act_x. The functional state of the network as a function of time ξ(t) is thus a continuous time Markov process whose components are independent Bernoulli processes.
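The on/off dynamics and the activation times can be simulated with standard exponential clocks. The sketch below, with made-up rate values, draws the next event among all units' on, off and activation clocks (a Gillespie-style step); it only illustrates the timing model, not the allocation rule itself, and all names are our own.

```python
import random

def next_event(state_on, nu_on, nu_off, nu_act):
    """Sample the next clock to ring: ('on', x), ('off', x) or ('act', x), plus the waiting time."""
    candidates = []
    for x in state_on:
        if state_on[x]:
            candidates.append((("off", x), nu_off[x]))
            candidates.append((("act", x), nu_act[x]))
        else:
            candidates.append((("on", x), nu_on[x]))
    total_rate = sum(rate for _, rate in candidates)
    wait = random.expovariate(total_rate)
    # choose an event with probability proportional to its rate
    r, acc = random.uniform(0, total_rate), 0.0
    for event, rate in candidates:
        acc += rate
        if r <= acc:
            return event, wait
    return candidates[-1][0], wait

units = [1, 2, 3]
state_on = {x: True for x in units}
nu_on = {x: 0.5 for x in units}      # illustrative rates, not the paper's values
nu_off = {x: 0.1 for x in units}
nu_act = {x: 1.0 for x in units}
print(next_event(state_on, nu_on, nu_off, nu_act))
```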
We now describe the core of the algorithm, namely the rules under which activated units can modify their allocation state. We start with some notation. Given a (possibly partial) allocation state W ∈ W_p, a functional state ξ ∈ {0, 1}^X, and a unit x̄ ∈ X such that ξ_x̄ = 1, define the set W_x̄(W, ξ) of possible partial allocation states obtainable from W by modifications done by the unit x̄: only the terms W_x̄y where y is on can be modified, and the total amount of allocated data W_x̄ can only increase or remain equal. Since the sets W_x̄(W, ξ) can in general be very large, it is convenient to consider the possibility that the algorithm might use a smaller set of actions where units either allocate new data or simply move data from one resource to another one. Given (W, ξ) ∈ W_p × {0, 1}^X and a unit x̄, define N_x̄(W, ξ) := {y ∈ N_x̄ | W_y < β_y, ξ_y = 1}, (13) the set of available neighbor resources for x̄ under the allocation state W and the functional state ξ: those that are on and still have space available. A family of sets M_x̄(W, ξ) ⊆ W_x̄(W, ξ), defined for each x̄ ∈ X and each (W, ξ), is called admissible when it satisfies conditions (i)-(iii). Conditions (i) and (ii) essentially assert that when a unit has an available neighbor resource not yet saturated, then M_x̄(W, ξ) must incorporate the possibility to newly allocate or transfer already allocated data into it. Condition (iii) instead simply says that when the functional state does not change and we are in an allocation state, any transformation can be reversed.
Examples. In the second case, the modifications allowed are those where a unit either allocates a certain amount of new data into a single resource or moves data from one resource to another one. The third case puts an extra constraint on the amount of data allocated or moved: the simplest case is Q = {1}, where just an atomic piece of data is newly allocated or moved. The simulations presented in this paper all fit in this third case, with various possible sets Q.
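For the granularity-constrained case, the candidate moves of an activated unit can be enumerated explicitly: allocate q ∈ Q new atoms into an available neighbor resource, or move q atoms from a currently used resource to an available one. The sketch below is an illustrative enumeration consistent with this description; it does not reproduce the formal conditions (i)-(iii), and all names are ours.

```python
def candidate_moves(x_bar, W, alpha, beta, neighbors, on, Q):
    """Moves of unit x_bar: allocate q new atoms into an available resource, or shift q atoms between resources."""
    row = W.get(x_bar, {})
    allocated = sum(row.values())

    def column(y):  # total data currently stored in resource y
        return sum(W.get(x, {}).get(y, 0) for x in W)

    available = [y for y in neighbors[x_bar] if on.get(y, True) and column(y) < beta[y]]
    moves = []
    for q in Q:
        for y in available:
            free = beta[y] - column(y)
            if q <= min(free, alpha[x_bar] - allocated):         # allocation move
                moves.append(("allocate", q, y))
            for y2, w in row.items():                             # distribution moves
                if y2 != y and on.get(y2, True) and q <= min(free, w):
                    moves.append(("move", q, y2, y))
    return moves

# Example: unit 1 owns 3 atoms, one already stored in resource 2.
W = {1: {2: 1}}
alpha, beta = {1: 3, 2: 5, 3: 5}, {1: 5, 2: 2, 3: 2}
neighbors = {1: [2, 3]}
print(candidate_moves(1, W, alpha, beta, neighbors, on={}, Q=[1]))
```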
Given an admissible family M_x̄(W, ξ), we now define a Gibbs measure on it as follows. Given a parameter γ > 0, we assign to each candidate state a weight, where ||W|| = Σ_{xy} W_xy, and complete it to a probability distribution; this defines the probabilities (14). The algorithm is completely determined by the choice of the admissible family M_x̄(W, ξ) and of the probabilities (14). If unit x̄ activates at time t, the system is in partial allocation state W(t) and in functional state ξ(t), it will jump to the new partial allocation state W′ with the probabilities given in (15). If unit x̄ chooses a W′ such that ||W′|| > ||W||, we say that it makes an allocation move; otherwise, if ||W′|| = ||W||, we talk of a distribution move.
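The move actually performed is drawn from a Gibbs distribution over the candidate set. The paper's exact weights (14) appear to also involve the total allocated mass ||W||, and they are not reproduced in this text; the sketch below therefore uses the plain log-linear (noisy best response) rule, with probabilities proportional to exp(γ·U_x̄(W′)), as an illustrative stand-in rather than a transcription of (14).

```python
import math
import random

def log_linear_choice(x_bar, W, candidates, utility_fn, gamma):
    """Pick the next state among `candidates` (full matrices W') with P(W') proportional to exp(gamma * U(x_bar, W'))."""
    scores = [gamma * utility_fn(x_bar, Wp) for Wp in candidates]
    m = max(scores)                              # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for Wp, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return Wp
    return candidates[-1]
```

For large γ this rule concentrates on the candidates maximizing the unit's utility, recovering the best response; for small γ it explores more uniformly.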
Analysis of the algorithm
In this section we analyze the behavior of the algorithm introduced above. We will essentially show two results: 1. first, we prove that if the set of allocation states W is not empty (i.e., condition (1) is satisfied), the algorithm above will reach such an allocation in bounded time with probability 1 (i.e., W(t) ∈ W for t sufficiently large); 2. second, we show that, under a slightly stronger assumption than (1), in the double limit t → +∞ and then γ → +∞, the process W(t) induced by the algorithm will always converge, in law, to a Nash equilibrium that is a global maximum of the potential function Ψ.
In order to prove such results, it will be necessary to go through a number of intermediate technical steps.
In the sequel we assume we have fixed a triple (G, α, β) satisfying the existence condition (1), an admissible family of sets Mx(W, ξ) and we consider the allocation process W (t) described by (15) with any possible initial condition W (0).
By the way it has been defined, the process W(t) is Markovian conditioned on the functional state process ξ(t). If we consider the augmented process (W(t), ξ(t)), this is Markovian and its only nonzero transition rates are those given in (16). We now introduce a graph on W_p that will be denoted by L_p: an edge (W, W′) is present in L_p if and only if W′ ∈ M_x̄(W, 1) for some x̄. Notice that, if ν^act_x > 0 for every x̄, this can be equivalently described as Λ_(W,1),(W′,1) > 0. The graph L_p thus describes the possible jumps of the process W(t) conditioned on the fact that all resources are in functional state on. We want to stress the fact that the graph L_p depends on the triple (G, α, β) as well as on the choice of the admissible family M_x̄(W, 1), but not on the particular choice of the functional Ψ or of the utility functions U_x̄.
Our strategy, in order to prove our first claim, will be to show that from any element W ∈ W p there is a path in L p to some element W ′ ∈ W.
Given W ∈ W_p, we define two subsets: X_f(W), the set of units that are fully allocated under W, and W_p^sat, the set of partial allocation states in which every unit that is not fully allocated has no available neighbor resource. It is clear that from any W ∈ W_p \ W_p^sat, there exist units that can make either an allocation or a distribution move. Instead, if we are in a state W ∈ W_p^sat, there are units that are not fully allocated and all these units cannot make any move. The only units that can possibly make a move are the fully allocated ones. Notice that, because of condition (1), there surely exist resources y such that W_y < β_y, and these resources are indeed exclusively connected to fully allocated units. The key point is to show that in a finite number of distribution moves, performed by fully allocated units, it is always possible to move some data atoms from resources connected to saturated units to resources with available space: this will then make possible a further allocation move.
For any fixed W ∈ W_p, we can consider the following graph structure on X, thought of as a set of resources: H_W = (X, E_W). Given y_1, y_2 ∈ X, there is an edge from y_1 to y_2 if and only if there exists x ∈ X for which y_1, y_2 ∈ N_x and W_xy_1 > 0. The edge from y_1 to y_2 will be indicated with the symbol y_1 →_x y_2 (to also recall the unit x involved). The presence of the edge means that the two resources y_1 and y_2 are in the neighborhood of a common unit x that is using y_1 under W. This indicates that x can in principle move some of its data currently stored in y_1 into resource y_2, if this last one is available. We have the following technical result. Lemma 5. Suppose (G, α, β) satisfies (1). Fix W ∈ W_p and let ȳ ∈ X be such that there exists x̄ ∈ N⁻_ȳ with W_x̄ < α_x̄. Then, there exists a sequence ȳ = y_0, x_0, y_1, . . . , y_{t−1}, x_{t−1}, y_t (17) satisfying the following conditions: (Sa) both the family of the y_k's and the family of the x_k's are made of distinct elements; (Sb) y_k →_{x_k} y_{k+1} for every k = 0, . . . , t − 1; (Sc) W_{y_k} = β_{y_k} for every k = 0, . . . , t − 1, and W_{y_t} < β_{y_t}.
Proof Let Y ⊆ X be the subset of nodes that can be reached fromȳ in H W . Preliminarily, we prove that there exists y ′ ∈ Y such that W y ′ < β y ′ . Let and notice that, by the way Y and Z have been defined, Suppose now that, contrarily to the thesis, W y ≥ β y for all y ∈ Y. Then, where the first inequality follows from (18) and (1), the first equality from the contradiction hypothesis, the second equality from the definition of Z, the third equality again from (18) and, finally, last inequality from the existence ofx. This is absurd and thus proves our claim. Consider now a path of minimal length fromȳ to Y in H W : and notice that the sequenceȳ = y 0 , x 0 , y 1 , . . . , y t−1 , x t−1 , y t will automatically satisfy properties (Sa) to (Sc).
We are now ready to prove the first main result.
Theorem 6. Suppose that (G, α, β) satisfies condition (1) and that M_x̄(W, ξ) is an admissible family.
Then, for every W ∈ W p there is a path in L p to some element W ′ ∈ W.
Proof We will prove the claim by a double induction process. To this aim we consider two indices associated to any W ∈ W p \ W. The first one is defined by To define the second, consider anyx ∈ X \ X f (W ). We can apply Lemma 5 to W and anyȳ ∈ Nx and obtain that we can find a sequence of agents y = y 0 , x 0 , y 1 , . . . , y t−1 , x t−1 , y t satisfying the properties (Sa), (Sb), and (Sc) above. Among all the possible choices ofx ∈ X ,ȳ ∈ Nx and of the corresponding sequence, assume we have chosen the one minimizing t and denote such minimal t by t W . The induction process will be performed with respect to the lexicographic order induced by the pair (m W , t W ).
In the case when t W = 0, it means we can findx ∈ X andȳ ∈ Nx such that Wȳ < βȳ. This yieldsȳ ∈ Nx(W, 1). Hence, by property (i) in the definition of an admissible family, it follows that there exists n such that W ′ = W + nexȳ ∈ Mx(W, 1). Notice that m W ′ < m W . In case m W = 1, this means that W ′ ∈ W.
Then, P(∃t 0 | W (t) ∈ W ∀t ≥ t 0 ) = 1 Proof It follows from the form of the transition rates (16) and assumption 2), that the process (W (t), ξ(t)), starting from any initial condition (W, ξ), will reach (W, 1) in bounded time with positive probability. Combining with Theorem 6 and using again 2), it then follows that (W (t), ξ(t)) reaches a couple (W ′ , 1) for some W ′ ∈ W in bounded time with positive probability. Since, by definition of an admissible family, the set {(W ′ , ξ), W ′ ∈ W} is invariant by the process (W (t), ξ(t)), standard results on Markov processes yield the thesis.
We are now left with studying the process W (t) on W. Noisy best response dynamics are known to yield reversible Markov processes. This is indeed the case also in our case once the process has reached the set of allocations W. Precisely, the following result holds: x , ν of f x > 0 for all x ∈ X . Then, (W (t), ξ(t)), restricted to W × {0, 1} X , is a time-reversible Markov process. More precisely, for every (W, ξ), where Proof It follows from relations (16) and the definition of admissible families, that the only cases when Λ (W,ξ),(W ′ ,ξ ′ ) and Λ (W ′ ,ξ ′ ),(W,ξ) are not both equal to zero are the following: In case (i), we have that Case (ii) can be analogously verified. Consider now case (iii). Using relations (10), (16), and (14), we obtain We now show that under a slight stronger assumption than (1), namely, the process (W (t), ξ(t)) restricted to W × {0, 1} X is ergodic. Denote by L the subgraph of L p restricted to the set W. Notice that, as a consequence of timereversibility, L is an undirected graph. Ergodicity is equivalent to proving that L is connected. We start with a lemma analogous to previous Lemma 5. Proof It is sufficient to follow the steps of to the proof of Lemma 5 noticing that in (19) the first equality is now a strict inequality, while the last strict inequality becomes an equality.
If W, W ′ ∈ W are connected through a path in L, we write that W ∼ W ′ . Introduce the following distance on W: Notice that L is connected if and only if for any minimal pair {W 1 , W 2 }, it holds W 1 = W 2 .
Lemma 10. Let {W^1, W^2} be a minimal pair. Suppose y ∈ X is such that W^1_y < β_y. Then, W^1_xy = W^2_xy for all x ∈ X. Proof Suppose by contradiction that W^1_xy < W^2_xy for some x ∈ X. Then, necessarily, there exists y′ ≠ y such that W^1_xy′ > W^2_xy′. Consider then W^1′ = W^1 − e_xy′ + e_xy. Since δ(W^1′, W^2) < δ(W^1, W^2), this contradicts the minimality assumption. Thus W^1_xy ≥ W^2_xy for all x ∈ X. This yields W^2_y < β_y. Exchanging the roles of W^1 and W^2, we obtain the thesis.
Proposition 11. If condition (22) holds true, the graph L is connected.
Proof Let {W 1 , W 2 } be any minimal pair. We will prove that W 1 and W 2 are necessarily identical. Consider any resource y. It follows from Lemma 9 that we can find a sequence y = y 0 , x 0 , y 1 · · · , y t−1 , x t−1 , y t satisfying the same (Sa), (Sb), and (Sc) with respect to the state allocation W 1 . Among all the possible sequences, choose one with t minimal for given y. We will prove by induction on t that W 1 xy = W 2 xy for all x ∈ X . If t = 0, it means that W 1 y < β y . It then follows from Lemma 10 that W 1 xy = W 2 xy for all x ∈ X . Suppose now that the claim has been proven for all minimal pairs {W 1 , W 2 } and any y ∈ X for which t <t (w.r. to W 1 ) and assume that y = y 0 , x 0 , y 1 · · · , yt −1 , xt −1 , yt satisfies the properties (Sa), (Sb), and (Sc) with respect to W 1 .
We can now state our final result. Then, (W (t), ξ(t)), restricted to W × {0, 1} X , is an ergodic time-reversible Markov process whose unique invariant probability measure is given by where Z γ is the normalizing constant.
Proof Let (W, ξ), (W ′ , ξ ′ ) ∈ W × {0, 1} X . It follows from the form of the transition rates (16) and the fact that ν on x > 0 for all x, that the process (W (t), ξ(t)), starting from (W, ξ), will reach (W, 1) in bounded time with positive probability. Combining with Proposition 11 and using the fact that ν act x > 0 for all x, it then follows that (W (t), ξ(t)) reaches (W ′ , 1) in bounded time with positive probability. Finally, from (W ′ , 1) again the process reaches (W ′ , ξ ′ ) in bounded time with positive probability. This says that the process is ergodic and it thus possesses a unique invariant measure whose form can be derived by the time-reversibility property characterized in Proposition 8.
Remark: It follows from the previous result that the process W(t) converges in law to a limiting probability distribution μ̃_γ on W. Notice that when γ → +∞, the probability μ̃_γ converges to a probability concentrated on the set argmax_{W∈W} Ψ(W) of state allocations maximizing the potential. Thus, if γ is large, the distribution of the process W(t) for t sufficiently large will be close to a maximum of Ψ.
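The remark can be illustrated numerically. Under the assumption, consistent with the remark although the displayed formula for μ̃_γ is not reproduced above, that the limiting distribution is of Gibbs type with μ̃_γ(W) proportional to exp(γΨ(W)), increasing γ concentrates the mass on the maximizers of Ψ. The toy computation below uses made-up potential values.

```python
import math

def gibbs(values, gamma):
    """Normalized Gibbs weights exp(gamma * value) over a finite list of potential values."""
    m = max(values)
    w = [math.exp(gamma * (v - m)) for v in values]
    s = sum(w)
    return [x / s for x in w]

psi_values = [10.0, 9.5, 7.0]        # hypothetical Psi(W) over three allocation states
for gamma in (0.5, 2.0, 10.0):
    print(gamma, [round(p, 3) for p in gibbs(psi_values, gamma)])
# As gamma grows, the probability mass concentrates on the state with the largest Psi.
```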
Remark: Condition (22) is necessary for ergodicity. Notice indeed that in the case when G is complete and α x = β x = a for all x ∈ X , under every allocation W such that W y = a for every y, all resources will be saturated and, consequently, no distribution move will be allowed in W . Such allocations W are thus all sinks in the graph L that is therefore not connected.
Simulations
In this section, we present some numerical results to validate the theoretical approach. We aim to show that the algorithm is feasible for practical implementation and that it has good performance and scaling properties. The examples presented are admittedly simple: our goal here is not to work out code with optimized performance, nor to present exhaustive sets of simulations.
All our examples are for the case when the functional has the form defined in (6), except the last one, where we instead consider the form (7).
We always take C_all as prescribed by (23); this choice is motivated by the considerations in (11). We assume that C^con_y = 1 for all units, and we consider both the case when the aggregation parameter C_agg is positive and the case when it is negative.
As a graph, we use either a complete graph or a regular graph of degree 10 randomly constructed according to the classical configuration model.
We assume the admissible family M_x̄(W, ξ) to be of type 3) presented before, where the modifications allowed are those in which a unit either allocates or moves a number of data atoms constrained to lie in a set Q. Most of the examples are for Q = {1}: just one data atom is allocated or moved each time.
On the basis of our theoretical analysis, the algorithm, in the limit when t → +∞ and the inverse noise parameter γ → +∞, is known to converge to the optimum. In practical implementations, a typical choice in these cases is to take the parameter γ time-varying and diverging to +∞. The tuning of the divergence rate is known to be critical to obtain good results. Here we have chosen the activation rate ν_act = 1/n and γ(t + 1) = γ(t) + 1/100000. Moreover, we suppose the units to be always on (otherwise things simply get slowed down). The time horizon is fixed to T = 5 · Σ_{x∈X} α_x: in this way a unit x will activate, on average, a number of times equal to 5 times the number of data atoms it needs to allocate. As we will see, this time range is sufficient for the allocation to be completed and to get close to the optimum (this has been checked in those cases when the optimum is analytically known).
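A minimal sketch of the experimental setup just described (activation rate ν_act = 1/n, the linearly increasing inverse temperature γ(t+1) = γ(t) + 1/100000, and the time horizon T = 5·Σ_x α_x) might look as follows; the discrete-step bookkeeping and the initial value of γ are our own simplifications, not taken from the paper.

```python
import random

def run_schedule(units, alpha, step_fn, gamma0=1.0):
    """Skeleton of the simulation loop: uniformly random activations with a linearly increasing gamma."""
    total_steps = 5 * sum(alpha[x] for x in units)   # time horizon T = 5 * sum_x alpha_x
    gamma = gamma0
    for t in range(total_steps):
        x_bar = random.choice(units)                 # activation rate 1/n per unit <-> one uniformly chosen unit per step
        step_fn(x_bar, gamma)                        # one noisy-best-response move of the activated unit
        gamma += 1.0 / 100000.0                      # gamma(t+1) = gamma(t) + 1/100000
    return gamma

# Example with a no-op step, just to show the bookkeeping:
units = list(range(10))
alpha = {x: 45 for x in units}
print(run_schedule(units, alpha, step_fn=lambda x, g: None))   # final gamma after 2250 steps
```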
For all examples, the performance of the algorithm is analyzed considering the following parameters, computed in a Monte Carlo style by averaging over 10 runs of the algorithm (a computation sketch is given after the list).
• Distance from Full Allocation: counts the quantity of atoms not yet allocated. If the allocation is complete, this parameter is 0.
• Allocation complexity: denoting by m_x the total number of moves (allocation and distribution) made by unit x, we consider ν_moves, the number of allocation and distribution moves per piece of data. Since moving data from one resource to another can be expensive, it is an interesting parameter to consider. • Distance in ratio from optimum: in cases when the maximum of the potential Ψ_opt is explicitly known (Example 2), we consider ψ = Ψ(W_T)/Ψ_opt.
• Degree complexity: we consider the average number d of resources used by a unit; d is a measure of how concentrated or diffused the allocation is. For matching allocations d = 1.
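The indicators just listed can be computed directly from the final matrix W_T and the per-unit move counters. The sketch below gives one possible implementation; the precise normalizations (moves per atom, per-unit average degree) follow the verbal descriptions above and are our reading of them rather than the paper's exact formulas.

```python
def metrics(W, alpha, moves_per_unit, psi_value=None, psi_opt=None):
    """Distance from full allocation, moves per atom, average degree, and optional Psi ratio."""
    units = list(alpha)
    allocated = {x: sum(W.get(x, {}).values()) for x in units}
    out = {
        "distance_from_full_allocation": sum(alpha[x] - allocated[x] for x in units),
        "moves_per_atom": sum(moves_per_unit.get(x, 0) for x in units) / sum(alpha.values()),
        "average_degree": sum(len([y for y, w in W.get(x, {}).items() if w > 0]) for x in units) / len(units),
    }
    if psi_value is not None and psi_opt:
        out["psi_ratio"] = psi_value / psi_opt
    return out

# Matching allocation on 3 units, each having made one move to place its 2 atoms:
W = {1: {2: 2}, 2: {3: 2}, 3: {1: 2}}
alpha = {1: 2, 2: 2, 3: 2}
print(metrics(W, alpha, moves_per_unit={1: 1, 2: 1, 3: 1}))
```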
We now present a number of simulations for the case when the functional has the form defined in (6). We first consider the case when Q = {1}: just one data is allocated or moved each time a unit activates. We always take C con = 1 and C all chosen according to (23) and different values for C agg .
Example 5. Consider n = 10 users on a complete graph such that α_x = a = 45 and β_x = b = 50 for every unit x. We consider the cases C_agg = −7, −1, 1/2, 3. First, we show the final states reached by the dynamics for a single run of the algorithm. For the same runs, we plot in Figure 1 the time evolution of the potentials and we compare it with the optimal potential, represented by the red line. For C_agg = 3 a matching allocation state is reached, and it is a maximum of Ψ in this case. For C_agg = −7 the solution is also very close to the maximum, which is the diffused allocation state. For C_agg = 1/2, −1, the presence of Nash equilibria that are not maxima of Ψ slows down the dynamics and the algorithm does not reach the maximum at time T (this is particularly evident for the case C_agg = 1/2). Increasing in this case the time horizon to T = 20 · Σ_{x∈X} α_x, the final state of the system gets quite close to the maximum, as confirmed by the two plots in Figure 2. The following table shows the performance parameters in the Monte Carlo simulation for the usual T. From now on we focus on the cases C_agg = −7, 3, with C_con = 1 and C_all chosen according to (23), showing that reasonably good properties are maintained for larger communities and different topologies. Example 6. Consider n = 50 users on a complete graph and on a regular graph of degree 10 such that α_x = a = 45 and β_x = b = 50 for every unit x. While a matching allocation state is not reached when C_agg = 3, the value of the average degree shows that the solution is quite concentrated, with most units allocating in just one resource. Instead, for C_agg = −7 we have reached an optimum diffused allocation. The next example shows how the presence of heterogeneous resources does not alter much the performance of the algorithm. Example 7. Consider n = 50 users on a complete graph and on a regular graph of degree 10 such that α_x = a = 43 for every x. Assume that half of the units have β_x = 40 and half of them instead β_x = 50. Notice that, in this case, for the regular graph topology, there is no a-priori guarantee that allocation is feasible. Simulations show however that allocation is reached in all cases.
In the following example we consider larger families of units connected through a regular graph of degree 10. Numerical results show the good scalability properties of the algorithm. Example 8. Suppose we have n = 100, 200, 300 users on a regular graph of degree 10 with α_x = a = 45 and β_x = b = 50. Table 4 shows the performance parameters. The next example considers the case when allocations and distributions are allowed with different granularity Q.
Example 9. Consider to have n = 10 users on a complete graph such that α x = a = 45 and β x = b = 50 for every unit x. We assume that units can allocate or move each time a quantity of data belonging to either Q 1 = {1, 5, 10} or Q 2 = {1, 25, 45}. We also report the case Q 0 = {1} for the sake of comparison.
As expected, the possibility to allocate larger sets of data at one time drastically reduces the number of allocation and distribution moves and speeds up the algorithm. Notice, however, that in one case, using the set Q_2, the algorithm does not reach the maximum. This phenomenon is probably due to the fact that allocating large sets of data at once can lead to allocation states quite far from the optimum, which then require a longer time to converge. This suggests that the choice of the set Q is likely to play a crucial role in optimizing the speed of convergence of the algorithm.
Finally, the last example is for the objective functional with the alternative congestion term (7).
Example 10. Consider n = 50 users on a complete graph and on a regular graph of degree 10 such that α_x = a = 45 and β_x = b = 50 for every unit x. We take C_con = 1 and C_all chosen according to (23), while we take different values for C_agg. As expected, in this case, varying the aggregation parameter C_agg yields solutions with different degrees of fragmentation. This is particularly evident in the case of a complete graph. The choice of the topology and of the functional parameters can thus be seen as alternative or complementary ways to prescribe the complexity of the allocation in terms of links used.
Conclusions
We have presented and mathematically analyzed a decentralized allocation algorithm, motivated by the recent interest in cooperative cloud storage models. We have proved convergence and we have shown the practical implementability of the algorithm. The tuning of its parameters to optimize performance will be considered elsewhere. In this direction, it will also be useful to investigate the possibility of using different utility functions in the definition of the algorithm, following the ideas in [10] and [13]. On the other hand, it would be of interest to deepen the relation of this model with generalized Nash equilibrium problems [3,4] to better understand the level of generality of our approach. | 2018-09-19T16:38:43.000Z | 2018-09-19T00:00:00.000 | {
"year": 2018,
"sha1": "387336f0241e95f688f01d42e6098d43e98d66c5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "387336f0241e95f688f01d42e6098d43e98d66c5",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
256908887 | pes2o/s2orc | v3-fos-license | Internal Extractive Electrospray Ionization Mass Spectrometry for Quantitative Determination of Fluoroquinolones Captured by Magnetic Molecularly Imprinted Polymers from Raw Milk
Antibiotic contamination in food products is of increasing concern due to its potential threat to human health. Herein, solid-phase extraction based on magnetic molecularly imprinted polymers coupled with internal extractive electrospray ionization mass spectrometry (MMIPs-SPE-iEESI-MS) was designed for the quantitative analysis of trace fluoroquinolones (FQs) in raw milk samples. FQs in the raw milk sample (2 mL) were selectively captured by easily lab-made magnetic molecularly imprinted polymers (MMIPs), and then directly eluted by 100 µL electrospraying solvent biased with +3.0 kV to produce protonated FQ ions for mass spectrometric characterization. Satisfactory analytical performance was obtained in the quantitative analysis of three kinds of FQs (i.e., norfloxacin, enoxacin, and fleroxacin). For all the samples tested, the established method showed a low limit of detection (LOD ≤ 0.03 µg L−1) and a high analysis speed (≤4 min per sample). The analytical performance for real sample analysis was validated by a nationally standardized protocol using LC-MS, resulting in acceptable relative error values from −5.8% to +6.9% for 6 tested samples. Our results demonstrate that MMIPs-SPE-iEESI-MS is a new strategy for the quantitative analysis of FQs in complex biological mixtures such as raw milk, showing promising applications in food safety control and biofluid sample analysis.
molecule 21 , its limited sensitivity may be a challenge in specific applications. Moreover, tedious sample pretreatments (e.g., centrifugation, dilution, and multistep chemical extraction) for matrix clean-up are routinely required, which prevents high-throughput analysis of FQs in practical samples. Thus, there is an urgent demand for the development of highly efficient analytical methods for the sensitive and selective identification or quantification of FQs in samples with complex matrices.
Recently, ambient mass spectrometry (AMS) has allowed the direct analysis of complex samples with high speed, high selectivity, and high sensitivity [22][23][24] . Charged droplets generated by electrospray or sonic spray are a common ionization reagent, widely used in various ambient ionization technologies such as desorption electrospray ionization (DESI) 25 , probe electrospray ionization (PESI) 26 , extractive electrospray ionization (EESI) 27 , laser ablation electrospray ionization (LAESI) 28 , and easy ambient sonic spray ionization (EASI) 29 , etc. Benefiting from their high ionization energy, primary ions generated by electric fields (electron/plasma) have been employed in many ambient ionization technologies, including direct analysis in real time (DART) 30 , low temperature plasma (LTP) 31 , microwave plasma torch (MPT) 32 , plasma assisted laser desorption ionization (PALDI) 33 , dielectric barrier discharge ionization (DBDI) 34 , and desorption atmospheric pressure chemical ionization (DAPCI) 35 , etc., which offer unique advantages for the preparation of specific analyte ions from raw samples. Great convenience has been provided by these versatile ambient ionization technologies owing to the direct sampling or ionization of raw samples. To date, efforts are still being devoted to improving the analytical performance of AMS for highly complex matrices. In recent years, fast and facile sample pretreatment methods (e.g., solid-phase microextraction (SPME) 36,37 , magnetic solid-phase extraction (MSPE) 38 , thin-layer chromatography 39 , solid phase mesh enhanced sorption from headspace (SPMESH) 40 , etc.) combined with AMS have been developed for the direct analysis of trace target analytes in various highly complex samples (e.g., biological, environmental, food, and forensic samples, or even individual small organisms), which has greatly improved the sensitivity and selectivity of AMS.
Given raw milk as a typical example of an extremely complex matrix, a facile method of solid-phase extraction based on magnetic molecularly imprinted polymers combined with internal extractive electrospray ionization 41-43 mass spectrometry (MMIPs-SPE-iEESI-MS) was designed for the quantitative analysis of FQs in raw milk samples. FQs in the raw milk samples were selectively captured by the MMIPs for subsequent iEESI-MS interrogation. Overall, the established method showed high sensitivity in the determination of three kinds of FQs (norfloxacin, enoxacin, and fleroxacin) in raw milk samples. Our results demonstrate that the established MMIPs-SPE-iEESI-MS is a powerful method for the quantitative analysis of FQs in raw milk samples, providing potential application value in other biofluid sample analyses. Fragment ions of (m/z 302, m/z 276), (m/z 303, m/z 277), and (m/z 352, m/z 326) were yielded by the precursor ions of m/z 320, m/z 321, and m/z 370, respectively, consistent with the characteristic fragment ions produced by the protonated molecular ions [norfloxacin +H]+ (m/z 320), [enoxacin +H]+ (m/z 321), and [fleroxacin +H]+ (m/z 370) according to previous literature 44,45 . All the protonated molecular ions of [norfloxacin +H]+, [enoxacin +H]+, and [fleroxacin +H]+ readily underwent neutral loss of H2O and CO2 under the CID conditions. The loss of CO2 (−44) gave characteristic fragment ions at m/z 276, m/z 277, and m/z 326 from the precursor ions of norfloxacin, enoxacin, and fleroxacin, respectively. These CO2-loss fragment ions should provide higher significance for the identity check of the three FQs. Thus, the signal intensities of the fragment ions at m/z 276, m/z 277, and m/z 326 were selected as the analytical responses to establish the quantitative methods for norfloxacin, enoxacin, and fleroxacin, respectively. As a result, the FQs in the raw milk samples were successfully detected using MMIPs-SPE-iEESI-MS.
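To make the neutral-loss assignments concrete, the short sketch below checks that subtracting the nominal masses of H2O (18 Da) and CO2 (44 Da) from the protonated-molecule m/z values reproduces the fragment ions listed above; the nominal masses are standard values used here for illustration, not parameters stated by the authors.

```python
# Nominal neutral-loss masses (Da); singly charged ions, so m/z shifts equal mass losses.
H2O, CO2 = 18, 44

precursors = {"norfloxacin": 320, "enoxacin": 321, "fleroxacin": 370}

for name, mz in precursors.items():
    print(f"{name}: [M+H]+ {mz} -> -H2O {mz - H2O}, -CO2 {mz - CO2}")
# norfloxacin: 320 -> 302, 276
# enoxacin:    321 -> 303, 277
# fleroxacin:  370 -> 352, 326
```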
MMIPs-SPE-iEESI-MS
Optimization of MMIPs-SPE-iEESI. For better performance during MMIPs-SPE-iEESI-MS analysis, analytical parameters including sorbent amount, composition and volume of the extraction solvent, and the flow rate of extraction were optimized using FQ-spiked raw milk samples. The concentration of each FQ (i.e., norfloxacin, enoxacin, and fleroxacin) was set at 10 μg L−1 in all the milk samples.
The MMIPs material was simply fabricated by co-mixing Fe3O4 magnetic nanoparticles (MNPs) and a commercial molecularly imprinted polymer (MIPs) product in methanol. As shown in the SEM image of the MMIPs material (Fig. 2), the MNPs were coated on the surface of the MIPs after the co-mixing preparation. Additional elemental analysis of the MMIPs, Fe3O4 MNPs, and MIPs also implies the assembly of Fe3O4 MNPs and MIPs ( Supplementary Fig. S1). A comparison experiment of the Fe3O4 MNP material (without MIPs) and the MMIP material (with MIPs) was carried out. As expected, the target FQ signals were remarkably increased when using the MMIP material ( Supplementary Fig. S2). To achieve high adsorption performance for the FQs, different amounts of MIPs material (i.e., 0, 0.5, 1.5, and 2.0 mg) were experimentally investigated for FQs adsorption, while the amount of MNPs was kept at 2.0 mg. The signal intensities of the three FQs notably increased with the increase of the MIPs amount from 0 to 1.5 mg, and showed a decreasing trend when the MIPs amount increased to 2.0 mg (Fig. 3a). As a result, 1.5 mg MIPs and 2.0 mg MNPs were used for the preparation of the MMIPs material. Considering that the extraction solution served as both the elution solution for FQs desorption and the solution for electrospray, the extraction solution was also investigated. Methanol containing different proportions of ammonia (0%, 0.5%, 1.0%, 2.0%, 4.0%, 6.0%, and 8.0%, w/w) was applied for the MMIPs-SPE-iEESI-MS analysis. As a result, 2.0% ammonia in methanol (w/w) was the optimal extraction solution (Fig. 3b). An increased ammonia proportion in methanol should be helpful for the desorption of FQs, while an excessively high concentration of ammonia (e.g., 8.0%, w/w) may suppress the ionization efficiency of FQs. Moreover, the volume of the extraction solution used to elute the FQs from the MMIPs material and the flow rate of the solution were also optimized to achieve better elution and ionization efficiency. Higher FQ signal intensity was obtained with a volume of 100 μL and a flow rate of 8 μL min−1 ( Fig. 3c and d). Finally, the optimized conditions showed satisfactory performance for the determination of the three kinds of FQs in raw milk samples.
Quantitative analysis of FQs in milk samples using MMIPs-SPE-iEESI-MS. Three kinds of FQ standard solutions (i.e., norfloxacin, enoxacin, and fleroxacin, respectively) were spiked into blank raw milk samples (2 mL) to make a series of working solutions containing 0.1-500.0 μg L−1 of FQs for MMIPs-SPE-iEESI-MS/MS analysis. In the case of norfloxacin, the signal intensity of m/z 276 responded linearly to norfloxacin concentrations over the range of 0.1-500.0 μg L−1 (R2 = 0.9999) (Fig. 4a). The LOD of norfloxacin, defined by a signal-to-noise ratio (S/N) of 3, was estimated to be 0.019 μg L−1. The relative standard deviations (RSDs) of six replicates for norfloxacin concentrations ranging from 0.1-500.0 μg L−1 were less than 8.7% (detailed in Supplementary Table S1). For the quantitative analysis of enoxacin and fleroxacin, the linear response ranges and relative standard deviation values (n = 6) were 0.1-100.0 µg L−1 (R2 = 0.9999) and less than 7.5% for enoxacin ( Fig. 4b and detailed in Supplementary Table S2), and 0.1-500.0 µg L−1 (R2 = 0.9995) and less than 8.4% for fleroxacin ( Fig. 4c and detailed in Supplementary Table S3), respectively. The LODs defined by a signal-to-noise ratio (S/N) of 3 were estimated to be 0.022 μg L−1 for enoxacin and 0.024 μg L−1 for fleroxacin (Table 1), respectively. Each measurement took less than 4 min (excluding the time for MMIPs preparation). Recoveries of all three FQs from raw milk samples were also estimated by analyzing spiked samples. Acceptable recoveries from 82.5% to 110.0% were obtained for all the samples, and the RSDs (n = 6) of all spiked samples were less than 9.4% (Table 1). Furthermore, the intra-/inter-day precision and accuracy of the method were evaluated with the FQs spiked at three different concentrations in milk samples. The intra-day precision and accuracy were determined on the same day and consisted of six replicates at each of the three concentration levels, while the inter-day precision and accuracy were determined over fourteen consecutive days. The results obtained are shown in Table 2. The intra- and inter-day RSDs were less than 8.2% and 10.9%, respectively, while intra- and inter-day recoveries ranging from 84.7 to 104.8% and from 85.9 to 105.6% were obtained, respectively. As shown in Table 3, the MMIPs-SPE-iEESI-MS results were all in good agreement with those obtained by LC-MS/MS. The good recovery rates (94.2-106.9%) and relatively low relative errors (−5.8% to +6.9%) confirmed that MMIPs-SPE-iEESI-MS meets the requirements for the quantitative determination of FQs in raw milk samples.

Table 2. Method precision and accuracy at three concentrations for the determination of FQs from raw milk samples. a Blank milk samples were spiked with a series of concentrations (µg L−1) of norfloxacin, enoxacin, and fleroxacin, respectively.
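The quantitation above rests on a linear calibration and an S/N = 3 criterion for the LOD. The sketch below illustrates, with made-up numbers (the spiked concentrations, intensities, and noise level are placeholders, not the authors' data), how such a calibration line and an LOD estimate can be derived.

```python
import numpy as np

# Hypothetical calibration data: spiked concentration (ug/L) vs. fragment-ion intensity.
conc = np.array([0.1, 1.0, 10.0, 100.0, 500.0])
intensity = np.array([55.0, 530.0, 5200.0, 51500.0, 258000.0])

# Least-squares fit of intensity = slope * conc + intercept.
slope, intercept = np.polyfit(conc, intensity, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((intensity - pred) ** 2) / np.sum((intensity - intensity.mean()) ** 2)

# LOD at S/N = 3: the concentration whose signal equals 3x the baseline noise.
noise = 15.0                       # placeholder baseline noise level
lod = 3 * noise / slope
print(f"slope={slope:.1f}, R^2={r2:.4f}, LOD ~ {lod:.3f} ug/L")
```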
Discussion
In the optimization of the MMIPs amounts, the signal intensities of the three kinds of FQs increased with the increase of the MIPs amount from 0 to 1.5 mg, indicating that more FQ molecules in the complex milk sample were captured by the MMIPs material, consistent with the higher ratio of MIPs. Interestingly, the signal intensities decreased when 2.0 mg MIPs was used. The preparation of MMIPs by the co-mixing method was interpreted as an "aggregate-wrap" process, i.e., the MIPs were likely to be wrapped by MNPs and aggregated to form a magnetic composite 46,47 . In this respect, sufficient MNPs were necessary to ensure that all the MIPs material could be magnetically coated for the milk matrix separation. As the amount of MNPs was fixed at 2.0 mg, the magnetism of the MMIPs particles decreased when a larger mass of MIPs (e.g., 2.0 mg) was added for the assembly, resulting in the loss of part of the MMIPs material during the solid-liquid separation. Also, a higher mass of MMIPs material might cause a serious aggregation effect, which hindered the elution of FQs with a fixed volume of elution solution. Thus, a lower FQ signal was obtained. Of course, more detailed material properties of the MMIPs and the spontaneous assembling mechanism of MNPs and MIPs will be the subject of our further studies. Matrix effects from highly complex samples are a great challenge for quantitative analysis by AMS because of serious ion suppression. To achieve highly sensitive and selective determination of trace analytes in complex samples, coupling simple, rapid and sensitive sample pretreatment methods to AMS is a promising strategy to improve the performance of AMS [48][49][50][51] . Raw milk is a typical, extremely complex sample that cannot be introduced into MS analysis directly. To address this problem, a facile method of solid-phase extraction based on magnetic molecularly imprinted polymers (MMIPs) combined with iEESI-MS was designed for the quantitative analysis of FQs in raw milk samples. The FQ molecules in the milk were selectively adsorbed by the MMIPs, and the MMIPs (together with the adsorbed FQs) were separated from the milk matrix. Thus, the majority of the milk matrix was cleaned up. Additionally, to avoid interference from milk residues, the separated MMIPs material was washed three times using 1 mL deionized water, acetonitrile, and 15% acetonitrile in deionized water (v/v), respectively. As a result, the matrix of the milk was largely cleared. The target analytes were sequestered by the MMIPs and directly analyzed by iEESI-MS. Due to the highly selective extraction of the MMIPs, ionic suppression is minimized; hence no chromatographic separation is necessary, which greatly increases analytical speed and sensitivity. Moreover, during the MS interrogation, CID experiments were carried out for the suspected FQ ions, i.e., the FQs were identified based on their characteristic fragment ions, which practically avoids false positive results. Our results demonstrate that MMIPs-SPE-iEESI-MS enables direct quantification of sub-ppb levels of FQs in raw milk samples without tedious sample pretreatments (e.g., centrifugation and chemical extraction). Furthermore, a comprehensive comparison of the analytical performance of the proposed MMIPs-SPE-iEESI-MS method with that of previously reported methods 46,[52][53][54][55][56] for the analysis of FQs is presented in Table 4. The data showed that the method established in this work offered higher speed and better sensitivity than the previously reported methods.
The combination of MMIPs-SPE with iEESI-MS benefited from the high performance of the MMIPs material in the capture of FQs from milk (i.e., a fast and easy sample matrix clean-up step), as well as from the specially designed sample loading/ionization process of iEESI. Molecularly imprinted polymers (MIPs) are a class of materials engineered to bind one target compound or a class of structurally related compounds with high selectivity 57,58 . Due to the high selectivity of MIPs, ionic suppression during ESI could be minimized; e.g., Figueiredo et al. employed MIP-SPE in ESI-MS for the analysis of drugs in human plasma 59 . Although merits such as no chromatographic separation and minimized ionic suppression were achieved in the combination of MIP-SPE-ESI 59 , tedious and laborious sample pretreatments including liquid-liquid extraction (for protein elimination), centrifugation, preconcentration, sample re-dissolution, etc. were still needed on account of the high complexity of the plasma sample. In this respect, ambient ionization technologies provide a unique strategy for directly sampling/ionizing analytes from the sample with no/minimal sample pretreatments 22,60,61 . Undoubtedly, the combination of facile sample pretreatment strategies (e.g., SPE, SPME, etc.) with ambient ionization methods is promising when facing highly complex samples such as plasma and milk samples [48][49][50][51] . iEESI belongs to the ambient ionization methods family and has been developed as a direct and fast sampling and ionization method for the mass spectrometric analysis of complex samples 41,62 . The combination of an SPE method with iEESI is a promising strategy to improve the analytical performance of iEESI. In a previous study, the coupling of magnetic solid-phase extraction with iEESI was developed to study 1-hydroxypyrene in undiluted human urine samples with the assistance of polypyrrole (Ppy)-coated nanocomposites; however, the limited selectivity of such sorbents toward chemicals with similar polarity is a notable drawback 38,42,63 . To address this concern, high selectivity and specificity could be introduced by the MIPs material. The selectivity of MIPs is introduced during MIPs synthesis, in which a template molecule, designed to mimic the analyte, guides the formation of specific cavities that are sterically and chemically complementary to the target analytes 64,65 . Strong retention is offered between a MIP phase and its target analyte(s) based on multiple interactions (e.g., Van der Waals, hydrogen bonding, ionic, hydrophobic) between the MIP cavity and analyte functional groups 64,65 . As a result, even trace FQs in the raw milk were captured and subsequently subjected to iEESI-MS.
To conclude, the combination of fast and easy-to-use sample pretreatment with mass spectrometry is a promising strategy for the high-throughput quantitative detection of trace analytes in highly complex samples. As a typical example of this analytical strategy, MMIPs-SPE-iEESI-MS was designed for the confident quantitative analysis of FQs in raw milk samples. As a result, FQs in the raw milk sample were selectively enriched by the MMIPs and then directly eluted by the electrospraying solvent to produce protonated FQ ions for mass spectrometric interrogation. An LOD of ≤0.03 µg L−1 and a high speed of 4 min per sample were achieved. The analytical performance for real sample analysis was validated by a nationally standardized protocol using LC-MS, resulting in acceptable relative errors from −5.8% to +6.9% for 6 tested samples. Our results demonstrate that MMIPs-SPE-iEESI-MS is a facile method for the high-throughput quantitative analysis of FQs in raw milk samples, which shows promising applications in food safety control and biofluid sample analysis.

Methods

The MMIPs material was prepared by the previously reported co-mixing method 46 , i.e., 2.0 mg Fe3O4 magnetite nanocomposites (MNPs) and 1.5 mg molecularly imprinted polymers (MIPs) were co-mixed in 1.0 mL methanol by vigorously vortexing for 1 min in a 5-mL glass vial. Then, the methanol was removed from the MMIPs with the assistance of an external magnet, and residues of methanol in the MMIPs material volatilized away after about 1 min. The obtained MMIPs were used for the extraction of FQs from milk samples. A 2 mL aliquot of raw milk sample was added into the 5-mL glass vial containing the MMIPs material (3.5 mg) and vortexed for 1 min. The suspension mixture was loaded into a 1 mL syringe (Hamilton company, Nevada, USA), and the FQ-captured MMIPs were magnetically gathered onto the inner wall of the syringe with an external magnet. The milk waste was discharged into a glass beaker. After two repeats of the MMIPs collection, all the FQ-captured MMIPs were gathered on the inner wall of the syringe. To avoid interference from milk residues during the ionization of FQs, the FQ-captured MMIPs inside the syringe were washed using 1 mL deionized water, acetonitrile, and 15% acetonitrile in deionized water (v/v), respectively. After loading with 100 μL extraction solution (2% ammonia in methanol, w/w), the syringe was shaken for 20 s to allow the FQs to be eluted, forming an FQ solution suitable for electrospray. The FQ solution was pumped through a capillary for ESI at a flow rate of 8 μL min−1; a strong magnet was placed outside the capillary to prevent the MMIPs material from reaching the ESI nozzle. Thus, all the MMIPs material was purposely held by the external magnet, and no particles reached the ion entrance of the mass spectrometer. The MS/MS signal collection duration was 1 min, and the average signal intensities of the fragment ions were selected from a 30 s window. The average signal intensities of the fragment ions at m/z 276, m/z 277, and m/z 326 were selected as the analytical responses to establish the quantitative methods for norfloxacin, enoxacin, and fleroxacin, respectively. It is noted that the MMIPs could be used for about 3 runs of MMIPs-SPE-iEESI-MS, beyond which the performance would decrease significantly due to matrix contamination.

All the experiments were carried out using an Orbitrap Fusion™ Tribrid™ mass spectrometer (Thermo Scientific, San Jose, CA, USA). Mass spectra were collected over the mass range of m/z 50-500 in positive ion detection mode. The electrospray solution was pumped at a flow rate of 8 μL min−1 using a syringe pump (Harvard Apparatus, Holliston, MA, USA). The ionization voltage was set at +3.0 kV, and the heated LTQ capillary was maintained at 250 °C. The pressure of the nitrogen sheath gas was 60 Arb. CID experiments were carried out for MS/MS analysis. During the CID experiments, precursor ions were isolated with a window width of 1.0 Da, and the normalized collision energy (NCE) was set to 30-40%. Other parameters were set to instrument default values. Scanning electron microscopy (SEM) and energy dispersive X-ray analysis (EDX) were performed to investigate the morphology, size, and elemental composition of the MIPs, MNPs, and MMIPs materials with an FIB-SEM instrument (Helios Nanolab 600i from FEI Co., USA). The electron beam voltage and working distance of the instrument were set to 10-20 kV and 4 mm, respectively. | 2023-02-17T14:22:24.494Z | 2017-11-07T00:00:00.000 | {
"year": 2017,
"sha1": "67efaf4535c5ea9a92dbfe15d98ecbba1290f679",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-15202-1.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "67efaf4535c5ea9a92dbfe15d98ecbba1290f679",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": []
} |
248895022 | pes2o/s2orc | v3-fos-license | Bulk and single-cell transcriptome profiling reveal necroptosis-based molecular classification, tumor microenvironment infiltration characterization, and prognosis prediction in colorectal cancer
Background Necroptosis is a new form of programmed cell death that is associated with cancer initiation, progression, immunity, and chemoresistance. However, the roles of necroptosis-related genes (NRGs) in colorectal cancer (CRC) have not been explored comprehensively. Methods In this study, we obtained NRGs and performed consensus molecular subtyping by “ConsensusClusterPlus” to determine necroptosis-related subtypes in CRC bulk transcriptomic data. The ssGSEA and CIBERSORT algorithms were used to evaluate the relative infiltration levels of different cell types in the tumor microenvironment (TME). Single-cell transcriptomic analysis was performed to confirm classification related to NRGs. NRG_score was developed to predict patients’ survival outcomes with low-throughput validation in a patients’ cohort from Fudan University Shanghai Cancer Center. Results We identified three distinct necroptosis-related classifications (NRCs) with discrepant clinical outcomes and biological functions. Characterization of TME revealed that there were two stable necroptosis-related phenotypes in CRC: a phenotype characterized by few TME cells infiltration but with EMT/TGF-pathways activation, and another phenotype recognized as immune-excluded. NRG_score for predicting survival outcomes was established and its predictive capability was verified. In addition, we found NRCs and NRG_score could be used for patient or drug selection when considering immunotherapy and chemotherapy. Conclusions Based on comprehensive analysis, we revealed the potential roles of NRGs in the TME, and their correlations with clinicopathological parameters and patients’ prognosis in CRC. These findings could enhance our understanding of the biological functions of necroptosis, which thus may aid in prognosis prediction, drug selection, and therapeutics development. Supplementary Information The online version contains supplementary material available at 10.1186/s12967-022-03431-6.
Background
Necroptosis is a novel form of regulated necrotic cell death mechanistically mimicking apoptosis and morphologically resembling necrosis [1,2]. It is mainly regulated by key proteins such as RIPK1, RIPK3, and their substrate, mixed-lineage kinase domain-like protein (MLKL) [3][4][5]. Previous studies have reported the relevance of necroptosis in many human diseases, including inflammatory diseases, neurodegenerative diseases, and cancer [6][7][8]. In addition, it has been suggested to be involved in cancer initiation, progression, immunity, and chemoresistance, providing novel perspectives and potential targets for cancer therapy, for which several therapeutic agents aiming to treat cancer by inducing or manipulating necroptosis are under investigation [6,9].
Colorectal cancer (CRC) is a major lethal malignancy worldwide [10,11]. As in other malignancies, the tumor microenvironment (TME) plays an indispensable role in CRC tumorigenesis [12]. Previous reports indicated that myeloid-derived suppressor cells (MDSCs), which suppress anti-tumor immunity, accumulate in CRC tissue and promote cancer metastasis [13,14]. In advanced-stage CRC, the well-known immune effectors, CD8+ T cells, can be suppressed by IL-17A secreted from Th17 cells [15]. As the most exciting breakthrough in cancer treatment, immune-checkpoint blockade (ICB) therapy based on CTLA-4 and PD-1 has demonstrated promising efficacy in CRC patients [16][17][18]. However, only some of those with microsatellite instability-high (MSI-H) or mismatch repair-deficient (dMMR) status can benefit from ICB therapy [19]. Therefore, it is necessary and urgent to further investigate the TME characteristics in CRC to identify more effective immunotherapeutic targets.
The involvement of necroptosis has been reported not only in cancer cells but also in other components of the TME [20,21]. For example, necroptosis could promote pancreatic tumorigenesis by inducing the expression of CXCL1, a potent chemoattractant for myeloid cells that is highly expressed in a RIP1- and RIP3-dependent manner, which could shape an immune-suppressive environment [22]. Therefore, further exploring the correlation between TME cell infiltration and necroptosis can provide new perspectives for understanding the underlying mechanisms and developing cancer therapeutics, such as the combination of necroptosis-based therapy and immunotherapy.
By using bulk and single-cell transcriptomic data analysis, we identified two stable necroptosis-related phenotypes in CRC: a phenotype characterized by little infiltration of TME cells but with activation of EMT/TGF-β pathways, and another recognized as an immune-excluded phenotype [23]. We further established a scoring system, which could reveal TME characteristics, help accurately determine patients' survival outcomes, and predict responses to immunotherapy and chemotherapy.
Preparation of bulk RNA expression datasets
A total of 1003 patients from the Gene Expression Omnibus (GEO) database (including GSE33113, GSE39582, GSE14333, and GSE37892) were included in this study. We corrected the batch effects of the GEO datasets using the ComBat method [24] and integrated them into a meta-GEO cohort.
A total of 626 samples (578 tumor and 48 normal) in the TCGA cohort were obtained from UCSC Xena (https://xenabrowser.net/datapages/, TCGA-COAD/READ). Somatic mutation data were downloaded from https://portal.gdc.cancer.gov/repository. Copy number variation information was extracted from UCSC Xena. The basic information of these datasets is shown in Additional file 10: Table S1.
Analysis of single-cell RNA data
Single-cell RNA (scRNA) datasets were downloaded from the GEO database (including the CRC datasets GSE144735 and GSE178318, and the LUAD dataset GSE131907). We calculated single-cell scores using the 'AddModuleScore' function with signatures α and β.
To calculate the risk score from single-cell data, we first averaged the gene expression of each patient's cells to represent their bulk gene expression level. Then we calculated their risk score as follows: risk score = Σ (Exp_i × coef_i), according to the methods of the necroptosis-related gene score (NRG_score).
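A minimal sketch of this pseudo-bulk risk-score calculation is shown below; the expression matrix, the second gene name, and the coefficients are placeholders (the real coefficients come from the Lasso Cox model described under NRG_score, and the authors' actual implementation used R/Seurat).

```python
import pandas as pd

# Hypothetical single-cell expression matrix: rows = cells, columns = genes,
# plus a 'patient' column identifying which patient each cell comes from.
cells = pd.DataFrame({
    "patient": ["P1", "P1", "P2", "P2"],
    "DHX15":   [2.1, 1.8, 0.5, 0.7],
    "GENE_B":  [0.3, 0.4, 1.2, 1.0],   # hypothetical second model gene
})

# Placeholder Lasso Cox coefficients for the model genes (illustrative values only).
coefs = {"DHX15": -0.004956222, "GENE_B": 0.12}

# Average each patient's cells to obtain a pseudo-bulk expression profile ...
pseudo_bulk = cells.groupby("patient")[list(coefs)].mean()

# ... then risk score = sum_i (Exp_i * coef_i) over the model genes.
risk = sum(pseudo_bulk[g] * c for g, c in coefs.items())
print(risk)
```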
Consensus molecular clustering by "ConsensusClusterPlus"
We performed consensus clustering with "ConsensusClusterPlus" to identify classifications of CRC patients based on the expression of necroptosis-related genes (NRGs). The final number of clusters was determined from the cumulative distribution function (CDF) curves, and K = 3 was finally set as the number of clusters. The cluster annotations of all datasets are shown in Additional file 10: Table S1.
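ConsensusClusterPlus is an R package; as a rough Python illustration of the same idea (repeated clustering on subsampled data, then inspecting the consensus matrix/CDF to choose K), one could do something like the following. The subsampling fraction, number of repetitions, and use of k-means are illustrative choices, not the authors' exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def consensus_matrix(X, k, reps=100, frac=0.8, seed=0):
    """Fraction of runs in which each sample pair co-clusters, among runs where both were drawn."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    together = np.zeros((n, n))
    sampled = np.zeros((n, n))
    for _ in range(reps):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X[idx])
        sampled[np.ix_(idx, idx)] += 1
        together[np.ix_(idx, idx)] += (labels[:, None] == labels[None, :]).astype(float)
    return np.divide(together, sampled, out=np.zeros_like(together), where=sampled > 0)

# Example: a samples-by-genes matrix (random here); compare the consensus values
# across k = 2..5 (e.g. via their CDFs) to pick the number of clusters.
X = np.random.default_rng(1).normal(size=(60, 33))
for k in range(2, 6):
    M = consensus_matrix(X, k)
    print(k, np.round(np.mean(M[np.triu_indices_from(M, 1)]), 3))
```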
Gene set variation analysis (GSVA) and single-sample gene set enrichment (ssGSEA) analysis
We calculated the pathway activities of tumor samples ( Fig. 2E and Additional file 3: Fig. S3C) using the GSVA R package. The gene signatures included in the analysis were downloaded from the Hallmark gene sets and the C2 curated gene sets (MSigDB database v7.4) [25].
We evaluated immune cell type signature scores using ssGSEA. The immune cell type signatures were extracted from the study of Charoentong [26].
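ssGSEA itself is implemented in the GSVA R package; as a simplified, rank-based stand-in (not the exact ssGSEA statistic), a per-sample signature score can be sketched as follows, with a toy expression matrix and gene set.

```python
import numpy as np
import pandas as pd

def signature_score(expr, gene_set):
    """Simplified per-sample enrichment: mean percentile rank of the gene set minus 0.5,
    so values range roughly from -0.5 (lowly expressed set) to 0.5 (highly expressed set).
    expr: DataFrame with rows = genes, columns = samples."""
    ranks = expr.rank(axis=0, pct=True)          # percentile rank of each gene within each sample
    genes = [g for g in gene_set if g in expr.index]
    return ranks.loc[genes].mean(axis=0) - 0.5

# Toy data: 5 genes x 3 samples, and a 2-gene "activated CD8 T cell"-like signature.
expr = pd.DataFrame(np.random.default_rng(0).normal(size=(5, 3)),
                    index=["CD8A", "GZMB", "ACTB", "GAPDH", "MKI67"],
                    columns=["S1", "S2", "S3"])
print(signature_score(expr, ["CD8A", "GZMB"]))
```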
CMS classification for bulk RNA-seq
We utilized CMSclassifier [27] to classify TCGA-COAD/READ tumor samples. The CMS subtypes of TCGA and GEO databases were shown in Additional file 10: Table S1.
TME infiltration evaluation using ssGSEA, CIBERSORT and ESTIMATE
We adopted the CIBERSORT [28] deconvolution approach to evaluate the relative abundance of 22 tumor-infiltrating immune cell types (TIICs). To confirm the stability of the TME infiltration patterns of the necroptosis-related clusters, we also evaluated immune cell infiltration with the cell types from the study of Charoentong [26] using ssGSEA analysis [29]. In addition, we used the ESTIMATE algorithm to calculate the tumor purity and the immune and stromal scores of each patient.
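CIBERSORT performs ν-support-vector regression against its LM22 signature matrix and is distributed as an R/web tool; as a much simpler illustration of the underlying idea (expressing a bulk profile as a non-negative mixture of cell-type signatures), a non-negative least-squares sketch is shown below with made-up signature and bulk vectors.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical signature matrix: rows = marker genes, columns = cell types.
signature = np.array([
    [10.0, 0.5],   # gene enriched in cell type 1
    [0.5,  8.0],   # gene enriched in cell type 2
    [1.0,  1.0],   # housekeeping-like gene
])

# Hypothetical bulk profile: 70% type 1 + 30% type 2 (plus a small offset as noise).
bulk = 0.7 * signature[:, 0] + 0.3 * signature[:, 1] + 0.05

weights, _ = nnls(signature, bulk)
fractions = weights / weights.sum()   # normalize to relative abundances
print(np.round(fractions, 2))         # ~ [0.7, 0.3]
```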
Somatic mutation analysis
Somatic mutation data in VarScan file format were downloaded from https://portal.gdc.cancer.gov/repository. Copy number variation information was curated from UCSC Xena online. The maftools R package was used to identify mutated genes and calculate the TMB level.
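maftools is an R package; a minimal Python sketch of the same TMB computation (non-synonymous mutation count per sample divided by the size of the captured exome, taken here as roughly 38 Mb for whole-exome data — an assumption for illustration, not a value stated by the authors) could look like this.

```python
import pandas as pd

# Hypothetical MAF-like table: one row per mutation call.
maf = pd.DataFrame({
    "Tumor_Sample_Barcode": ["S1", "S1", "S1", "S2"],
    "Variant_Classification": ["Missense_Mutation", "Nonsense_Mutation",
                               "Silent", "Missense_Mutation"],
})

# Keep non-synonymous classes only (a subset; maftools uses a similar default list).
nonsyn = {"Missense_Mutation", "Nonsense_Mutation", "Frame_Shift_Del",
          "Frame_Shift_Ins", "Splice_Site", "In_Frame_Del", "In_Frame_Ins"}
counts = (maf[maf["Variant_Classification"].isin(nonsyn)]
          .groupby("Tumor_Sample_Barcode").size())

exome_size_mb = 38            # assumed captured exome size in megabases
tmb = counts / exome_size_mb
print(tmb)                    # mutations per Mb for each sample
```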
Quantitative real-time polymerase chain reaction (RT-qPCR)
We collected 208 pairs of patients' tissues (including CRC and adjacent non-tumor tissues) from Fudan University Shanghai Cancer Center (FUSCC) in this study. The written informed consent was signed by all patients according to the Institutional Review Boards of FUSCC, and the study was approved by the Ethical Committee of FUSCC.
RNA was extracted from these samples using TRIzol reagent (Invitrogen, Carlsbad, CA, USA) and then reverse-transcribed into complementary DNA (cDNA) with a PrimeScript RT reagent kit (Takara). RT-qPCR was then performed using SYBR-Green assays (Takara). The data were calculated using the 2^−ΔΔCt method and normalized to 18S rRNA. The primer sequences used in our study are shown in Additional file 15: Table S6.
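For reference, the 2^−ΔΔCt calculation can be written out explicitly; the Ct values below are invented for illustration, with 18S rRNA as the normalizer as in the protocol above.

```python
# Hypothetical Ct values for one target gene and the 18S rRNA reference.
ct_target_tumor, ct_ref_tumor   = 24.0, 12.0   # tumor tissue
ct_target_normal, ct_ref_normal = 26.5, 12.2   # adjacent non-tumor tissue

delta_ct_tumor  = ct_target_tumor - ct_ref_tumor      # ΔCt, tumor
delta_ct_normal = ct_target_normal - ct_ref_normal    # ΔCt, normal
delta_delta_ct  = delta_ct_tumor - delta_ct_normal    # ΔΔCt

fold_change = 2 ** (-delta_delta_ct)
print(fold_change)   # ~ 4.9-fold higher expression in tumor
```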
Construction of the prognostic NRG_score
NRG_score was calculated to quantify the expression patterns of the NRGs in individual samples. First, the differentially expressed genes (DEGs) were subjected to univariate Cox regression analysis to identify those linked to CRC overall survival. Second, the patients were classified into different necroptosis phenotype-related groups (gene-cluster A, gene-cluster B, and gene-cluster C) for deeper analysis using an unsupervised clustering method based on the expression of the prognostic DEGs (Additional file 13: Table S4) and the 33 NRGs. Finally, based on the necroptosis phenotype-related prognostic genes, the Lasso Cox regression algorithm was used to minimize the risk of over-fitting using the "glmnet" R package [30]. We analyzed the change trajectory of each independent variable and then used tenfold cross-validation to establish a model. As previously reported [31], we performed 1000 iterations in total and included 5 gene groups for further screening. A gene model with 13 genes showed the highest frequency of 726 compared with the other four gene models (Fig. 5A). Thus, this 13-gene model was applied to generate the gene signature for calculating NRG_score, which was calculated as follows: NRG_score = Σ (Exp_i × coef_i), where Exp_i is the expression of each model gene and coef_i is its Lasso Cox coefficient. Based on the median risk score, a total of 578 patients in the training set were divided into low-risk and high-risk groups for survival analysis. Similarly, the testing and all sets were divided into low- and high-risk groups, each of which was subjected to Kaplan-Meier survival analysis and the generation of receiver operating characteristic (ROC) curves. The NRG_score values of the TCGA and GEO datasets are shown in Additional file 14: Table S5.
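As a rough, self-contained sketch of this step (not the authors' R/glmnet pipeline), the snippet below fits an L1-penalized Cox model with lifelines on toy data, forms the risk score as the coefficient-weighted sum of expression, and splits patients at the median; gene names, the penalty strength, and the data are all placeholders.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame(rng.normal(size=(n, 3)), columns=["GENE1", "GENE2", "GENE3"])
df["time"] = rng.exponential(scale=50, size=n)     # toy follow-up times (months)
df["event"] = rng.integers(0, 2, size=n)           # 1 = death observed, 0 = censored

# L1-penalized (Lasso-like) Cox regression; the penalizer value is an arbitrary example.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time", event_col="event")

# Risk score = sum_i (expression_i * coefficient_i) over the model genes.
coefs = cph.params_
risk = (df[coefs.index] * coefs).sum(axis=1)

# Median split into high- and low-risk groups, as in the text.
group = np.where(risk > risk.median(), "high", "low")
print(pd.Series(group).value_counts())
```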
Drug susceptibility analysis
To explore the differences in the therapeutic effects of drugs in CRC patients, we calculated the imputed drug sensitivity scores for drugs from Sanger's Genomics of Drug Sensitivity in Cancer (GDSC) v2 using the "oncoPredict" package [32].
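oncoPredict is an R package that, broadly speaking, trains a regularized regression on GDSC cell-line expression and drug-response data and then applies it to patient expression profiles; the Python sketch below illustrates that general idea with ridge regression on synthetic data. The data, the ridge penalty, and the assumption that this mirrors oncoPredict's internals are all stated here for illustration only.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic "GDSC-like" training data: cell-line expression and a measured drug response (e.g. ln IC50).
cell_line_expr = rng.normal(size=(100, 50))        # 100 cell lines x 50 genes
drug_response = cell_line_expr[:, 0] * 2.0 + rng.normal(scale=0.5, size=100)

# Train a ridge model on cell lines, then impute sensitivity for patient tumors.
model = Ridge(alpha=1.0).fit(cell_line_expr, drug_response)

patient_expr = rng.normal(size=(10, 50))           # 10 patients x the same 50 genes
imputed_sensitivity = model.predict(patient_expr)  # lower value ~ more sensitive if response is ln IC50
print(np.round(imputed_sensitivity, 2))
```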
Kaplan-Meier survival analysis
We plotted Kaplan-Meier (K-M) survival curves using the R package 'survminer' (0.4.6). We stratified samples into high and low gene expression subgroups using the surv_cutpoint function.
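The same Kaplan-Meier comparison can be sketched in Python with lifelines (the authors used survminer in R); the survival times, events, and expression-based grouping below are toy values, and a simple median cutpoint stands in for the optimal cutoff that surv_cutpoint searches for.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 100
time = rng.exponential(scale=40, size=n)   # toy follow-up times
event = rng.integers(0, 2, size=n)         # 1 = event observed, 0 = censored
expression = rng.normal(size=n)            # toy expression of one gene

high = expression > np.median(expression)  # median split (surv_cutpoint optimizes this instead)

kmf_high = KaplanMeierFitter().fit(time[high], event[high], label="high expression")
kmf_low = KaplanMeierFitter().fit(time[~high], event[~high], label="low expression")
# kmf_high.plot_survival_function(); kmf_low.plot_survival_function()  # draw the K-M curves

result = logrank_test(time[high], time[~high], event[high], event[~high])
print(f"log-rank P = {result.p_value:.3f}")
```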
Statistical analyses
Statistical analysis was performed using R (version 4.0.0) and GraphPad Prism (version 7.04).
Landscape of genetic variation of NRGs in CRC
A flowchart of our research is shown in Fig. 1A. In this study, we investigated the roles of 33 NRGs (Additional file 11: Table S2) in CRC. As expected, gene ontology (GO) enrichment analysis showed that these genes were characterized by biological processes of cell death, especially necroptosis (Fig. 1B). Then, the frequency of somatic mutations of the NRGs in CRC was analyzed (Fig. 1C). A total of 111 out of 502 CRC samples in the TCGA cohort showed genetic alterations of NRGs. Among them, CASP8 had the highest mutation rate (4%), while four NRGs (FADD, TRADD, TNF and AURKA) did not present any mutations. Further analysis of copy number variation (CNV) revealed prevalent copy number alterations in these NRGs (Fig. 1D). The locations of the CNV alterations on chromosomes are shown in Fig. 1E. Based on paired tumor-normal sample data, principal component analysis (PCA) was conducted, which showed that the NRGs could distinguish CRC samples from normal ones (Fig. 1F). Afterwards, the expression of NRGs between CRC and normal samples was compared, revealing that genes with CNV amplification, such as MYC, FADD, AURKA, TRAF2 and PGAM5, were expressed at significantly higher levels in tumor samples than in normal samples, while the expression of TLR3, CHUK, RIPK1, FAS, NFKB1 and AXL was markedly decreased in tumor samples, consistent with their CNV deletion (Fig. 1G). Taken together, the genetic landscape and expression levels of NRGs between CRC and normal samples were significantly different, indicating that necroptosis might play an important role in regulating CRC tumorigenesis.
Identification of necroptosis-related subtypes in CRC
To comprehensively understand the expression patterns of NRGs involved in tumorigenesis, 1581 patients from five available CRC cohorts (TCGA-COAD/READ, GSE14333, GSE33113, GSE37892 and GSE39582) were integrated in our study for further analyses. The landscape of NRG interactions, regulator connections, and their prognostic value in CRC patients is demonstrated in a necroptosis network ( Fig. 2A). Univariate Cox regression and Kaplan-Meier analyses showed that some of them had prognostic value, and the details are shown in Additional file 10: Fig. S1 and Additional file 12: Table S3. Based on these analyses, seven NRGs (TLR3, TLR4, BIRC2, TRAF2, CASP8, NFKB1 and TNFRSF10B) were identified as prognostic genes. We next used a consensus clustering algorithm [33] to stratify CRC tumor samples based on the expression of the 33 NRGs (Fig. 2B, C; Additional file 2: Fig. S2A, B). Accordingly, we identified three distinct clusters and referred to them as necroptosis-related clusters (NRCs), including 141 cases in NRC1, 204 in NRC2 and 233 in NRC3 ( Fig. 2D-F, Additional file 10: Table S1), among which NRC1 and NRC3 had worse long-term prognoses in the TCGA-COAD/READ cohort ( Fig. 2D; overall survival (OS), P = 0.0053; log-rank test). In addition, we combined four GEO datasets with available clinical data (GSE33113, GSE39582, GSE14333 and GSE37892) into a meta-GEO cohort and obtained similar results for classification and prognosis (Additional file 3: Fig. S3B-S3D; relapse-free survival (RFS), P < 0.0001, log-rank test). Moreover, further analysis revealed significantly different distributions of clinicopathological characteristics among the NRCs (Fig. 2E). For example, NRC1 had the most patients with advanced stage disease (stage IV) (15.60%, P = 0.0086, Pearson's Chi-square test) and lymphatic invasion (51.77%, P < 0.0001, Pearson's Chi-square test), which may explain why it showed the worst prognosis.
To understand the biological discrepancies among the three distinct clusters, we performed gene set variation analysis (GSVA) [34] on tumor samples ( Fig. 2E and Additional file 3: Fig. S3A, C, D). The results showed that NRC1 and NRC3 were mainly enriched in tumor-specific and stromal pathways such as TGF-β signaling and epithelial-mesenchymal transition (EMT), supporting their poor prognosis. Interestingly, among the three clusters, NRC3 was remarkably enriched with immune cell and immunotherapy-related pathways, such as lymphocyte, monocyte, PD-1 and CTLA4 signaling. All of these findings indicated marked differences in the intrinsic biological underpinnings of the three NRCs in CRC.
Distinct tumor microenvironment infiltration in NRCs
Previous studies have indicated MSI-H/ dMMR status could predict the response to immunotherapy in CRC [16]. We next explored the MSI/MMR status in tumor samples of NRCs, which showed that MSI-H was mainly concentrated within NRC2 and NRC3 (Fig. 3A). When the association of NRCs with the consensus molecular subtype (CMS) system was analyzed, it revealed that CMS1-immune subtype was mainly clustered into NRC2 and NRC3 (Fig. 3B). In GSE39582 cohort, samples with dMMR status were predominantly grouped into NRC2 and NRC3 (Fig. 3C). Notably, CMS4 and CSC subtypes, characterized by prominent transforming growth factor-β (TGF-β) activation, stromal invasion and angiogenesis [26], were mainly concentrated within NRC3 (Fig. 3C).
To further characterize the microenvironment heterogeneity of the NRCs, we performed CIBERSORT [28] and ssGSEA analyses (Fig. 3D, E; Additional file 4: Fig. S4A). The results showed that not only antitumor immune cell populations such as memory CD4 + T cells and activated CD4 + /CD8 + T cells, but also immune-suppressive cells such as MDSCs and regulatory T cells, were enriched within NRC3. Moreover, we used the ESTIMATE algorithm [35] to quantify the overall infiltration of immune cells (Immune score), stromal cells (Stromal score) and tumor cell purity (Tumor purity) across the three NRCs ( Fig. 3F and Additional file 4: Fig. S4B). Here we demonstrated that NRC3 encompassed low tumor purity and displayed remarkable stromal cell infiltration. Taken together, NRC3 was considered an immune-excluded phenotype characterized by stromal activation and weakened immune cell infiltration. However, there was no significant difference in immune cell infiltration between NRC1 and NRC2 by CIBERSORT and ssGSEA analyses. Using the ESTIMATE algorithm, we observed that NRC2 had higher tumor purity than NRC1, while NRC1 displayed stronger stromal cell infiltration than NRC2 ( Fig. 3F and Additional file 4: Fig. S4B). These features were not consistent with the MSI-H/CMS1-like characteristics of NRC2 shown in Fig. 3A-C. As previously reported, the expression of PD-1/PD-L1 could predict the response to immunotherapy in some cancers [36]. We next compared the PD-1/PD-L1 expression levels among the three NRCs and observed the highest expression in NRC3 ( Fig. 3G and Additional file 4: Fig. S4C). However, considering the immune-excluded phenotype of NRC3, patients in NRC3 might display an ineffective response to anti-PD-1/PD-L1 treatment, which might partially explain why high expression of PD-1/PD-L1 has not been clinically demonstrated to effectively predict immunotherapy response in CRC.
Necroptosis phenotype-related DEGs in CRC
To further confirm the underlying molecular and clinical patterns determined by the NRGs, we overlapped 2862 DEGs among the three NRCs and recognized them as the necroptosis phenotype-related signature (Additional file 5: Fig. S5). We next subjected these DEGs to univariate Cox regression analysis and obtained 475 prognostic genes. Then, we performed unsupervised consensus clustering based on these 475 prognostic genes and divided the TCGA patients into three necroptosis phenotype-related signature groups with different clinicopathologic features, which were defined as gene-cluster A, B and C ( Fig. 4A; Additional file 10: Table S1). By hierarchical clustering and gene ontology (GO) enrichment analysis (Fig. 4C), the 475 prognostic genes were grouped into only two sets, signature genes A and C (Additional file 13: Table S4).
Genes A were clustered into gene-cluster A and were associated with metabolic processes and stromal biological processes such as endothelial tube morphogenesis. Genes C were enriched within gene-cluster C and were associated with immune cell activation and antigen processing. We observed that gene-cluster A presented the worst prognosis ( Fig. 4B; overall survival (OS), P < 0.0001, log-rank test), with the highest proportion of advanced stage patients (stage IV) (15.66%, P = 0.0053, Pearson's Chi-square test) (Fig. 4A) and the most patients with lymphatic invasion (51.81%, P = 0.0003, Pearson's Chi-square test). We also found that gene-cluster A contained the most NRC1 tumors, while gene-cluster C had most of the NRC3 tumors (Fig. 4A). For the CMS subtypes (Fig. 4A), CMS4 was mainly grouped into gene-cluster C, consistent with the pattern of NRC3 (28.32% in gene-cluster C, P = 0.0021, Pearson's Chi-square test). Subsequent ESTIMATE analysis showed that gene-cluster C had low tumor purity and remarkable stromal cell infiltration (Fig. 4D). Moreover, gene-cluster C displayed the highest expression level of PD-1/PD-L1, similar to NRC3 (Fig. 4E). For TME cell infiltration (Fig. 4F), both adaptive and innate immune cells were enriched in gene-cluster C.
Overall, based on the necroptosis-related genes, there were two stable distinct phenotypes in CRC: like NRC1, gene-cluster A was characterized by little TME cell infiltration (Figs. 3F and 4D) but with EMT/TGF-β pathway activation; and like NRC3, gene-cluster C was characterized by remarkable stromal and immune cell infiltration together with EMT/TGF-β activation, which was similar to CMS4 and thus recognized as an immune-excluded phenotype [23].
Single-cell analysis of NRCs
To further understand the biological and TME characteristics of NRC1 and NRC3, we analyzed single-cell datasets of CRC (GSE144735 [37] and GSE178318 [38]). We first overlapped the representative genes of NRC1, gene-cluster A and the NRGs, and obtained a total of 10 genes (RIPK3, IKBKB, TRADD, TYRO3, MAP3K7, NFKB1, CASP8, CHUK, HSP90AA1; Fig. 5D). We then used these two signatures to score the single-cell data of the SMC and KUL CRC cohorts from GSE144735 (Fig. 5B, E, Additional file 6: Fig S6A and S6B). The results showed that score β in TME cells (especially in stromal and T cells) was higher than score α ( Fig. 5C and Additional file 6: Fig. S6C). Therefore, NRC3 and gene-cluster C were indeed infiltrated by stromal and immune cells, consistent with an immune-excluded phenotype. Just as previously reported [37,39], the strong stromal cell infiltration pattern might cause the CMS4-like phenotype and EMT/TGF-β activation in NRC3 and gene-cluster C. Next, we included a single-cell dataset (GSE178318) that contained liver metastases and scored a total of 19,483 tumor epithelial cells using signatures α and β (Fig. 5F). We found that score α in epithelial cells from liver metastases was higher than that from CRC primary sites, while score β in liver metastases was lower than in primary sites (Fig. 5G), indicating that a high score α might predict a high risk of CRC liver metastasis. Because EMT is a crucial step that promotes tumor metastasis [40], we postulated that the EMT phenotype of NRC1 was mainly exhibited by tumor cells, while the EMT phenotype of NRC3 was caused by its stromal cell infiltration.
Finally, we explored whether these findings could be replicated in another cancer. We performed identical analyses on single-cell data of metastatic lung adenocarcinoma (LUAD) (GSE131907) [41]. We found that score β was higher in TME cells (Fig. 6A-C). Then, we extracted tumor epithelial cells from early-stage (tLung) and advanced-stage (tL/B) primary tumors, metastatic lymph nodes (mLN) and brain metastases (mBrain). By scoring the tumor cells using the two signatures (Fig. 6D), we observed that score α in mBrain was significantly higher than in the primary site tLung (Fig. 6E). Score α in mLN was significantly higher than in the primary sites, including tLung and tL/B (Fig. 6E). However, score β was the highest in the advanced-stage primary sites (tL/B; Fig. 6E). All these results were similar to those in the CRC datasets. Taken together, there were indeed two stable patterns based on necroptosis-related genes.
Construction and validation of the prognostic NRG_score
A flowchart illustrating the generation of the signature for NRG_score is presented in Additional file 7: Fig. S7A-B. As previously reported [30,31], we conducted 1000 iterations in total and 5 gene groups were included for further screening. A gene model with 13 genes showed the highest frequency of 726 compared with the other four gene models (Fig. 7A), and it was therefore applied to generate the gene signature for NRG_score calculation. We then calculated the c-index to validate the accuracy of NRG_score in survival prediction. The c-index values for the TCGA dataset, meta-GEO, GSE33113, GSE14333, GSE37892 and GSE39582 were 0.702, 0.568, 0.468, 0.621, 0.630, and 0.555, respectively (P < 0.05, Fig. 7B). The high-risk group in the TCGA dataset, meta-GEO, GSE14333, GSE37892 and GSE39582 had a worse survival rate than the low-risk group (Additional file 7: Fig. S7B). These results demonstrated the predictive power of the signature for survival in 5 datasets, with the exception of GSE33113. Finally, we constructed the NRG_score as follows: Risk score = (−0.004956222 × DHX15 expression) + the analogous coefficient × expression terms for the remaining 12 model genes.

We next explored the differences in NRG_score between the NRCs and between the necroptosis phenotype-related gene clusters, which showed the highest NRG_score in NRC1 and gene-cluster A, consistent with the prognostic results (Fig. 7C, D). The distribution plot of the risk of NRG_score showed that the death rate increased with increasing NRG_score (Fig. 7E). The survival analysis revealed that patients with low NRG_score showed improved overall survival (log-rank test, P < 0.0001; Fig. 7F). Additionally, the 1-, 2-, 3-, and 5-year survival rates of NRG_score were reflected by AUC values of 0.699, 0.730, 0.724, and 0.767, respectively (Fig. 7F). Subsequently, we validated the prognostic predictive ability of the NRG_score in external datasets (meta-GEO, GSE14333, GSE37892), which showed that patients could be dichotomized into low- and high-risk subgroups by using the aforementioned formula from the training set (Additional file 8: Fig. S8A-S8B). Moderate AUC values were reproduced in GSE14333 and GSE37892 for the prediction of 1-, 2-, 3-, and 5-year survival using the NRG_score (Additional file 8: Fig. S8C). In addition, we also plotted K-M survival curves and calculated the AUC values for a cohort from FUSCC based on NRG_score. The results showed that the high-risk score group displayed a worse prognosis (log-rank test, P = 0.0077; Fig. 7G), and the AUC values at 1, 2, and 3 years were 0.672, 0.624 and 0.603, respectively. Taken together, the NRG_score could be applied to predict the survival of CRC patients.

Because GSE39582 contained patients who underwent adjuvant chemotherapy, we then examined whether the NRG_score could predict the response to adjuvant chemotherapy (ADJC). The results showed that patients receiving chemotherapy had a higher NRG_score (Fig. 7H). Subsequent survival analysis showed that the low-score group without ADJC manifested better overall survival. However, patients receiving ADJC in both the high- and low-score groups had poor survival (Fig. 7I). As presented above, patients with a high NRG_score tended to have more advanced stage disease, which might partially explain why patients with a high score who received ADJC showed poor survival. However, the results for patients with a low score who received ADJC might indicate that these patients do not benefit from ADJC. We also calculated the risk score (see Methods) in the single-cell dataset curated from GSE178318 [38], which contained three patients treated with chemotherapy (PC: preoperative chemotherapy) (COL15, COL17, and COL18) and three treatment-naïve patients (COL07, COL12, and COL16) (Fig. 8A, B). We observed that most of the treated samples' scores were high, which was similar to the bulk transcriptomic analysis (Fig. 8C).
Finally, we assessed the transcriptional signatures between the high and low NRG_score groups. The expression levels of the 33 NRGs and the 13 model genes between the high- and low-risk groups in the TCGA and meta-GEO cohorts are shown in Fig. 7J, K and Additional file 8: Fig. S8D, E.
Evaluation of TME between the high- and low-risk groups
As presented by the immune scores of representative gene signatures in Fig. 8D, a high NRG_score was negatively related to T cells and cytotoxic CD8 + T cells, while it was positively correlated with myofibroblasts and the TGF-β pathway, suggesting that the high-score group exhibited a suppressive immune microenvironment. For molecular classifications, we observed that the low NRG_score group was enriched with more MSI-H tumors (Fig. 8E). Since the infiltration level of cytotoxic CD8 + T cells predicts the response to immunotherapy, we explored the relationship between NRG_score and representative genes of cytotoxic CD8 + T cells, such as GZMA and IFNG (Fig. 8F, G). The results showed that the low NRG_score group showed upregulation of GZMA and IFNG (Fig. 8F, G). These results suggested that patients in the low-score group might exhibit an effective response to immunotherapy because of their high infiltration level of cytotoxic CD8 + T cells and MSI-H status.
Imputed drug sensitivity score in necroptosis-related phenotypes
We next evaluated the differences in drug susceptibility between the high- and low-risk groups. Differential analysis demonstrated that the imputed scores of 89 drugs from Sanger's Genomics of Drug Sensitivity in Cancer (GDSC) v2 [42] were significantly different (with imputed score elevation for 86 drugs and decline for 3 drugs) in CRC tumors relative to normal samples (Fig. 9A). Afterwards, we selected drugs currently adopted to treat CRC in clinical practice to evaluate the drug sensitivity of patients in the high- and low-risk groups [32] (Fig. 9B). Interestingly, we found that patients in the high-risk group had higher imputed scores for irinotecan, afatinib, sapitinib and gefitinib, suggesting that these patients might not respond to the aforementioned drugs effectively (Fig. 9A, B). Thus, patients with different NRG_score might respond to drugs differently.
We also evaluated the drug susceptibility among the three NRCs. The imputed scores of 190 drugs are shown in Fig. 9C. Our results showed that there were significant differences in the imputed scores of 5-Fluorouracil, Oxaliplatin, Irinotecan, Gefitinib and Afatinib among the three NRCs (Fig. 9D). For example, the high imputed scores of 5-Fluorouracil and Oxaliplatin in NRC3 suggested that these patients might not respond effectively to these two drugs, while high scores indicated that patients in NRC1 might not respond to Irinotecan, Gefitinib, and Afatinib (Fig. 9D). Taken together, these results indicated that patients within different NRCs might present discrepant sensitivity to chemotherapeutic drugs.

Fig. 7 Construction and validation of the prognostic NRG_score. A Generation of the ten gene groups after 1000 iterations. The gene model with 13 genes was selected to construct the signature for NRG_score owing to its highest frequency of 726 compared with the other four gene models. B The c-index of both the training and testing sets. C Alluvial diagram of NRCs in groups with different gene clusters and NRG_score groups. D Barplots show the risk score among the three NRCs and the three gene clusters. The statistical difference among the three clusters was compared through the Kruskal-Wallis H test. *P < 0.05; **P < 0.01; ***P < 0.001. E Ranked dot and scatter plots showing the NRG_score distribution and patient survival status. F, G Kaplan-Meier analysis of the survival rate between the two groups. The high and low groups were divided by the median value of the NRG_score (left panel). ROC curves to predict the sensitivity and specificity of 1-, 2-, 3-, and 5-year survival according to the NRG_score (right panel). H Barplot shows the NRG_score between groups with and without adjuvant chemotherapy (ADJC). The statistical difference between the two groups was compared through the Wilcox test. *P < 0.05; **P < 0.01; ***P < 0.001. I Survival analysis among four patient groups stratified by both NRG_score and treatment with adjuvant chemotherapy (ADJC). J, K Differences in the expression of the 33 NRGs and 13 genes between the two gene subtypes. The statistical difference between the two groups was compared through the Wilcox test. *P < 0.05; **P < 0.01; ***P < 0.001
Developing a nomogram to predict patients' survival
Next, the NRG_score and disease stage (TNM stage) were incorporated to establish a nomogram to predict 1-, 3-, and 5-year RFS; the result is shown in Fig. 10A. The AUC of the nomogram for survival at 1, 3, and 5 years showed high accuracy in the training set (TCGA), the testing set (meta-GEO), and three validation sets (GSE14333, GSE37892 and the FUSCC cohort) (Fig. 10B-F). In TCGA, the 1-, 3-, and 5-year AUC values of the nomogram were 0.791, 0.795 and 0.765, respectively; in the testing set (meta-GEO), they were 0.740, 0.731, and 0.715. The 1- and 3-year AUC values in TCGA were higher than those based on the NRG_score alone (Fig. 7F), and the AUC values of the nomogram (at 1, 3 or 5 years) in the three validation sets (GSE14333, GSE37892 and FUSCC) were also higher than those based on the NRG_score alone (Additional file 8: Fig. S8C and Fig. 7G). Furthermore, the AUC values of the nomogram at 1, 3 and 5 years in the TCGA, meta-GEO, and GSE14333 sets, and at 3 and 5 years in GSE37892, were higher than those of the TNM stage system, suggesting that our nomogram has an advantage in survival prediction over TNM staging (Additional file 9: Fig. S9A-D). The calibration plots demonstrated that the nomogram performed similarly in both the training and testing sets (Additional file 9: Fig. S9E-I).
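The AUC values quoted above are horizon-specific (1-, 3- and 5-year). A naive way to approximate such horizon-specific discrimination for any risk score or nomogram output is sketched below: outcomes are dichotomized at each horizon and patients censored before the horizon are dropped. This ignores the censoring corrections used by dedicated time-dependent ROC estimators, and the cohort is randomly generated for illustration, not the TCGA, meta-GEO or FUSCC data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_at_horizon(risk, time, event, horizon):
    """Naive AUC at a fixed horizon (same time units as `time`).

    Cases: relapse/death before the horizon; controls: followed beyond it.
    Patients censored before the horizon are dropped (no IPCW correction).
    """
    case = (time <= horizon) & (event == 1)
    control = time > horizon
    keep = case | control
    return roc_auc_score(case[keep].astype(int), risk[keep])

# Toy cohort standing in for the TCGA / meta-GEO / FUSCC sets.
rng = np.random.default_rng(2)
n = 500
risk = rng.normal(size=n)                                # e.g. nomogram points or NRG_score
time = rng.exponential(scale=4.0 * np.exp(-0.7 * risk))  # higher risk -> earlier events
event = (rng.random(n) < 0.8).astype(int)                # roughly 20% censored
for horizon in (1, 3, 5):
    print(horizon, round(auc_at_horizon(risk, time, event, horizon), 3))
```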
Discussion
Cell death has recently attracted increasing attention for its potential role in triggering anti-tumor immunity [43]. Like apoptotic cells, necroptotic tumor cells have been shown to induce anti-tumor immunity through their interaction with diverse immune cell types [44,45]. Although various studies have revealed the regulation of NRGs in the TME [46,47], the landscape of TME characteristics mediated by NRGs has not been comprehensively characterized.
In this study, we introduced necroptosis-related phenotypes of the TME in CRC. Based on 33 NRGs and DEGs associated with necroptosis-related phenotypes, we stratified CRC samples into three molecular phenotypes (NRC1-3). However, only two of these classifications remained stable according to their immune infiltration patterns. Therefore, we postulated that there are two stable TME patterns mediated by necroptosis in CRC: a phenotype characterized by little TME cell infiltration but with EMT/TGF-β pathway activation, and another characterized by remarkable stromal cell infiltration together with EMT and TGF-β signaling pathway activation, corresponding to an immune-excluded, CMS4-like phenotype. To confirm these two stable phenotypes related to necroptosis, we performed single-cell transcriptomic analyses in CRC datasets and further validated them in LUAD datasets. We observed that the NRC1 score, represented by score α, was increased at tumor metastatic sites, while score β was elevated in TME cells. We thus postulated that the EMT phenotype in NRC1 is mainly exhibited by tumor cells, while the CMS4-like and EMT phenotype in NRC3 is predominantly caused by its remarkable stromal cell infiltration. Moreover, a high α score might be used to predict the risk of CRC metastasis.
Fig. 8 Evaluation of TME between the high- and low-risk groups. A UMAP plot shows 113,331 single cells of the GSE178318 cohort. B Bar plot shows the proportion of samples corresponding to treatment (PC: preoperative chemotherapy; nPC: non-preoperative chemotherapy). C Dot plot shows the distribution of samples from GSE178318 based on their risk score. D Scores of immune-related gene signatures between the high- and low-risk groups. E Differences in molecular subtypes between the low- and high-risk groups. F, G Expression of GZMA and IFNG between the low- and high-risk groups. The statistical difference between the two groups was compared through the Wilcoxon test. *P < 0.05; **P < 0.01; ***P < 0.001

Fig. 9 Imputed drug sensitivity score of necroptosis-related phenotypes. A The number of drugs in GDSC v2 that were significantly upregulated or downregulated (P < 0.05) in the high-risk score group versus the low-risk score group among each of 24 drug categories in the TCGA cohort. B Barplot shows the imputed drug sensitivity score between the high- and low-risk groups. The statistical difference between the two groups was compared through the Wilcoxon test. *P < 0.05; **P < 0.01; ***P < 0.001. C Dot plot shows the imputed drug sensitivity score among the three NRCs. D Barplot shows the imputed drug sensitivity score among the three NRCs. The statistical difference among the three clusters was compared through the Kruskal-Wallis H test. *P < 0.05; **P < 0.01; ***P < 0.001

Previous reports suggested that the immune context of the TME could promote EMT. MDSCs, well known as immature immune cells, are associated with poor prognosis in cancers because they suppress T cell activation [48]. TGF-β production by MDSCs has been experimentally shown to have a profound impact on tumor metastasis [49]. Stromal cells such as fibroblasts have also been reported as a major source of TGF-β [50,51]. TGF-β expressed by cancer-associated fibroblasts (CAFs, such as myofibroblasts) induces the recruitment of more fibroblasts and might thus lead to a pro-tumorigenic and immunotolerant status [52]. Adaptive immune cells such as CD8+ T cells responding to TGF-β may also contribute to an immunosuppressive environment. Since NRC3 was infiltrated by stromal cells and MDSCs, patients in NRC3 might not respond to PD-1/PD-L1 therapy. Fortunately, NRC3 was also remarkably infiltrated by activated T cell populations such as CD4+ and CD8+ T cells, which should be related to anti-tumor immunity. High expression of PD-1/PD-L1 was observed in NRC3, which has been reported to predict response to immune checkpoint inhibitors [53]. Therefore, interventions targeting stromal cells and MDSCs, together with downregulation of TGF-β, may help patients within NRC3 regain an effective response to immunotherapy. Without considering the TME, the role of necroptosis in tumor cells has not been comprehensively understood either [54]. Previous findings showed that RIPK3 was upregulated in late-stage breast tumors, implying a promising role of necroptosis in tumor progression [54,55]. In NRC1, we observed upregulation of RIPK3 (Fig. 5A), EMT activity (Additional file 3: Fig. S3A and S3D), and enrichment of advanced stages (15.60%; Fig. 2E), suggesting that RIPK3 may play an indispensable role in CRC progression. Emerging evidence demonstrated that RIPK3 upregulation could potentiate chemotherapeutic effects by inducing necroptosis [56]. Therefore, RIPK3 may be a key mediator of the EMT and chemo-sensitive phenotype of patients within NRC1.
Future experimental research is required to investigate the key regulator RIPK3 in CRC development. We also constructed a robust and effective prognostic NRG_score and demonstrated its predictive ability for CRC survival through integrated analyses of public databases and a patient cohort from FUSCC. Patients with low- and high-risk NRG_score displayed significantly different clinicopathological characteristics, prognosis, immune infiltration and drug susceptibility. The high-risk group was highly infiltrated by myofibroblasts and characterized by TGF-β pathway activation, whereas the low-risk group was enriched in cytotoxic T cells. We further explored cytotoxic genes such as GZMA and IFNG in public databases, supporting the predictive ability of a low risk score for response to immunotherapy. Interestingly, the exploration of imputed drug scores showed that patients in the high- and low-risk groups might experience different chemotherapeutic efficacy, suggesting that the NRG_score could be used for patient selection when considering ADJC and that there might be potential molecular targets based on NRGs. Finally, by integrating the NRG_score and tumor stage, we established a quantitative nomogram, which further improved performance and facilitated the use of the NRG_score. Overall, the NRG_score we constructed can serve as an accurate model for prognostic stratification of CRC patients and a good predictor of response to immunotherapy and chemotherapy. In summary, we comprehensively analyzed the mutations and expression patterns of NRGs in CRC; NRCs and the NRG_score were established and their associations with the TME were explored; and sensitivity to chemotherapy and response to immunotherapy were probed. These integrated analyses highlight the central role of necroptosis in TME infiltration of CRC. Moreover, we put forward specific genes related to the EMT phenotype of tumor cells and genes related to stromal cell infiltration in the TME, which provide insight into the mechanistic link between necroptosis and TME infiltration. However, there are still some limitations: (1) the study was based on retrospective data, so selection bias may be unavoidable; (2) although we validated our findings in validation sets based on public datasets, validation in a prospective study will add further credibility; (3) the molecular mechanisms behind these observations require future exploration. | 2022-05-20T13:36:08.765Z | 2022-05-19T00:00:00.000 | {
"year": 2022,
"sha1": "e27769dddbc983151c2ab663441a53d83494623f",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b833bad00ffeecbda67372916edaff7804a6cdf8",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
78545740 | pes2o/s2orc | v3-fos-license | Breast reconstruction with autologous tissue : 380 consecutive cases
Article received September 02, 2014. Article accepted May 28, 2015. Introduction: Breast reconstruction plays an important role in the treatment of breast cancer. Several options are available for autologous breast reconstruction, the more widespread being the transverse rectus abdominis myocutaneous (TRAM) flap, the latissimus dorsi myocutaneous (LDM) flap, and the local muscle (LM) flap. The objective of this work was to demonstrate the initial experience in breast reconstruction with autologous tissue, with or without implants. Method: A retrospective analysis was performed of medical charts of 367 patients who underwent immediate and delayed breast reconstruction with the unipediculated TRAM flap, LD flap, or LM flap. Results: Three hundred eighty breasts were reconstructed. There were 156 TRAM flap procedures, 179 LD flap procedures, and 49 other techniques. The size of the implants ranged between 155 cc and 640 cc. The mean age of the patients was 49.33 years. One hundred ninety-seven patients underwent surgery on the right side and 169 on the left; 14 patients underwent bilateral reconstruction. Reconstruction was immediate in 80% of the patients. There were few moderate (partial dehiscence of the wound requiring suturing) and severe complications (flap liponecrosis, extrusion of the implant after infection, and pulmonary thromboembolism) and some minor complications that did not require surgical correction. Conclusions: Breast reconstruction with autologous tissue provides the plastic surgeon with a consistent and reliable method of breast reconstruction, with very satisfactory aesthetic results and low morbidity in selected patients. ■ ABSTRACT
INTRODUCTION
Breast cancer remains one of the most common malignant tumors in women and is one of the major causes of cancer-related mortality.
The adoption of more urbanized lifestyles and changes in reproductive behavior may be involved in the increased worldwide incidence of this cancer 1 .
Despite the current emphasis on conservative breast surgery, the rates of mastectomy remain around 30% 2 . Mastectomy is often associated with significant psychological sequelae, including body-image distortion and sexual dysfunction. The restoration of the breast allows emotional and physical recovery, even partially, after the trauma inflicted by the disease 2 .
Breast reconstruction plays an important role in the treatment of breast cancer.The indication to implement it or not and the choice of the technique are individualized decisions, which should take into account the medical staff and the patient.
Advances in breast reconstruction and mastectomy techniques have increased expectations as to the outcome.Options include placement of breast implants or use of autologous tissue.The advantages of autologous reconstruction are the creation of a soft breast cone, with natural ptosis, which tends to be more similar to the contralateral breast, either with or without the use of a bra 3,4 .In addition, the thick dermis of autologous tissue allows for excellent results in reconstructions of the nipple-areolar complex 5 .The results obtained with reconstruction methods tend to change less over time and do not require periodic reviews as seen in reconstructions exclusively with implants 4 .
Several options of donor areas of tissue are available for autologous reconstruction.The more widespread are the transverse rectus abdominis myocutaneous (TRAM) flap, the latissimus dorsi myocutaneous (LDM) flap, the gluteus maximus flap, and the local muscle (LM) flap.
METHODS
All 367 patients who underwent surgery between July 2006 and January 2014 were analyzed.This retrospective study followed the Principles of Helsinki.The patients underwent immediate or delayed breast reconstruction with the use of the following techniques: the TRAM flap, LDM flap, LM flap, and lateral thoracodorsal (Hölmstrom) flap; the last 3 procedures were associated with the insertion of silicone breast implants.When implants were used, they were naturalshaped, extra-high-projection, textured-surface implants.
The selection of the technique to be employed in each case initially followed the criteria outlined, and the final decision was taken together with the patient.Patients without abdominal obesity, non-smokers, and those without prior abdominal scarring (with the exception of a Pfannenstiel incision) were selected to be submitted to reconstruction with the TRAM flap.Patients with any of these conditions or who expressed the desire to become pregnant after the treatment were selected for reconstruction with the LDM flap.Finally, patients who requested smaller surgeries or had major comorbidities such as lung or heart diseases underwent reconstruction with the LM flap or Hölmstrom flap.
The technique used for reconstruction with the latissimus dorsi was described by Bostwick and Scheflan 6 , and involved creation of a transverse skin island on the dorsum and insertion of an implant in a muscular pocket between the latissimus dorsi flap and the greater pectoral muscle.Between 20 and 25 adhesion sutures were performed in the donor area to reduce the incidence of seroma and tension at the edges of the incision.
The technique used for reconstruction with the TRAM flap was described by Hartrampf 6 and involved creation of a horizontal dermofat island, unipediculated and contralateral to the mastectomy defect, using areas 1 and 2 in full for the reconstruction of the breast and up to 30% of area 3. Area 4 was always discarded.Prior flap autonomization of the flaps was not followed.Reconstruction of the abdominal wall was performed with direct closure of the aponeurosis above the navel and affixing and suture of polypropylene mesh in the infraumbilical paramedian area.
Reconstruction with a local muscle flap was performed by detaching the greater pectoral muscle, serratus anterior muscle, and aponeurosis of the rectus abdominis, and constructing a muscular pouch between these structures to completely cover the silicone implant.In cases where it was necessary to recruit adjacent skin, the technique described by Hölmstrom with a lateral thoracic fasciocutaneous flap was used 6 .
In all cases, active drainage was performed in donor and recipient areas.The average length of hospital stay was 2 days.The surgical time ranged from 1 hour (LM flap) to 3 hours (bilateral LDM flap), with an average of 1 hour and 40 minutes (unilateral LDM flap) or 1 hour and 50 minutes (TRAM flap).
Prophylaxis for venous thrombosis was performed in all patients, following the protocol of the Brazilian Society of Angiology and Vascular Surgery 7 .Antibiotic prophylaxis with 2nd generation cephalosporin (intravenous) was prescribed for 2 days, followed by 1st generation cephalosporin (oral) until the drain was removed.
In patients reconstructed with the TRAM flap, the use of an abdominal and breast shaper after surgery was maintained continuously for 30 days after surgery and only during the day for 60 days.No shaper or special bra was used in patients reconstructed with LDM or LM flaps.
RESULTS
In total, 367 patients underwent surgery and 380 breasts were reconstructed in 384 surgical procedures.Regarding the surgical technique, there were 156 TRAM flap procedures (Figures 1 to 5), 179 LDM flap procedures together with insertion of a silicone breast implant (Figures 6 to 10), and 49 other techniques such as the LM flap (40 procedures) (Figure 11) and Hölmstrom flap (9 procedures) (Figure 12).The size of the implants ranged between 155 cc and 640 cc; the most frequently used volumes were 365 cc and 425 cc with the extra-high projection profile.
The age of the patients varied between 24 and 88 years, with a mean age of 49.33 years. Among those who underwent reconstruction with the TRAM flap, the mean age was 48.55 years; with the LDM flap, 48.93 years; and with the local flap and implant, 50.3 years.
One hundred ninety-seven patients underwent surgery on the right side and 169 on the left; 14 patients underwent bilateral reconstruction.Four cases underwent more than one type of technique because of failure of the first reconstruction with the same surgical team.Of note, in 4 of the reconstructed breasts, tumors were discovered on pathological examination of tissue removed in aesthetic mammoplasty, with no previous suggestive clinical or radiological signs.Ten other patients were referred from other services for reconstruction with autologous tissue after failure of reconstruction with skin expanders.
The time of follow-up of patients varied between 5 months and 7 years, and immediate reconstruction was implemented in 80% of patients (295 patients).
Moderate complications (5 cases) included dehiscence of part of the abdominal incision that needed resuturing (2 cases), skin necrosis at the mastectomy site that required a full-thickness skin graft on the TRAM flap (1 case), and abdominal bulging that required surgical correction with plication of the mesh screen and insertion of a new screen (2 cases) (Figure 13).
Serious complications included total necrosis of the flap (2 TRAM cases and 1 LDM case) (Figure 14), total liponecrosis of the flap with preservation of the skin envelope (2 TRAM cases), extrusion of the implant after skin necrosis of a skin-sparing mastectomy followed by infection (11 LDM cases and 1 LM case), pulmonary thromboembolism (1 LM case), and deep venous thrombosis (1 case).There were no deaths.
We observed 8 cases of capsular contracture around the implant; 3 of these patients underwent radiotherapy after immediate reconstruction while 5 did not.
There were minor complications, such as small dehiscence, seroma, or liponecrosis, which did not require secondary surgical procedures (Figure 15).
Smoking was associated with massive liponecrosis of the TRAM flap in 2 patients, as well as loss of the implant in 7 patients submitted to the LDM flap.In these patients, the myocutaneous flap remained intact, but infection was introduced through skin necrosis of the skin-sparing mastectomy used in immediate reconstruction.
There were, in total, 13 patients who suffered skin necrosis at the mastectomy site: 11 who underwent reconstruction with the LDM flap and developed loss of the implant, 1 with an LM flap, and 1 who underwent reconstruction with a de-epidermized TRAM flap, which required a skin graft to promote healing and not delay the initiation of adjuvant therapy.With respect to the initiation of complementary therapy, delays occurred in 2 patients with delayed healing, one being a smoker and the other a patient with psoriasis.
The patient who had pulmonary thromboembolism had a previous history of deep venous thrombosis, and the precautions recommended by the angiology team were all implemented in the pre-, trans-, and post-operative periods.She progressed satisfactorily and without sequelae and, after 4 years, was submitted to balancing mammoplasty, without complications.
Complications in the donor area of the TRAM flap included abdominal bulging without hernia in 5 patients (1.7%), 2 of which required surgical correction, and seroma formation on the abdomen (4.6%), which required surgical treatment in 2 cases and was resolved with repeated punctures in 5 cases.Regarding the donor area of the LDM flap, 4% of patients developed seroma, treated with up to 2 punctures (7 cases).There was, however, seroma formation in the receiving area and axilla in 1.5% of cases and hematoma formation in the immediate postoperative period in the same region in 3 cases.
Among the 295 patients who underwent immediate reconstruction, 187 were submitted to radiotherapy in the postoperative period, 192 to chemotherapy, and 250 to hormone therapy.
Among the 75 patients who underwent late reconstruction, 57 had been previously submitted to chemotherapy and 49, radiotherapy.None had extensive radiodermatitis or ulceration of the skin at the time of the initial consultation.The interval between mastectomy and reconstruction ranged between 5 months and 12 years, with a mean of 3 years.
There was no contraindication for reconstruction in any of the patients presented for the initial assessment.In the cases of comorbidities, we chose to perform surgical techniques with lower morbidity.
DISCUSSION
Breast reconstruction has evolved significantly since its first reports, and currently we have an extensive arsenal of techniques that include new autologous tissue flaps and new techniques in the use of implants and expanders. Thus, the main objective, which initially was only to reconstruct the breast cone, has shifted to reconstructing the breast as naturally as possible and as similarly as possible to the contralateral breast 5 .
In addition, there has been a significant increase in the incidence of immediate breast reconstructions over the past 10 years, having grown from 20% to 31.8%, according to a report by Nelson et al. 8 .Although some reports associate this type of reconstruction with a higher incidence of complications and hospital readmission, Nelson et al., in a broad and general study, concluded that these disadvantages are only associated with obese patients and smokers.
Autologous breast reconstruction often provides a more favorable aesthetic outcome than other reconstruction options.The selection of the techniques to be used in reconstruction candidates in this series followed the criteria of the team, which are in agreement with published data 3,4 .The epidemiology of the sample showed very young patients already being affected by breast cancer 1 , encompassing economically active age groups, often without children, which necessitates special consideration regarding the desire of the patient to become pregnant after the end of treatment and the choice of the technique to be employed.
The methods of reconstruction used here are not free of complications, even though the incidence of severe complications is low.In this study, the incidence of flap necrosis is in accordance with that reported in the literature, although there is a wide variation in the values of each study (1%-24%) 4,9,10 .
Despite the advent of microsurgery, advances in surgeries of perforating vessels, and increased complexity of the procedures, the pedicled TRAM flap is still one of the most common methods of autologous reconstruction performed to date 5 .
Comparative studies between free and pedicled TRAM flaps found no significant differences in the function of the abdominal wall between the 2 groups 11,12 .Although some studies reported an objective advantage of deep inferior epigastric perforator (DIEP) flaps, this has not reflected in difficulties in performing activities of daily living 11,13 , as also found in our work.The abdominal bulging found did not prevent the patients from performing their daily professional or leisure activities; however, there was no patient in our study who performed intense physical activity prior to surgery.
The advantages of reconstruction with the TRAM flap include the achievement of breast volume in one surgical step, the creation of a soft and naturally ptosed breast, greater control over symmetry with the contralateral breast, and aesthetic benefits in the abdominal donor area 4 .
The main indication for reconstruction with a free TRAM flap-for which this flap is undeniably better than the pediculated flap-is patients with a higher risk of complications, such as obese patients and smokers, due to increased vascularization 4,10,12 .In this series, with these patients, we chose to use other surgical techniques, such as reconstruction with the LDM flap.
Reconstruction with the LDM flap and implant is an old and reliable technique, with a low rate of complications.In our study, the complications were restricted to 4 cases of seroma on the dorsum, resolved by puncture, and 4 cases of exposure with loss of the implant due to skin necrosis at the mastectomy site with resultant contamination and local infection.This rate of loss of the implant due to infection (6%) is similar to that found in the literature (7%-9%) 10,14 , while the rate of seroma is significantly lower.Branford et al. 15 described a rate of up to 79% for seroma on the dorsum after implementation of the extended LDM flap without use of adhesion sutures; in addition, other more severe complications described in the same study, such as 30% for necrosis in the donor area and 1% for total necrosis of the flap, were not observed in our study.Miranda et al. 16 reported a similar rate for seroma-approximately 72% with use of the conventional LDM flap.
We also observed an incidence of 20% for capsular contracture, with and without radiotherapy-a similar rate to that found in studies that assessed contracture after radiotherapy in immediate breast reconstruction 14 .
The analysis of complications between the groups shows a higher incidence of complications (small dehiscence and liponecrosis) in TRAM flaps than in LDM flaps, but when there was a complication with the LDM flap, however slight the dehiscence, there was loss of the implant, with evident aesthetic loss of reconstruction.There is a need, therefore, to evaluate the condition of the remaining skin at the end of the mastectomy and debride all the areas that have no reliable vascularization to minimize post-operatory necrosis 14 .
A major advantage of reconstruction with autologous tissue is that the deleterious effect of radiotherapy applied either before or after the reconstruction has less impact on the final aesthetic characteristics of the reconstructed breast 14,17 .In cases in which immediate breast reconstruction is indicated despite the certainty of adjuvant radiotherapy, the choice of reconstruction technique should be based on tissue characteristics and blood supply.Techniques involving reconstruction with autologous tissue should be given priority because they reflect higher vascularization and resistance to radiation 18,19 .
A disadvantage of the combination of reconstruction with autologous tissue and radiotherapy is the possibility of liponecrosis in the flap being mistaken for, or masking, tumor relapse, because a definite diagnosis may not be possible on clinical examination. However, Matos et al. 20 demonstrated unequivocal findings that characterize liponecrosis and facilitate the diagnosis with magnetic resonance imaging.
CONCLUSION
Breast reconstruction with autologous tissue provides the plastic surgeon with a consistent and reliable method of breast reconstruction, with very satisfactory aesthetic results and low morbidity in patients selected for this technique.
Figure 1. Postoperative appearance of the left breast after delayed reconstruction with the transversus rectus abdominis myocutaneous (TRAM) flap involving reconstruction of the nipple-areolar complex (NAC) and contralateral mammoplasty (frontal view).
Figure 2. Postoperative appearance of the left breast after delayed reconstruction with an autologous flap involving reconstruction of the NAC and contralateral mammoplasty (profile view).
Figure 3. Postoperative appearance of the left breast after immediate reconstruction with an autologous flap involving reconstruction of the NAC and contralateral mammoplasty (frontal view).
Figure 4. Postoperative appearance of the left breast after immediate reconstruction with an autologous flap involving reconstruction of the NAC and contralateral mammoplasty (profile view).
Figure 5. Postoperative appearance of the left breast after immediate reconstruction with the TRAM flap in skin-sparing mastectomy after radiotherapy.
Figure 6. Transoperative representation of immediate reconstruction of the left breast.
Figure 7. Postoperative appearance of the left breast after immediate reconstruction with the latissimus dorsi myocutaneous (LDM) flap in skin-sparing mastectomy.
Figure 9. Postoperative appearance of the left breast after immediate reconstruction with the LDM flap in nipple-sparing mastectomy.
Figure 10. Postoperative appearance of the left breast after immediate reconstruction with the LDM flap showing the position of the scar on the dorsum.
Figure 11. Postoperative appearance of the left breast after immediate reconstruction with the local muscle flap involving prosthetic reconstruction of the NAC and contralateral mammoplasty.
Figure 14. Total necrosis of the TRAM flap in a smoking patient. | 2019-03-16T13:10:25.162Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "2ae7d74ade5fcc456d394604abfe177ce190f42a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5935/2177-1235.2015rbcp0164",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "2ae7d74ade5fcc456d394604abfe177ce190f42a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256038682 | pes2o/s2orc | v3-fos-license | Dark matter beams at LBNF
High-intensity neutrino beam facilities may produce a beam of light dark matter when protons strike the target. Searches for such a dark matter beam using its scattering in a nearby detector must overcome the large neutrino background. We characterize the spatial and energy distributions of the dark matter and neutrino beams, focusing on their differences to enhance the sensitivity to dark matter. We find that a dark matter beam produced by a Z′ boson in the GeV mass range is both broader and more energetic than the neutrino beam. The reach for dark matter is maximized for a detector sensitive to hard neutral-current scatterings, placed at a sizable angle off the neutrino beam axis. In the case of the Long-Baseline Neutrino Facility (LBNF), a detector placed at roughly 6 degrees off axis and at a distance of about 200 m from the target would be sensitive to Z′ couplings as low as 0.05. This search can proceed symbiotically with neutrino measurements. We also show that the MiniBooNE and MicroBooNE detectors, which are on Fermilab’s Booster beamline, happen to be at an optimal angle from the NuMI beam and could perform searches with existing data. This illustrates potential synergies between LBNF and the short-baseline neutrino program if the detectors are positioned appropriately.
Introduction
Dark matter (DM) provides solid evidence for physics beyond the Standard Model (SM), but its particle nature remains unknown. A central question is whether DM particles experience interactions with ordinary matter beyond gravity. Direct detection experiments [1] have imposed impressive constraints on the interactions between nucleons and DM particles of mass larger than about 5 GeV. These experiments lose sensitivity quickly at lower masses because light dark matter particles moving at the viral velocities of our galactic halo would yield very low recoil energies in collision with nuclei or atoms. Interactions of DM with quarks or gluons are also explored at high-energy colliders, for example through monojet searches [2][3][4][5][6][7][8][9]. If these interactions are due to a light mediator, however, the collider searches are less sensitive.
Therefore, the question of how to conduct light dark matter searches is urgent and compelling. A potentially promising direction is to use proton fixed-target experiments to probe DM couplings to quarks [10][11][12][13][14][15] (other proposals for light DM searches have been explored in [16][17][18][19][20][21][22][23][24][25][26][27]). An interesting type of mediator is a leptophobic Z boson. For a Z mass in the ∼ 1-10 GeV range, the limits on its coupling to quarks are remarkably loose [28]. A dark matter beam originating from the decay of a leptophobic Z , produced by protons accelerated in the Booster at Fermilab, may lead to a signal in the MiniBooNE experiment [14]. This signal decreases fast for M Z above 1 GeV, because the Booster proton energy is only 8 GeV. By contrast, protons accelerated at 120 GeV in the Main Injector scattering off nucleons may produce a leptophobic Z as heavy as ∼ 7 GeV, and the DM particles originating in the Z decay may lead to neutral-current events in neutrino detectors [15].
Here we analyze the sensitivity of neutrino detectors to the DM beam produced in leptophobic Z decays. We focus on a high-intensity proton beam of ∼ 100 GeV, as that proposed at the Long-Baseline Neutrino Facility [29] (LBNF). We consider deep-inelastic neutral-current scattering as the main signal. The challenge of using neutrino facilities to look for a DM beam is that neutrino events represent an irreducible background. In [14] it is proposed to conduct a special run of the beam in which the magnetic horns are turned off, leading to a more dilute neutrino beam. Here we will take a different approach, namely to exploit the difference between the dark matter and a focused neutrino beam and consider a detector that is located accordingly. This search for dark matter does not disrupt the normal neutrino research program.
More specifically, we will see that the signal and main background contributions have very different energy and angular profiles, which can be exploited to enhance the signal significance. We perform a simple optimization study using the signal significance in order to determine the optimal position of a detector. We determine that an angle of approximately 6 degrees with respect to the decay pipe direction would maximize the sensitivity. Applying these results to the NuMI beamline, we find that the NOvA near detector, in spite of being located slightly off-axis, does not provide a sufficient suppression of the neutrino background.
The paper is structured as follows. Section 2 reviews the main features of the model considered. In section 3 we discuss the main differences between the DM signal and the neutrino background, paying special attention to their energy and angular distributions. In section 4 we identify the optimal off-axis location for a detector, based on the signalto-background expected ratio, and the χ 2 sensitivity contours for two close-to-optimal locations are presented. Our conclusions are presented in section 5. The computation of the neutrino flux due to kaon decays is outlined in the appendix.
Dark matter and a light Z boson
We consider a Z′ boson, associated with the U(1)_B gauge group, which couples to the quarks q = u, d, s, c, b, t and to a dark matter fermion χ:
$$\mathcal{L}_{Z'} \,=\, g_z\, Z'_\mu \left( \frac{1}{3}\sum_q \bar q \gamma^\mu q \,+\, z_\chi\, \bar\chi \gamma^\mu \chi \right) , \tag{2.1}$$
where g_z is the gauge coupling, and z_χ is the U(1)_B charge of χ. In the case where χ is a complex scalar, the $\bar\chi\gamma^\mu\chi$ term in eq. (2.1) is replaced by $i\chi^\dagger\partial^\mu\chi + \mathrm{H.c.}$ Additional fermions (referred to as "anomalons") charged under U(1)_B are necessary to cancel the gauge anomalies. Examples of anomalons are given in refs. [28, 30-32]. We assume that these do not have an impact on the Z′ phenomenology. The ratio of decay widths into χ's and quarks is
$$\frac{\Gamma(Z' \to \chi\bar\chi)}{\Gamma(Z' \to q\bar q)} \,=\, \frac{3\, z_\chi^2}{N_f(M_{Z'})}\; F_\chi\!\left(\frac{m_\chi^2}{M_{Z'}^2}\right) ,$$
where Γ(Z′ → qq̄) stands for the sum over the partial decay widths into all quarks, N_f(M_{Z′}) is the effective number of quark flavors of mass below M_{Z′}/2, and the function F_χ is defined by $F_\chi(x) = (1+2x)\sqrt{1-4x}$ for a fermionic χ. The effective number of quark flavors below M_{Z′}/2 takes into account the phase-space suppression for Z′ decays into hadrons, and thus is not an integer. For M_{Z′} in the ∼1-3.7 GeV range, N_f(M_{Z′}) ≈ 3, while for M_{Z′} in the ∼3.7-10 GeV range, N_f(M_{Z′}) ≈ 4, with large uncertainties for M_{Z′} near the s̄s and c̄c thresholds. The existing constraints on the Z′ coupling in the 1-10 GeV mass range are rather weak, given that this is a leptophobic boson:

1. Z′ exchange induces invisible decays of quarkonia, with a branching fraction proportional to g_z⁴ z_χ² [33], valid if m_χ < M_{J/ψ}/2 (with the analogous expression for the Υ). The 90% confidence level (C.L.) limits on invisible branching fractions are B(J/ψ → χχ̄) < 7 × 10⁻⁴ [34] and B(Υ → χχ̄) < 3 × 10⁻⁴ [35].
2. A kinetic mixing between the Z′ boson and the photon, −(ε_B/2) Z′_{μν} F^{μν}, arises at one loop from quark loops, and is loop suppressed at the 10 GeV scale [33]. As a result, the Z′ boson can be produced in e⁺e⁻ collisions, albeit with a very small rate. The BaBar limit [36] on Υ(3S) decay into a photon and missing energy has been reinterpreted [37] as a limit on e⁺e⁻ → γZ′ with the Z′ produced through its kinetic mixing. This limit is competitive with the one from Υ → χχ̄ decay only for M_{Z′} in the 4.6-5 GeV range.
3. Monojet searches [7] at hadron colliders set an upper bound on g_z [38]. This limit is almost independent of M_{Z′} in the range considered here, and it is weaker than the limit from Υ → χχ̄.
4. There is also a limit on g z from the requirement that the U(1) B gauge symmetry is anomaly free, which follows from the collider limits on anomalons [28]. This limit is rather stringent for M Z 3 GeV for a minimal set of anomalons, but with a larger anomalon set it becomes looser than the one from invisible J/ψ decays. Here we consider the latter case.
Overall, values of the gauge coupling g_z as large as of order 0.1 are allowed for M_{Z′} in the 1-10 GeV range, with the exception of small regions near the J/ψ and Υ masses. We will plot the strongest limits on g_z as a function of M_{Z′} in section 5 (figure 7). We will not discuss possible cosmological constraints on the parameter space which arise when χ is the dominant form of dark matter. Possible viable dark matter scenarios are discussed in [15].
Neutrinos versus dark matter at fixed target experiments
The search for a dark matter beam in a neutrino facility must deal with the neutrino background. To mitigate this, new physics searches need to be tailored to maximize the signal to background ratio (or the signal significance), by looking for particular signals and in particular regions of phase space. It is convenient to separate the production, which occurs mostly in the target, from detection, which takes place in a distant detector. In this section we discuss the production and detection mechanisms both for neutrinos and dark matter, emphasizing the main differences between them.
Production mechanisms for dark matter and neutrinos
In the model considered in this work, the dark matter is pair produced via the decay of a Z′ boson, of mass in the GeV range, resonantly produced in the target by proton scattering off nucleons, qq̄ → Z′ → χχ̄. Searching for mediators in the GeV range requires the use of energetic beams (in NuMI the protons have 120 GeV), and the production cross section is much smaller than that for mesons. This means that the problem of reducing the neutrino backgrounds produced in meson decays is nontrivial. Let us consider a Z′ of mass M_{Z′} that is produced in the target with an energy E_{Z′}. The energy of the dark matter particle χ in the final state can be derived from 2-body kinematics. In the lab frame it reads
$$E_\chi \,=\, \frac{E_{Z'}}{2\,\gamma^2\,(1-\beta\cos\theta)} \,=\, \frac{M_{Z'}^2}{2\,E_{Z'}\,(1-\beta\cos\theta)} \;, \qquad \gamma = E_{Z'}/M_{Z'} \;, \tag{3.1}$$
where β is the Z′ velocity, θ is the angle between the χ and Z′ momenta, and we have neglected the mass of the dark matter assuming that it is much smaller than the Z′ mass.
Since the transverse momentum of the initial qq system is small (we are only considering production at leading order), the Z is produced in the forward direction. As a result the angle of the dark matter with respect to the decay pipe can be directly identified with θ.
As will be discussed in detail in section 3.2, the main background is due to very energetic neutrinos reaching the detector. For neutrinos produced in two-body meson decays, a relation similar to eq. (3.1) holds, with the Z′ variables replaced by the parent meson variables and E_χ → E_ν; for kaons, for instance,
$$E_\nu \,\simeq\, \frac{E_K\left(1-m_\mu^2/m_K^2\right)}{2\,\gamma_K^2\,(1-\beta\cos\theta)} \;, \qquad \gamma_K = E_K/m_K \;. \tag{3.2}$$
Thus, in the case of pions, neutrinos emitted with a sizable angle have very low energies regardless of the parent pion energy because of the low pion mass in the denominator. This fact, which is exploited both in the T2K and NOvA experiments to get a narrow neutrino spectrum at low neutrino energies, will also be beneficial in our case to reduce the neutrino background at high energies. For off-axis angles larger than 2 degrees no significant number of energetic neutrinos coming from pion decays would reach the detector, assuming a (relatively well) collimated pion beam. We may henceforth consider only angles above 2 degrees and ignore backgrounds from pion decay.
Following the above argument, it is clear that our main background is going to come from neutrinos produced in kaon decays, which will lead to a more energetic flux of neutrinos off axis. Nevertheless, since m_K ≪ M_{Z′}, the resulting neutrino flux will still be much less energetic than the dark matter flux. This can be understood from eq. (3.2) and is illustrated in figure 1, where the energy of the daughter particle is shown as a function of the parent energy, both for Z′ and kaon decays. The results are shown for two different off-axis angles, which roughly correspond to the angles subtended by both the NOvA near detector and the MiniBooNE detector, measured with respect to the NuMI beamline.
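The contrast shown in figure 1 follows directly from eqs. (3.1) and (3.2) and can be reproduced with a few lines of code. The sketch below evaluates the daughter energy at a fixed off-axis angle for a 3 GeV Z′ and for a kaon (using K → μν kinematics); the chosen parent energies are arbitrary examples, and this back-of-the-envelope estimate is not the simulation used in the paper.

```python
import numpy as np

def daughter_energy(E_parent, m_parent, theta, E_star):
    """Lab-frame energy of a (nearly) massless daughter emitted at lab angle theta.

    E_star is the daughter energy in the parent rest frame:
    M_Zprime/2 for Z' -> chi chi (m_chi ~ 0), and
    (m_K**2 - m_mu**2)/(2*m_K) for K -> mu nu.
    """
    gamma = E_parent / m_parent
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return E_star / (gamma * (1.0 - beta * np.cos(theta)))

theta = np.radians(6.0)              # off-axis angle of the detector
m_zp, m_k, m_mu = 3.0, 0.494, 0.106  # masses in GeV

for E in (10.0, 30.0, 60.0):         # example parent energies in GeV
    E_chi = daughter_energy(E, m_zp, theta, m_zp / 2.0)
    E_nu = daughter_energy(E, m_k, theta, (m_k**2 - m_mu**2) / (2.0 * m_k))
    print(f"E_parent = {E:5.1f} GeV  ->  E_chi = {E_chi:5.2f} GeV,  E_nu = {E_nu:5.2f} GeV")
```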
So far we considered the decay of a Z boson or a kaon produced with a given energy. This qualitative understanding must be folded with their respective energy distributions as they exit the target. In order to compute the dark matter energy profile, we generate proton-proton collisions using MadGraph/MadEvent 5 [39] with NNPDF23LO1 parton distribution functions (PDFs) [40]. The implementation of the model into MadGraph has been done using the FeynRules package [41]. The LHE files have been parsed using PyLHEF [42].
Figure 2. Differential flux that reaches a MiniBooNE-size detector located 745 m away from the target, for DM particles (left) and for neutrinos (right), produced from 120 GeV protons scattering off nucleons at rest. Results are shown for two different off-axis angles, 2° (solid) and 6° (dashed).
Due to the short baselines considered for this setup, in the 100-700 m range, the size of the detector will also have an impact on the energy profile. For simplicity, we consider a generic spherical detector of a similar size to the MiniBooNE detector [43] (a radius R det = 6.1 m, and a mass of 800 tons).
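For orientation, the geometric acceptance of such a spherical detector is simply the solid angle it subtends at the production point. The snippet below evaluates this for the quoted 6.1 m radius at two of the baselines discussed in the text; it is a purely geometric estimate and ignores the finite size and angular spread of the production region.

```python
import math

def fractional_solid_angle(radius_m, distance_m):
    """Fraction of 4*pi subtended by a spherical detector (valid for L > R).

    Spherical-cap formula: Omega = 2*pi*(1 - cos(alpha)), with sin(alpha) = R/L.
    """
    alpha = math.asin(radius_m / distance_m)
    return 2.0 * math.pi * (1.0 - math.cos(alpha)) / (4.0 * math.pi)

for baseline in (200.0, 745.0):   # baselines discussed in the text, in meters
    print(f"L = {baseline:5.0f} m: {fractional_solid_angle(6.1, baseline):.2e} of the full sphere")
```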
The final dark matter flux expected at the detector can be seen in the left panel of figure 2 for a mediator with M Z = 3 GeV and a fermionic dark matter candidate with m χ = 750 MeV. Results are shown for two different values of the off-axis angle θ, as a function of the dark matter energy (see also figure 5 in ref. [15]). For comparison, in the right panel we show the neutrino flux as a function of the neutrino energy, for the same offaxis angles. Indeed, comparing the two panels of figure 2 we see that the difference in mass between a few GeV Z and kaons (and pions) offers an interesting handle to distinguish between dark matter and neutrinos, since the latter tend to be less energetic (especially when the detector is placed off-axis). This will also provide an extra relative suppression for the background with respect to the signal, since the interaction cross section at the detector grows with the energy of the incoming particle.
We have shown that the energy spectrum of dark matter that reaches an off-axis detector is harder than the neutrino spectrum reaching it. The second important difference between production of dark matter with a GeV mediator and neutrinos from kaon decay is going to be the angular dependence of the flux. While dark matter is produced from the decay of a spin 1 particle, neutrinos are produced from a spin zero meson, which will affect the angular distribution of the particles produced in the decay. Moreover, the probability for the daughter particle to be emitted in the direction of the off-axis detector will depend on its energy. For neutrinos this probability reads
$$\frac{dP_\nu}{d\Omega_{\rm lab}} \,=\, \frac{1}{4\pi\,\gamma^2\,(1-\beta\cos\theta)^2} \;,$$
where the 1/(4π) factor reflects isotropy in the kaon rest frame (with Ω the corresponding solid angle), θ is the lab-frame emission angle, and β refers to the parent velocity. The dark matter distributions, on the other hand, will be different depending on whether χ is a fermion (F) or a scalar (S) particle; both can be written in terms of the boost factor M = 1/(γ(1 − β cos θ)).
Detection via neutral-current events
In the previous section we have shown that the dark matter flux tends to be more energetic than the neutrino flux at off-axis locations, and that the angular dependence of the spectrum is also different for the signal and background. We now evaluate if the signatures for the signal and background events in the detector are sufficiently different to allow a dark matter search at neutrino detectors.
In the model considered in this work, the dark matter particles produced at the target would give an excess of neutral-current events at the detector, which in principle may be confused with neutrino neutral-current events. Since the dark matter flux is expected to be more energetic, we consider only deep-inelastic scattering events, and we require that the energy deposited by the hadronic shower at the detector is above 3 GeV. This requirement further suppresses the neutrino contribution with respect to the dark matter signal.
The total cross section as a function of the energy of the incident particle, as well as the hadronic energy distributions, are computed with MadGraph both for the neutrino and dark matter events since, in this range, the cross section can be computed within the parton model. We have checked that the neutrino neutral-current cross section obtained with MadGraph is approximately σ_ν^NC ∼ 10⁻² pb for neutrino energies around 10 GeV, which is in reasonable agreement with the literature, see e.g. ref. [47]. In the case of the dark matter, due to the much lighter mediator mass, the cross section is much larger. For instance, M_{Z′} = 3 GeV, g_z = 0.1 and z_χ = 3 gives a dark matter neutral-current cross section of σ_χ^NC ∼ 5 pb for E_χ ∼ 10 GeV. The much larger interaction cross section will provide an extra enhancement of the signal with respect to the neutrino background.
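To get a feeling for the numbers, the expected event yield is simply the flux times the cross section times the number of nucleon targets. The sketch below folds the benchmark cross sections quoted above with a placeholder integrated flux for an 800-ton detector; the flux value is a made-up assumption inserted only to show the arithmetic, not a result of this work.

```python
# Back-of-the-envelope event yield: N = Phi x sigma x N_targets.
AVOGADRO = 6.022e23
PB_TO_CM2 = 1e-36

detector_mass_g = 800e6                  # ~800 t detector, as assumed above
n_targets = detector_mass_g * AVOGADRO   # number of nucleons (A grams contain A*N_A nucleons)

sigma_nu_pb = 1e-2   # quoted NC neutrino cross section at ~10 GeV
sigma_chi_pb = 5.0   # quoted NC dark matter cross section for the benchmark point

flux_per_cm2 = 1e7   # HYPOTHETICAL integrated flux through the detector (placeholder)

for name, sigma_pb in [("neutrino", sigma_nu_pb), ("dark matter", sigma_chi_pb)]:
    n_events = flux_per_cm2 * sigma_pb * PB_TO_CM2 * n_targets
    print(f"{name}: ~{n_events:.0f} events for the assumed flux")
```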
At first sight, the kinematics of signal and background scattering should be rather different. At the matrix element level there is a notable difference due to the small M_{Z′}/M_Z ratio. The Z′ propagator is proportional to (q² − M_{Z′}²)⁻¹, q² being the squared momentum transfer. For the background, instead, M_{Z′} is replaced with the much larger Z mass, and the momentum transfer is negligible in comparison. Nevertheless, the differences do not translate into a very different energy deposition in the detector. To show this explicitly we have simulated both dark matter and neutrino interactions. The probability to get a hadronic shower with a given energy, for a fixed value of the energy of the incident particle (either a neutrino or a dark matter fermion), is shown in figure 4. As expected, the neutrino recoil energy is somewhat harder. However, this is a subdominant effect, while the largest differences between signal and background will be those associated with production.
In order to consider the optimal location for a detector and estimate the sensitivity to light dark matter, we should take into account production and detection together and compute the number of signal and background events that a detector would observe at an off-axis angle θ. Heavier mediators will generally broaden the angular distribution of the dark matter particles exiting the target, therefore increasing the signal rates for off-axis locations. The angular distribution will also be different depending on whether the particle produced in the Z′ decay is a fermion or a scalar.
The behavior of the total number of events with the off-axis angle is shown in figure 5, for the background as well as for three potential dark matter signals. The distance to the detector is fixed to L = 745 m in this figure, and the angular acceptance of the detector is taken into consideration. As expected, the background falls much more rapidly than the signals with the off-axis angle, which motivates to put the detector a few degrees offaxis. The effect of the heavier mediator mass can be seen from the comparison between the dotted and dot-dashed lines, while the effect of the spin of the dark matter particle is clearly seen from the comparison between the dashed and dot-dashed lines. As can be seen from the figure, the effect coming from the spin of the produced particle is the dominant. As expected, in the scalar scenario, more off-axis locations are clearly preferred, while if the dark matter particle is a fermion the preference is not as strong. The effect of the Z mass is subdominant.
Optimal detector location and expected sensitivity
From the results shown in section 3 it is evident that, in order to achieve enough suppression of the neutrino background, an off-axis location for the detector is preferred. In this section, we make this statement more precise and determine the ideal location for a future LBNF detector to conduct a search for new light degrees of freedom coupled to the SM via a new vectorial force. For this purpose, we have computed the ratio between the total number of signal events (S) and the expected statistical uncertainty of the background event sample (√B), as a function of the off-axis angle and the distance to the detector. Our main result is summarized in figure 6, where the different lines correspond to iso-contours for particular values of S/√B, as indicated in the labels. The left panel shows the regions obtained for a Z′ with a mass of 3 GeV coupled to fermionic dark matter, while the right panel shows the results for a Z′ of 5 GeV coupled to a scalar particle. In both cases, the charge has been fixed to z_χ = 3, and the coupling is set to g_z = 0.1. A hypothetical ideal detector of approximately the MiniBooNE detector size has been assumed.
As expected from the results shown in section 3 (see also [15]), the dependence with respect to the off-axis angle is different for the fermion and scalar cases. As can be seen from the plot, the ideal position of the detector in the scalar case with a heavier mediator shows a stronger preference for off-axis locations, while in the case of fermions it is less pronounced. It should also be noted that, since in the right panel the mediator chosen is heavier, the signal event rates will be consequently suppressed. Thus, the values shown in the contours for the S/ √ B are lower in this case. In order to improve the sensitivity to light dark matter we must go further off-axis and study detectors that are not traditionally considered to be on the NuMI beamline. Our choice for an optimal detector is determined by the attempt of optimizing simultaneously the reach for both scalars and fermions. We therefore identify the ideal position (marked by a star) to be at roughly 6 • off-axis and at a distance of 200 m from the target, being the minimal distance physically allowed by the presence of the decay pipe and focusing horn. Interestingly, the MiniBooNE detector (marked by a circle), which is on-axis with respect to the Booster beamline, is very close to the optimal off-axis angle identified in our study, although at a longer distance from the NuMI target (L ∼ 745 m). For reference, the approximate location of the NOvA near detector is indicated by a triangle in figure 6. As explained in the previous section, we only consider neutrinos emitted from kaon decays as source of background. It should thus be kept in mind that, for angles close to the neutrino beam direction (i.e., for angles below 2 • approximately) our computation may be underestimating the total number of background events. This is indicated in figure 6 by the horizontal purple band. Just as an example, we checked that at the NOvA near detector about 10 6 deep-inelastic scattering neutral-current events are expected when all neutrinos (coming both from π and K decays) are considered in the computation. This is an order of magnitude above the result obtained when only neutrinos coming from K decays are considered. From a similar argument it follows that the MINOS near detector would be even less sensitive to a possible light DM signal, being on-axis with respect to the neutrino beam.
As explained in section 2, the model under consideration in this work contains a very small number of free parameters, namely: the coupling g_z, the charge of the dark matter under the U(1)_B group, z_χ, and the mass of the mediator between the SM and the hidden sectors, M_{Z′}. In this section, we will keep the value of z_χ fixed to z_χ = 3, and determine the expected sensitivity to the coupling g_z, as a function of the mediator mass. Our results will be shown for the optimal detector location identified in section 4, assuming an ideal detector of approximately MiniBooNE size, with perfect detection efficiency for neutral-current events. For comparison, we will also show the expected results for the MiniBooNE detector location (always considering the NuMI target as the production point for the dark matter beam). It should be kept in mind that, since no special run would be needed to perform this search, an analysis could in principle be done using their past data (including precise input about detector size and performance).
In order to determine the sensitivity to the new coupling, a binned χ² analysis is performed. The event rates are binned according to the energy deposited in the detector by the hadronic shower, using 1 GeV bins. In order to further reduce the background event rates, a minimum threshold of 3 GeV is imposed. A Poissonian χ² is then built as
$$\chi^2 \,=\, 2\sum_i\left[\,N_{{\rm tot},i} - N_{{\rm bg},i} + N_{{\rm bg},i}\,\ln\frac{N_{{\rm bg},i}}{N_{{\rm tot},i}}\,\right] ,$$
where N_{bg,i} stands for the background events in the i-th bin, and N_{tot,i} stands for the total number of events expected in the same bin, including the background plus a possible contribution from the signal (which depends on M_{Z′} and g_z).
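A minimal implementation of such a statistic is sketched below. It assumes the standard Poissonian (Baker-Cousins) form evaluated on an Asimov data set equal to the background-only expectation, scales the signal relative to a reference coupling using the S ∼ g_z⁶ behavior noted in the text, and uses made-up binned rates purely to illustrate the scan over the coupling.

```python
import numpy as np

def poisson_chi2(n_tot, n_bg):
    """Binned Poissonian chi^2 (Baker-Cousins) for prediction n_tot vs 'data' n_bg."""
    n_tot = np.asarray(n_tot, dtype=float)
    n_bg = np.asarray(n_bg, dtype=float)
    terms = n_tot - n_bg
    mask = n_bg > 0
    terms[mask] += n_bg[mask] * np.log(n_bg[mask] / n_tot[mask])
    return 2.0 * np.sum(terms)

# Hypothetical binned rates above the 3 GeV hadronic-energy threshold (1 GeV bins).
bg = np.array([40.0, 25.0, 15.0, 8.0, 4.0, 2.0])
sig_ref = np.array([10.0, 9.0, 8.0, 6.0, 4.0, 3.0])   # signal at a reference coupling g_ref
g_ref = 0.1

for g in (0.03, 0.05, 0.08, 0.10):
    sig = sig_ref * (g / g_ref) ** 6      # signal event rate scales as g_z^6
    chi2 = poisson_chi2(bg + sig, bg)     # 'data' = background-only expectation (Asimov)
    print(f"g_z = {g:.2f}:  chi2 = {chi2:6.2f}")
```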
The expected sensitivity contours are shown in figure 7 for two possible detector locations: the optimal one (solid black lines) and the MiniBooNE location (dashed black lines). In both cases, a total exposure of 3.6×10²¹ PoT has been considered. This corresponds to the nominal NOvA running time of six calendar years [48]. The contours are shown at the 90% C.L. for 2 degrees of freedom (d.o.f.), and have been obtained assuming fermionic dark matter. For comparison, the strongest previous experimental bounds are also shown by the colored regions: monophoton searches at BaBar (yellow); and J/ψ (green) and Υ (blue) invisible decay searches, as discussed in section 2.
For simplicity, no systematic errors have been considered when obtaining the χ² contours. The largest contribution to the total systematic error is expected to come from the uncertainties affecting the neutral-current deep-inelastic neutrino cross section, for which little experimental data is available [47]. For reference, the MiniBooNE collaboration recently measured the flux-averaged quasi-elastic neutral-current cross section with an integrated ∼ 20% uncertainty [49]. Similar uncertainties (at the 20-25% level) affect current measurements of neutral-current single-pion cross sections, see ref. [47]. The second important contribution to the systematic errors relevant for this search would come from flux uncertainties. For instance, in ref. [44] the uncertainties affecting the NuMI ν_µ flux measured at the MiniBooNE location were at the 9% level (for neutrino energies at or below 3 GeV). In principle, one could expect these uncertainties to be larger at higher neutrino energies.
Nevertheless, due to the strong dependence of the signal event rates on the coupling (S ∼ g_z⁶, see section 3), we expect the final χ² contour to remain largely unaffected by background normalization uncertainties. A larger effect could come from the detector performance parameters (detection efficiencies, for instance), since the sensitivity of the experiment in this scenario would be largely limited by statistics. A more careful study by the experimental collaborations is therefore needed to determine the final sensitivity of the search proposed here.
Conclusions
The NuMI and LBNF neutrino beams rely on high-intensity proton fixed target facilities, with proton energies around 100 GeV, which can also be exploited to search for new light degrees of freedom. In particular, they could be essential to search for dark matter particles with masses below a few GeV, inaccessible at conventional direct detection experiments.
The reason is that, if such dark matter particles exist and interact with nucleons, then a dark matter beam could be directly produced during proton collisions at the NuMI or LBNF targets. The subsequent dark matter detection would require a detector sensitive to neutral-current events, placed within a few hundred meters from the target. For a signal of this kind, though, neutrinos constitute the most relevant background. In this work we have investigated how it can be reduced. We have concentrated here on a scenario where both quarks and dark matter particles interact with a Z boson of mass in the 1-10 GeV range. The existing constraints on a Z boson of this type are loose, allowing its gauge coupling to be as large as 0.1. The Z can then be produced in large numbers at the LBNF, where its prompt decays into two dark matter particles would generate a wide beam. We have studied the dependence of the statistical significance of the signal on the off-axis angle and on the distance between the detector and the target. We have found that the ideal placement of a detector is at an off-axis angle of about 6°, and that a detector of the size of the MiniBooNE detector would be sensitive to a Z gauge coupling as low as 0.05. Our study motivates a proton beam at 120 GeV (or higher) in order to increase the sensitivity to models with a multi-GeV Z boson, resonantly produced at the target. It should be stressed that the strategy proposed in this work to search for dark matter can run symbiotically with the neutrino program, and a dedicated run would not be needed.
We have also discussed the detection of a dark matter beam that may be produced in the NuMI beamline using existing detectors. The NOνA near detector would suffer from a large neutrino background due to its small off-axis angle. A similar argument applies to the MINOS near detector. On the other hand, the detectors placed along the Booster beamline, such as MiniBooNE, MicroBooNE and possibly ICARUS, coincidentally subtend an ideal angle with respect to the NuMI beamline for conducting these searches. The lessons from this are twofold. First, the existing data set from MiniBooNE may be used to probe new regions of the parameter space of dark matter models. Second, this reveals strong synergies between the long- and short-baseline neutrino programs regarding new physics searches, which should be exploited and maximized in the future.
Acknowledgments
We are grateful to Zarko Pavlovic for providing useful input regarding the kaon distributions and NuMI fluxes, as well as for numerous discussions. We thank Olivier Mattelaer for his help with MadGraph simulations, Zelimir Djurcic for providing the neutrino fluxes at the NOvA near detectors, and André de Gouvea, Lisa Goodenough, Raoul Rontsch and Sam Zeller for useful discussions. We would also like to thank Roberto Vidal for writing a python parser [42] for LHE files.
A Computation of the neutrino flux from kaon decays
As mentioned in section 3.1, the dark matter flux entering a detector placed at an off-axis location with respect to the beam direction is relatively easy to compute, since the Z is emitted very forward and to a good approximation its direction is the beam axis. However, the case of neutrinos being produced from kaon decays is very different, since kaons are typically produced at the target together with other hadrons which balance their p_T. Thus, kaons generally subtend a non-zero angle with respect to the beam direction, which has to be accounted for when computing the neutrino flux entering the detector.
The kaon energy and momentum distributions have been obtained from publicly available data in refs. [44][45][46]. They were derived from a Monte Carlo simulation of the NuMI target, when exposed to a 120 GeV proton beam. Given this distribution of kaons, what is the neutrino distribution? Since kaons decay relatively promptly, it is a good approximation to consider that all kaons decay at the beginning of the decay pipe. We will denote by θ_K, φ_K the polar coordinates of the kaon in the lab frame, where θ_K is the polar angle with respect to the z-axis (which we choose to be the beam direction), and φ_K corresponds to the angle for a rotation in the x-y plane around the z-axis.
It is important to recall that the angular distribution of neutrinos produced in the decay of a kaon with energy E_K and velocity β_K depends only on the kaon energy and on the neutrino angle with respect to the kaon momentum, θ_ν. Moreover, the energy of a neutrino coming from a kaon with energy E_K is fixed by the two-body decay kinematics. Therefore, for a fixed angle between the neutrino and the kaon, the energy of the neutrino is automatically determined by the kaon momentum.
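For reference, the standard two-body decay kinematics behind these statements (e.g. for K → μν with a massless neutrino) can be written as follows; this is textbook kinematics quoted for convenience, not necessarily the exact form or notation of the equations in the original text:

```latex
% Rest-frame neutrino energy and lab-frame energy as a function of the lab angle \theta_\nu
E_\nu^{*} = \frac{m_K^2 - m_\mu^2}{2\,m_K}, \qquad
E_\nu = \frac{m_K^2 - m_\mu^2}{2\left(E_K - |\vec p_K|\cos\theta_\nu\right)}
      = \frac{m_K^2 - m_\mu^2}{2\,E_K\left(1 - \beta_K\cos\theta_\nu\right)} .
```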
The computation of the total number of neutrinos produced from kaon decays that will reach the detector can be written as: Here, N K (E K , θ K ) corresponds to the number of kaons with energy E K and angle θ K which are produced in the target and decay producing a neutrino, and are extracted from a binned histogram given by the Monte Carlo simulation in refs. [44][45][46]. Thus, the two integrals in θ K and E K can be replaced by a discrete sum. Moreover, the kaon-neutrino system has a symmetry around the lab frame z-axis, so integration over φ K only affects the overall normalization by a factor φ det K /π, where φ det K is the aperture of the detector in the φ K coordinate.
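The same counting can be illustrated with a simple Monte Carlo, which replaces the semi-analytic integral described in the text: kaons are drawn from the binned (E_K, θ_K) histogram, decay isotropically in their rest frame, and the resulting neutrinos are histogrammed in energy if their angle α to the beam axis falls inside the detector acceptance. The kaon bins, acceptance window, and the restriction to the K → μν channel below are placeholders for illustration only.

```python
import numpy as np

M_K, M_MU = 0.4937, 0.1057                     # GeV; K+ and muon masses
E_STAR = (M_K**2 - M_MU**2) / (2.0 * M_K)      # neutrino energy in the kaon rest frame (K -> mu nu)

rng = np.random.default_rng(0)

def neutrino_counts(kaon_bins, alpha_min, alpha_max, e_bins, n_per_bin=10000):
    """kaon_bins: iterable of (E_K [GeV], theta_K [rad], N_K) entries from the target simulation.
    (alpha_min, alpha_max): angular acceptance of the detector with respect to the beam axis [rad].
    Returns neutrino counts per energy bin (only the K -> mu nu channel is considered)."""
    counts = np.zeros(len(e_bins) - 1)
    for E_K, theta_K, N_K in kaon_bins:
        gamma = E_K / M_K
        beta = np.sqrt(1.0 - 1.0 / gamma**2)
        # Isotropic two-body decay in the kaon rest frame
        cos_star = rng.uniform(-1.0, 1.0, n_per_bin)
        phi_nu = rng.uniform(0.0, 2.0 * np.pi, n_per_bin)
        # Boost to the lab frame (massless neutrino)
        E_nu = gamma * E_STAR * (1.0 + beta * cos_star)
        cos_nu = np.clip((cos_star + beta) / (1.0 + beta * cos_star), -1.0, 1.0)
        sin_nu = np.sqrt(1.0 - cos_nu**2)
        # Angle alpha between the neutrino and the beam (z-) axis, cf. eq. (A.4)
        cos_alpha = np.cos(theta_K) * cos_nu - np.sin(theta_K) * sin_nu * np.cos(phi_nu)
        alpha = np.arccos(np.clip(cos_alpha, -1.0, 1.0))
        accepted = (alpha >= alpha_min) & (alpha <= alpha_max)
        counts += (N_K / n_per_bin) * np.histogram(E_nu[accepted], bins=e_bins)[0]
    return counts

# Placeholder kaon bins and a detector covering roughly 0.10-0.11 rad (~6 degrees) off-axis
toy_kaons = [(20.0, 0.005, 1.0e5), (40.0, 0.003, 5.0e4), (60.0, 0.002, 2.0e4)]
spectrum = neutrino_counts(toy_kaons, 0.100, 0.110, e_bins=np.arange(0.0, 10.5, 0.5))
print(spectrum)
```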
In order to obtain the number of neutrinos reaching the detector, the integration limits have to be chosen according to the aperture of the detector. In particular, once the detector shape is considered, the limits on φ_ν will depend on the value of θ_ν. Both neutrino coordinates in the lab system will also depend on the value of θ_K. The integration in φ_ν can be performed directly, and we are left with a function which depends on θ_ν and the kaon variables, so that eq. (A.2) can be rewritten accordingly (eq. (A.3)). In order to determine the integration limits, we have to take into account that the angular aperture of the detector is defined in the variable α, which can be expressed as a function of the kaon and neutrino angular coordinates as:
cos α = − sin θ_ν cos φ_ν sin θ_K + cos θ_ν cos θ_K .   (A.4)
This defines α as the angle between the neutrino produced in the decay and the beam (or z-) axis in the lab frame. In principle, the simplest solution would be to add a Heaviside
function inside the integral, in such a way that the integrals in θ_ν and φ_ν are only performed for those values of θ_ν and φ_ν which satisfy the angular cut on α. We found this to be computationally rather expensive, though. Instead, we opted for the following approximation. For a very thin binning in the neutrino energy, the interval of allowed values of θ_ν which give a neutrino inside the bin is very narrow, and much smaller than the aperture of the detector. Therefore, it can be easily checked whether the values of θ_ν in this interval give a value of α inside the aperture of the detector. Within this approximation, the integral in θ_ν can be taken as the value of the function in the middle of the integrating interval, times the size of the interval. Also, the integrand does not depend on φ_ν anymore and can be integrated independently. As a result, we obtain eq. (A.5), in which N_ν(E_ν,i) corresponds to the number of neutrinos entering the detector with energies inside the i-th neutrino energy bin. Before computing the contribution to the neutrino flux for a given energy bin by using eq. (A.5), though, the acceptance condition in α is required to be satisfied, i.e., it is required that the interval of θ_ν corresponding to each neutrino energy bin gives an interval in α inside the angular acceptance of the detector. | 2023-01-21T14:11:18.603Z | 2016-04-01T00:00:00.000 | {
"year": 2016,
"sha1": "fafdb30c9defaa579c4c87f7df3f9aca84554498",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP04(2016)047.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "fafdb30c9defaa579c4c87f7df3f9aca84554498",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
218824675 | pes2o/s2orc | v3-fos-license | Determinants of the mixed crop and livestock farming practice among smallholder farmers in Magelang Regency, Central Java Province
Mixed crop and livestock (MCL) farming can help farmers improve their farming practice. This study aims to analyse the factors that influence the adoption of MCL farming in Magelang Regency. A multistage random sampling method was used to select the locations and the respondents. Data were collected through personal interviews based on a structured questionnaire from 161 smallholder farmers. A logit model was applied to analyse the binary choice of practicing MCL farming. The results indicated that age, consultation with extension agents, and the number of livestock kept by the farmer were significant at the 10% level, experience in raising livestock was significant at the 5% level, and the type of ruminants was significant at the 1% level. It may be concluded that younger farmers and farmers who raised large ruminants were more likely to adopt the MCL farming practice.
Introduction
Indonesia is known as an agrarian country, which makes agriculture one of the main sectors contributing to the national and regional economy, e.g. in Magelang Regency. In this regency, agriculture contributed 21.78% of the Gross Regional Domestic Product (GRDP) in 2017. Agriculture absorbed 34.52% of the labour force and is the primary occupation in rural areas [1], [2]. The largest agricultural subsector is food crops, which, ranked by harvested area, are rice, corn, cassava, sweet potato and peanuts, with rice accounting for 33.78% of the total area [2]. Many farmers in rural areas are smallholder farmers who own less than 2 ha of land and mostly keep ruminant livestock, between 2-4 head of large ruminants and 2-6 head of small ruminants, as assets and savings [1], [3]-[5]. The ruminant livestock populations in Magelang Regency are 87,750 goats, 92,100 sheep, 78,286 beef cattle and 5,978 buffaloes, with the buffalo population ranking 3rd among the 35 regencies/cities in Central Java [6].
To develop the agricultural sector in rural areas, the government of Magelang Regency introduced programs promoting MCL farming [9]. The use of agricultural and livestock waste as an input makes farming more efficient, thus increasing farmers' income [10]. In addition, MCL farming is an eco-friendly farming system that supports sustainable agriculture [11], [12]. Many technologies and much information and knowledge about MCL farming were delivered in these programs, such as techniques for making organic fertilizers and natural pesticides and for processing agricultural waste such as dry straw, including fermentation. These technologies were disseminated to farmers so that they could change their conventional farming methods and make their farming more efficient and profitable.
Although MCL farming is more efficient and profitable than conventional farming, smallholder farmers have not adopted it universally [7]. Many studies have shown that the adoption of MCL farming is affected by farmer characteristics and farm characteristics [7], [13]-[19]. The farmer's age, family size, and experience in both crop farming and raising livestock can affect adoption [15], [16]. Furthermore, access to information and knowledge, such as formal education, frequent contact with extension agents, membership in a farming group, and the number of trainings attended, can determine adoption [13], [14], [18], [19]. Farm characteristics such as land size, family labour, the number and type of animals kept, the type of crops grown, and land and animal ownership can also affect adoption [16], [17]. Therefore, the goal of this study is to analyse whether those factors influence the adoption of MCL farming. Primary data were used to analyse the determinants of MCL farming adoption among smallholder farmers in Magelang Regency through a multistage random sampling method. The data were collected through personal interviews based on a structured questionnaire in February-July 2019 from 5 randomly chosen districts, i.e. Bandongan, Candimulyo, Kaliangkrik, Ngluwar and Salam. The study included 161 respondents, of whom 60 were categorized as adopters of MCL farming while the others were non-adopters. Adopters were identified as farmers who constantly used crop waste as feed and livestock waste as fertilizer on their cropland for 3 growing seasons. Non-adopters were identified as farmers who only infrequently used either crop waste as feed or livestock waste as fertilizer, or who used neither crop-waste feed nor livestock-waste fertilizer.
Materials and methods
Many researchers have examined the factors that influence the adoption of MCL farming, including studies showing that farmer and farm characteristics determine the adoption of innovations [7], [13]-[18]. The variables were categorized into two groups: the dependent variable and the independent variables. The dependent variable is the adoption of MCL farming, while the independent variables are shown in Table 1.
Logistic regression analysis was applied to analyse the determinants of MCL farming adoption since it is simple to use [15], [16]. The adoption determinants can be analysed using a logit model of the form: Y_i = β_0 + Σ_j β_j X_ij + u_i, where Y_i represents MCL farming adoption (the dependent variable), taking the value 1 if farmer i belongs to the adopter group and 0 otherwise; X_ij are the independent variables expected to influence the adoption of MCL farming; β_0 is the intercept; β_j are the regression coefficients of the independent variables; u_i is the residual; and i indexes the individual smallholder farmers [13], [15].
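As an illustration, a binary logit of this kind can be estimated with standard statistical software. The sketch below uses Python's statsmodels; the data frame is synthetic and the column names (age, extension_contact, livestock_number, livestock_experience, ruminant_type) are hypothetical stand-ins for the study's variables, not the actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame: one row per smallholder farmer
rng = np.random.default_rng(1)
n = 161
df = pd.DataFrame({
    "adopt": rng.integers(0, 2, n),              # 1 = MCL adopter, 0 = non-adopter
    "age": rng.normal(48, 10, n),
    "extension_contact": rng.integers(0, 2, n),  # consults extension agents
    "livestock_number": rng.poisson(3, n),
    "livestock_experience": rng.normal(12, 6, n),
    "ruminant_type": rng.integers(0, 2, n),      # 1 = large ruminants
})

X = sm.add_constant(df.drop(columns="adopt"))
model = sm.Logit(df["adopt"], X).fit(disp=False)
print(model.summary())   # coefficients, z-statistics and p-values per predictor
```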
Results and discussion
The outcome of the logistic regression showed that age, consultation with extension agents, and the number of livestock kept by the farmer were significant at the 10% level, experience in raising livestock was significant at the 5% level, and the type of ruminants was significant at the 1% level. The determinants that affect the adoption of MCL farming are presented in Table 2. The results showed that the farmer's age has a negative effect on adoption, which indicates that younger farmers have a higher probability of adopting MCL farming than older farmers [16], [20]. Consulting an extension agent has a positive influence on the adoption of the innovation. Extension agents are the right hand of the government in carrying out agricultural programs and disseminating innovative technologies to farmers [20]. Therefore, MCL farming will be adopted more widely if farmers can reach extension agents easily. The number of livestock kept by farmers increases the adoption of MCL farming. This relates to the manure produced by the livestock: the more livestock a farmer keeps, the higher the probability of adopting MCL farming [14]. Farmers who own a large number of livestock tend to be better off than farmers who have only a few, so they can access more knowledge, information and other inputs, and their adoption rate is therefore higher [21]. Experience in raising livestock was found to be significant; our findings indicate that long experience in raising livestock encourages farmers to utilize crop waste as feed for livestock. This occurs because of limited land and feed resources, and this experience also increases farmers' awareness of profitable farming through the use of crop residues as feed [21].
Most smallholder farmers in rural areas keep ruminants as savings and insurance [5], [22]. The type of ruminant kept by farmers is affected by the maintenance costs of the ruminants. In most situations in rural areas, the poorest farmers keep mainly poultry, the less poor keep small ruminants, and large ruminants (cattle and buffaloes) are kept by the more prosperous farmers [23]. Our research found that farmers who keep small ruminants own smaller plots of land than farmers who keep cattle and buffaloes. Farmers who own larger farms have a higher chance of adopting new innovations or technologies, because some innovations or technologies entail higher costs [18], [24]. Meanwhile, the rate of adoption is lower among poorer farmers, who lack resources in many ways, such as information about new technologies, funding, and other inputs [21], [24], [25]. This is why extension agents have a key role in disseminating new innovations and technologies to rural farmers, helping them to solve their problems and enhance their quality of life.
Conclusions
Factors influencing the adoption of MCL farming were estimated using primary data from smallholder farmers in Magelang Regency. This study showed that the adoption of MCL farming is affected by farmer and farm characteristics. The MCL farming practice is more likely to be adopted by younger farmers. Farmers who can reach extension agents easily have a higher probability of adopting MCL farming. Experience in raising livestock has an impact on the adoption of MCL farming. The probability of adopting the MCL farming practice is also influenced by the number and type of livestock kept by farmers. | 2020-04-16T09:13:09.977Z | 2020-04-15T00:00:00.000 | {
"year": 2020,
"sha1": "f58dc7e7b86063ab1d94eb9f745771a0ccd66794",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/454/1/012010",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "f6384284daaf22ab8d7b47255ed4e2e07ed59d90",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
219061636 | pes2o/s2orc | v3-fos-license | METHODS FOR ASSESSING THE ECONOMIC VIABILITY OF BIOGAS PLANT INVESTMENTS
The development of the global economy and booming population growth have resulted in the increased consumption of energy and the growing need for alternative energy sources. This paper presents the use of yield methods for assessing the economic justification of investments in biogas plants, as well as a review of the economic results of biomass energy production, with the aim of determining the present value of future benefits from the electrical and heat energy generated in such plants for fruit drying and processing purposes. The purpose of this paper is to determine the economic viability of investments in renewable energy sources such as biogas plants that can be used effectively in the production and storage of dried fruit. The yield-method results obtained indicate that the production of electrical and heat energy from waste biomass in biogas plants can produce positive financial results.
INTRODUCTION
The growing world population and the global increase in economic activities lead to an increasing need for the production and consumption of energy. Fossil fuels such as oil, coal and natural gas (formed from the remains of plants and animals) are non-renewable sources of energy which are being continuously depleted. However, environmental pollution issues have been receiving increased attention as they pose significant challenges to global economic growth. Natural resources and the environment itself are becoming the limiting factors for human economic activities. Over the past few decades, concern about the environment has been growing not only globally, but also at the national and local levels (Rodić et al., 2011).
Serbia has a relatively high growth rate of energy consumption (6-7 % annually) and primary energy reserves six times lower than the world average. The use and management of energy sources in Serbia must be very rational, exploiting all available sources such as waste fuel (Zekić et al., 2010). Therefore, alternative renewable energy sources have been gaining prominence in the country over the past decades. The renewable energy comprises the energy of the sun, water, wind, geothermal fluids and biomass. The basic characteristic of renewable energy sources is their natural origin and the fact that they are (completely or partially) naturally replenished. Their overall energy potential is enormous. As a renewable energy source, biomass can be used for the production of biofuels, electrical energy and heat energy. In contrast to the production of energy from fossil fuels, the production of energy from biomass is considered an environmentally friendly concept of energy production as it only releases carbon dioxide in the process of photosynthesis. Communal, industrial and other types of wastewater, byproducts from animal breeding and plant biomass from field farming can be used as raw materials for biogas production. The production and use of biogas energy contributes to environmental protection and global population health. It also represents a superior type of energy production because the electrical energy thus generated can be transported to other locations. Moreover, heat energy is a byproduct of biogas production, which is best used where it is created.
The production of energy from renewable sources is more expensive than the fossil fuel energy produced using conventional technologies. Therefore, state support measures are of paramount importance to renewable-source energy production (Tica et al., 2012). The economic results of producing electrical and heat energy from biomass in biogas plants should be studied with the aim of providing the energy thus generated to a larger number of users (Tica et al., 2013).
The drying and processing of fresh fruit are characterized by significant electrical and heat energy requirements. Both environmental and economic aspects of dried fruit production emphasize the increased use of alternative energy sources such as biomass. The share of methane in biogas approximates 60 %, and the heating value of biogas is 21,500 kJ/Nm³ or 5.97 kWh/m³ (Đulbić, 1986). Many areas in Serbia and particularly Vojvodina are considered favorable for agricultural production, with tremendous potential for biomass energy production.
MATERIAL AND METHOD
This paper presents a calculation of the economic results of electrical and heat energy production from biomass in a biogas plant. The yield method was employed for assessing the economic results so as to determine the profitability of such energy production. The discounted cash flow, i.e. the cash flow after debt service, was used for the yield value appraisal (Tica, 1993, 1997, 2009; Marko et al., 1998; Leko et al., 1997; Ryan, 2007; Milić, 2010). The cash flow considered included all inflows and outflows of funds during a period of five years. Residual values were assessed on the basis of the net cash flow after the end of the projection period using the Gordon model.
The yield method of assessment implies that the value of fixed assets is based on the present values of the cash flow in the projected period and the present residual values (Tica et al., 2009). Cash flow is a sum of the company's business results, which, relative to construction objects, appear as incomes from rent and amortization. Moreover, cash flow also consists of potential funds generated from the use of the construction object during the calculation period. The calculation of the net cash flow after debt service is shown in Table 1.
Table 1. Calculation of the net cash flow after debt service
No.  Item                                                    Calculation
3    Gross profit (before amortization, interest and tax)    1 - 2
4    Amortization
5    Operating profit                                        3 - 4
6    Interest expenses
7    Profit before tax                                       5 - 6
8    Taxes
9    Net profit                                              7 - 8
10   Amortization
11   Gross cash flow                                         9 + 10
12   Increase of long-term debt
13   Increase of working capital
14   Fixed assets investment
15   Long-term debt repayment
16   Net cash flow                                           11 + 12 - 13 - 14 - 15
According to Tica et al. (2009), the present value of expected future cash flows is determined using a discount rate which expresses the time value of money. The initial step of cash flow discounting is the multiplication of the cash flow values projected for a specific period of time by the discount factors determined for that period. Discount factors represent a reciprocal value of the compound interest calculation, where discount rates are used instead of interest rates.
Discount rates are based on risk-free rates, and the discount factors are calculated as follows: D = 1 / (1 + d)^n, where D is the discount factor for a specific year, d is the discount rate, and n is the year to which the discount factor is applied (values from 1 to +∞). Discount rates represent the cost of capital, which reflects the risk level of investments (Serdar Raković, 2016). As a measure of the time value of money, discount rates are used to calculate the present value of expected future cash flows on the basis of the risk-free rate, the risk of investment in a certain country or region, and project-related risks.
According to Tica et al. (2009), the yield value represents the sum of the discounted cash flow values in the projected period and the residual values. As the projection period includes only a limited number of years, future cash flow values are calculated for the period following the projection period. The values of the future cash flow, i.e. the economic benefits not included in the projection period, are referred to as the residual values, which are calculated using the Gordon model: RV = DNNTr / (DS − SRr), where RV is the residual value, DNNTr is the discounted cash flow in the residual period (the first year after the projection period), DS is the discount rate, and SRr is the growth rate in the residual period.
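A minimal numerical sketch of this procedure is given below. The five annual cash flows and the residual-period growth rate are hypothetical placeholders, while the 13.95 % discount rate is the one estimated later in the paper; the residual value follows the Gordon formula as described above, with the residual-period cash flow taken as already discounted.

```python
def discount_factor(d, n):
    """Reciprocal of compound interest: D = 1 / (1 + d)^n."""
    return 1.0 / (1.0 + d) ** n

def yield_value(cash_flows, d, growth_residual):
    """Sum of discounted projected cash flows plus the Gordon-model residual value.
    cash_flows: net cash flows after debt service for the projection years (year 1..n).
    The residual-period cash flow is approximated from the last projected year."""
    pv = sum(cf * discount_factor(d, t) for t, cf in enumerate(cash_flows, start=1))
    # Discounted cash flow of the first residual year: last projected year grown by g, then discounted
    dnnt_r = cash_flows[-1] * (1.0 + growth_residual) * discount_factor(d, len(cash_flows) + 1)
    residual = dnnt_r / (d - growth_residual)          # Gordon model: RV = DNNTr / (DS - SRr)
    return pv, residual, pv + residual

# Hypothetical net cash flows (EUR) over a five-year projection, 13.95% discount rate, 2% growth
pv, rv, total = yield_value([180_000, 190_000, 200_000, 210_000, 220_000], 0.1395, 0.02)
print(f"PV of projected cash flows: {pv:,.0f}  residual value: {rv:,.0f}  yield value: {total:,.0f}")
```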
The data on the investments in and operating expenses of biogas plants for electrical and heat energy production in the Autonomous Province of Vojvodina were used for calculation purposes in this study. Operating expenses were computed using analytical calculations for each production line within a company or family household. The basic scheme for a production line was calculated using the following equation: p − t = d, where p is the planned or market production value, t denotes the total production expenses, and d marks the financial result (namely profit or loss). The results obtained for the biogas plant considered indicate that the mass of waste produced will amount to 20,000 tons annually, of which 15,000 tons will be waste from the industrial processing of agricultural products and approximately 5,000 tons silage mass.
The efficiency of production is one of the most significant indicators of business success. It reflects not only the soundness of the use of all production factors, but also the measure of production value and revenues against production costs. The minimum reference value of this indicator is 1.
The efficiency indicator (E) represents the ratio between total revenues and total expenses, i.e. between production values and production expenses: E = Total revenues / Total expenses
The profitability of production represents a tendency to achieve higher incomes on the total assets engaged and is most often calculated using the return on assets ratio (ROA): ROA = Profit before tax / Total assets. Higher ROA values indicate greater asset profitability.
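In code, these two ratios reduce to simple divisions. In the sketch below the revenue and expense figures are placeholders, while the profit before tax (€217,063) and the total investment (€3,767,310) are the values reported later in the paper for the plant considered.

```python
def efficiency(total_revenues, total_expenses):
    """E = total revenues / total expenses; values above 1 indicate efficient production."""
    return total_revenues / total_expenses

def roa(profit_before_tax, total_assets):
    """Return on assets = profit before tax / total assets."""
    return profit_before_tax / total_assets

print(efficiency(1_250_000, 1_000_000))       # hypothetical revenues/expenses -> 1.25
print(f"{roa(217_063, 3_767_310):.2%}")       # paper's figures -> about 5.76%
```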
RESULTS AND DISCUSSION
Organic substances containing carbon, nitrogen, phosphorus, potassium, magnesium, etc. can be used as raw materials in biogas production. Accordingly, the most convenient inputs for biogas production are communal and industrial wastewater and plant biomass. Agriculture is the economic sector with the greatest potential for producing inputs for biogas plants, such as manure and crop biomass. Biogas is a result of the anaerobic fermentation of manure (without the presence of oxygen), i.e. the activity of bacterial cultures found in compost. In the first phase, the activity of saprophytic bacteria causes carbon substances to convert into volatile acids and water. Subsequently, the acids are transformed into methane and carbon dioxide. In this process, the organic substances of solid waste are reduced by 50-70 %, resulting in digested manure (containing nitrogen, potassium and phosphorus) as a byproduct of biogas production (Mulić, 1995). In the present study, waste crop biomass is used as the basic raw material in biogas production.
The primary objective of this paper was to analyze and assess the yield value of investments in a biogas plant for the production of heat and electrical energy from biomass, with a total projected capacity of 740 kW of electrical energy and 888 kW of heat energy. The estimated value of investments amounted to €3,767,310, which is comparable to the usual investment value for this type of plants. A total of €3,150,000 was required for investments in the biogas plant and equipment, whereas a total of €617,310 was allocated for investments in the property and facility of the plant. All the data used in this study were obtained from the technical and technological documentation, which was the basis for the plant construction and operation.
A predicted interest rate of 6 % was also capitalized and added to the investment value.
The planned electrical and heat energy production was projected not to exceed the technical capacity of the plant, enabling a projected increase in the economic results of 2 % annually. In order to achieve such an increase, the material costs and other operating expenses had to be adjusted, whereas other expense categories remained unchanged. The projection of revenues and expenses is shown in Table 2.
The economic results obtained indicate that the production of heat and electrical energy from waste biomass can be cost-effective and profitable. The calculation of total revenues and expenses suggests that a biogas plant with a capacity of 740 kW can generate a profit before tax of €217,063 annually, supported by state-guaranteed purchase prices for electrical energy from renewable sources. Moreover, the revenues from biomass heat energy production, which can be used for fruit drying and processing, were included in the calculation of the total revenues. The planned revenues from the delivered electrical energy accounted for 82 % of the total revenues, whereas the planned revenues from the heat energy produced accounted for 18 % of the total revenues.
A discount rate of 13.95 % was estimated for the biogas plant considered, of which a 4.5 % share was jointly claimed by the risk-free rate and the risk rate of investment in the Republic of Serbia, whereas the risk rate of investment in the biogas plant considered accounted for 9.45 % of the discount rate established. Prior to the cash flow projection, a calculation of the current assets and liabilities was performed to provide a basis for the planned projections relative to balance sheets and income statements. The cash flow projection performed is presented in Table 4. The residual value calculation was based on the Gordon model. The total present value of the discounted cash flow was €662,572 in the five-year period considered, whereas the present residual value was €1,057,607. The yield value of the biogas plant considered is expected to be €1,720,179, with an estimated discount rate of 13.95 %.
Calculation of the efficiency and profitability of the project
The efficiency ratio, calculated as the ratio of total revenues to total expenses, amounts to 1.25, whereas the profitability ratio is ROA = 217,063 / 3,767,310 × 100 = 5.76 %. Owing to the high initial investments, a profitability of 5.76 % was found to be unacceptably low, indicating that this investment cannot be financed by loans at the current interest rates offered by commercial banks in Serbia.
The financial results obtained do not include all the outcomes of biomass electrical and heat energy production such as quality fertilizers thus produced and environmental protection benefits (Tešić et al., 2005). This study confirmed that biogas production is characterized by a great number of environmental benefits, which is consistent with the results reported elsewhere in the literature. Therefore, the total results of this production can only be assessed from wide-scale social and economic perspectives, including government support measures and incentives for the production of electrical and heat energy from renewable energy sources (Zekić et al., 2007).
CONCLUSION
Previous research has shown that renewable energy sources have great energy potential. However, the production of energy from renewable energy sources is more expensive than the fossil fuel energy produced by conventional means. Therefore, state support measures are of paramount importance to the renewablesource energy production.
The results obtained in this study indicate that the production of electrical and heat energy from waste crop biomass in a biogas plant can generate positive financial results. The planned revenues from the delivered electrical energy claimed a share of 82 % of the total revenues, whereas the planned revenues from the heat energy produced accounted for 18 % of the total revenues. On the basis of the projected revenues and expenses, a high level of efficiency was determined (1.25). However, these results are insufficient to ensure the economic viability of this type of investment, even though the calculation includes the revenues from the heat energy produced (which can be used for fruit drying and processing). The large investment funds required resulted in a low project profitability of 5.76 %, suggesting that this investment cannot be financed by loans at the current interest rates offered by commercial banks in Serbia.
Although the production of energy from crop biomass using biogas does not ensure the economically viable energy production, it significantly contributes to environmental protection and the conservation of natural resources. Governments should support the renewable energy production through price and tax policies which would ensure the economic viability of such production. Therefore, significant tax relief measures and favorable state-guaranteed purchase prices for the renewable energy production have been established in Serbia. | 2020-05-07T09:16:22.647Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "96e3dafa45a5d82d1f97479c1360f32c3945751b",
"oa_license": null,
"oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/1821-4487/2020/1821-44872001013M.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "cecb6a3c800f4305f5ec00a4293da796bd12449c",
"s2fieldsofstudy": [
"Environmental Science",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
13344224 | pes2o/s2orc | v3-fos-license | Prevalence and risk factors of moderate to severe obstructive sleep apnea syndrome in major depression: a observational and retrospective study on 703 subjects
Background Several studies have investigated the prevalence and risk factors of depression in subjects with obstructive sleep apnea syndrome. However, few studies have investigated the prevalence and risk factors for obstructive sleep apnea syndrome in major depression. The aim of this study was to examine the prevalence and risk factors of moderate to severe obstructive sleep apnea syndrome in a large sample of individuals with major depression. Methods Data from 703 individuals with major depression recruited from the research database of the sleep laboratory of the Erasme Hospital were analysed. An apnea-hypopnea index of ≥15 events per hour was used as cut-off score for moderate to severe obstructive sleep apnea syndrome. Logistic regression analyses were conducted to examine clinical and demographic risk factors of moderate to severe obstructive sleep apnea syndrome in major depression. Results The prevalence of moderate to severe obstructive sleep apnea syndrome in major depression is 13.94%. Multivariate logistic regression analysis revealed that male gender, snoring, excessive daytime sleepiness, lower insomnia complaint, presence of metabolic syndrome, age ≥ 50 years, BMI >30 kg/m2, ferritin >300 μg/L, CRP >7 mg/L and duration of sleep ≥8 h were significant risk factors of moderate to severe obstructive sleep apnea syndrome in major depression. Conclusion Moderate to severe obstructive sleep apnea syndrome is a common pathology in major depression. The identification of these different risk factors advances a new perspective for more effective screening of moderate to severe obstructive sleep apnea syndrome in major depression.
Background
Obstructive sleep apnea syndrome (OSA) is characterized by repetitive episodes of upper airway obstruction that occur during sleep and is usually associated with a reduction in blood oxygen saturation [1]. The clinical manifestations of OSA include witnessed apneas, snoring, choking/gasping episodes, excessive daytime sleepiness, non-restorative sleep, nocturia, sleep fragmentation/sleep maintenance insomnia, total sleep amount, morning headaches, loss of libido, irritability, and decreased concentration and memory [2]. Some of these symptoms are also present in mental pathologies, such as major depression, which may lead to an underdiagnosis of OSA in these subjects [3]. Both major depression and moderate to severe OSA (apnea-hypopnea index (AHI) ≥15/h) [4] are associated with a higher risk of cardiovascular morbidity and mortality [5,6], which justifies the need for effective treatment [7].
The co-occurrence of major depression and OSA may have a negative impact on the quality of life and is very frequent [8,9]. Indeed, in individuals with OSA, the prevalence of depressive affects may reach 63% [10], whereas in individuals with major depression, the prevalence of OSA (AHI ≥5/h) was 36.3% [11]. However, few studies have investigated the prevalence of moderate to severe OSA in major depression. In one example, Ong et al. [12] found a prevalence of 39% of this syndrome in a population of 51 individuals with major depression, whereas in the general population, the prevalence is 1-14% (9-14% of men and 2-7% of women) [13]. Thus, moderate to severe OSA appears to be more common in individuals with major depression than in the general population.
The classical OSA risk factors are age, male gender, body mass index (BMI), snoring, high blood pressure, metabolic syndrome, and sleep duration ≥8 h [14][15][16][17]. Although some of these OSA risk factors have been studied in major depression [11,12,18,19], the majority have not been validated for moderate to severe OSA in the particular subpopulation of individuals with major depression.
Regarding alcohol consumption, smoking, and the use of benzodiazepines and Z-drugs, data in the literature are contradictory concerning their potential role in promoting obstructive apneas [20][21][22][23]. Excessive daytime sleepiness is a common symptom in individuals with OSA [24] and may be measured using the Epworth scale (ESS) [2]. Nevertheless, its use as a predictor of OSA is controversial in the general population [25,26] and in major depression [18]. Additionally, even though the severity of depression is positively correlated with AHI [27], it does not predict the presence of OSA in major depression [12]. The use of these risk factors is therefore contradictory in the literature. Hence, it would be interesting to study such risk factors with a large sample of individuals with major depression to determine if, in this subpopulation, they are associated with a higher risk of moderate to severe OSA.
Furthermore, in major depression and OSA, there are arguments for the presence of chronic systemic inflammation resulting in higher levels of C-reactive protein (CRP) and ferritin [28][29][30]. In addition, OSA severity is correlated with the markers of this chronic inflammation [31,32] that have never been studied as a predictor of OSA in either the general population or those with major depression. Therefore, it would be interesting to investigate if the presence of an inflammatory syndrome is associated with a higher risk of moderate to severe OSA in the subpopulation of major depressed individuals.
Our first objective is to investigate the actual prevalence of moderate to severe OSA in the particular subpopulation of individuals with major depression. Our second objective is to identify in this subpopulation, the specific risk factors of moderate to severe OSA. To achieve these goals, we recruited a large sample of major depressed individuals that we divided into a control group without moderate to severe OSA and a patient group with moderate to severe OSA. The aim of this approach is to enable health professionals who treat those with major depression to reference reliable data concerning this particular problem in this subpopulation and to better identify those at risk of moderate to severe OSA, a diagnosis currently made difficult by the existence of an overlap between symptoms of major depression and OSA.
Population
The 703 individuals with major depression were recruited from the database of the sleep laboratory of Erasme Hospital, which contains data for 3511 individuals who completed sleep laboratory monitoring in the years 2002-2014. In our study, we did not recruit subjects without major depression because our objective is to focus on the subpopulation of those with major depression where the existence of an overlap between the symptoms of major depression and OSA makes the diagnosis of this syndrome more difficult. Physicians specializing in sleep medicine referred these individuals to the sleep laboratory following an ambulatory consultation to evaluate their complaint of poor sleep and their depressive affects.
The inclusion criteria were age ≥ 18 years and the presence of a major depressive episode meeting the diagnostic criteria of the Diagnostic and Statistical Manual of Mental Disorders fourth edition -Text Revision (DSM IV-TR) [33]. The exclusion criteria were presence of a psychiatric disorder other than major depression, presence of uncontrolled heavy somatic disease, presence of chronic pulmonary disease, presence of inflammatory or infectious disease, presence or history of cranial trauma, presence or history of central nervous system injury that could involve respiratory centres in the brain, presence or history of craniofacial or thoracic cavity malformations, presence of pregnancy, presence of OSA already known or course of treatment before sleep laboratory, presence of predominantly central apnea syndrome, presence of narcolepsy or primary hypersomnia, presence of parasomnia, and presence or history of substance abuse.
Medical and psychiatric evaluation of participants
All subjects upon admission to the sleep laboratory of Erasme Hospital had their medical records reviewed and a complete somatic check-up performed, including a blood test, electrocardiogram, a daytime electroencephalogram, urinalysis, and a chest X-ray (only for those over age 45). These steps allowed for a systematic diagnosis of potential somatic pathologies present in people admitted to our unit.
Metabolic syndrome was diagnosed when three or more of the following criteria were fulfilled: fasting blood glucose ≥100 mg/dl or receiving treatment for diabetes mellitus, blood pressure ≥ 135/85 mmHg or receiving antihypertensive drug treatment, serum triglycerides ≥150 mg/dl, serum HDL-Cholesterol <40 mg/dl or receiving treatment for dyslipidemia, and waist circumference ≥ 94 cm for men or ≥80 cm for women [34,35].
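A direct transcription of these diagnostic criteria into a small helper function might look as follows; this is only a sketch, and the field names are hypothetical rather than taken from the study database.

```python
def has_metabolic_syndrome(p):
    """p: dict with the subject's measurements and treatment flags.
    Returns True when three or more of the five criteria are fulfilled."""
    criteria = [
        p["fasting_glucose_mg_dl"] >= 100 or p["diabetes_treatment"],
        p["systolic_bp"] >= 135 or p["diastolic_bp"] >= 85 or p["antihypertensive_treatment"],
        p["triglycerides_mg_dl"] >= 150,
        p["hdl_mg_dl"] < 40 or p["dyslipidemia_treatment"],
        p["waist_cm"] >= (94 if p["sex"] == "M" else 80),
    ]
    return sum(criteria) >= 3
```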
Patients also benefited on the day of admission from an appointment with a unit psychiatrist who potentially assigned psychiatric diagnoses per the DSM IV-TR criteria [33] to exclude subjects with psychiatric disorders other than major depression.
On admission, patients completed a series of self-questionnaires to assess the severity of their subjective complaints of depression, poor sleep, and excessive daytime sleepiness as follows:
-The presence of depressive symptoms was investigated using the Beck Depression Inventory (BDI reduced to 13 items). This scale consists of 13 items that can be scored from 0 to 3. The final score can vary from 0 to 39. A final score of 0-4 indicates an absence of depression, 5-7 a slight depression, 8-15 a moderate depression, and ≥16 severe depression [36].
-Daytime sleepiness was investigated using the Epworth scale. This scale consists of eight questions that can be scored from 0 to 3 and assesses sleepiness during different daytime situations. The final score varies from 0 to 24. A final score greater than 10 indicates excessive daytime sleepiness [37].
-The presence of insomnia symptoms was investigated using the Insomnia Severity Index (ISI). This index consists of seven questions that can be scored from 0 to 4. The final score can vary from 0 to 28. A score of 0-7 indicates a lack of insomnia, 8-14 subclinical insomnia, 15-21 moderate insomnia, and 22-28 severe insomnia [38].
To avoid missing values, individuals who did not respond fully to these questionnaires were not included in our study.
Sleep evaluation and study
A psychiatrist of the unit conducted a specific interview focused on sleep on the day of admission to complete an assessment of complaints related to sleep.
Participants stayed in a sleep laboratory for two nights, including a first night of habituation and a night of polysomnography from which the data were collected for analysis. The patients went to bed between 22:00-24:00 and got up between 6:00-8:00, following their usual schedule. During bedtime hours, the subjects were recumbent and the lights were turned off. Daytime naps were not permitted.
The polysomnographic recordings from our unit met the guidelines of the American Academy of Sleep Medicine (AASM) [39]. The applied polysomnographymontage was as follows: two electro-oculogram channels, three electroencephalogram channels (Fz-Ax, Cz-Ax, and Oz-Ax, where Ax was a contralateral mastoid reference), one submental electromyogram channel, electrocardiogram, thermistors to detect the oro-nasal airflow, finger pulse-oximetry, a microphone to record breathing sounds and snoring, piezoelectric sensors and leg movement electrodes. In addition, the applied polysomnography-montage also included strain gauges to measure thoracic and abdominal breathing. Polysomnographic recordings were visually scored by specialized technicians using AASM criteria [40] (inter-judge agreement score of 85%).
Apneas were scored if the decrease in airflow was ≥90% for at least 10 s, and hypopneas were scored if the decrease in airflow was ≥30% for at least 10 s together with a decrease in oxygen saturation of ≥3% or followed by a micro-arousal [41]. The AHI corresponds to the total number of apneas and hypopneas divided by the period of sleep in hours. OSA was considered moderate to severe when the AHI was ≥15/h [4].
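In other words, the index and the severity cut-off used here amount to the following; the event counts in the usage example are illustrative only.

```python
def apnea_hypopnea_index(n_apneas, n_hypopneas, sleep_period_hours):
    """AHI = (apneas + hypopneas) / hours of sleep."""
    return (n_apneas + n_hypopneas) / sleep_period_hours

ahi = apnea_hypopnea_index(n_apneas=55, n_hypopneas=65, sleep_period_hours=7.0)
print(ahi, "moderate-to-severe OSA" if ahi >= 15 else "AHI < 15/h")
```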
Statistical analyses
Statistical analyses were performed using Stata 14. The normal distribution of the data was verified using histograms, boxplots, and quantile-quantile plots, and the equality of variances was checked using the Levene's test.
We divided our sample of major depressed subjects into a control group without moderate to severe OSA and a patient group with moderate to severe OSA.
Categorical data were described with percentages and numbers, and continuous data were described with means and SD or medians and P25-P75. Normally distributed variables were analysed with a t-test. A Wilcoxon test or a chi-square test was used for asymmetrically distributed or dichotomous variables.
The automatic selection of risk factors in the model was performed using a stepwise backward method with an entry threshold of 0.05 and an exit threshold of 0.1. The adequacy of the model was verified by the Hosmer and Lemeshow test and the specification of the model by the link test. The number of subjects per risk factor, outliers, and collinearity between risk factors, which may cause problems, were also checked.
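A backward-elimination loop of this kind can be sketched as follows. It uses Python's statsmodels rather than the Stata routine employed in the study, only the backward step with the 0.1 exit threshold is shown, and the data frame and column names in the commented usage are hypothetical.

```python
import statsmodels.api as sm

def stepwise_backward(df, outcome, predictors, p_exit=0.10):
    """Iteratively drop the predictor with the largest p-value above p_exit
    from a logistic regression, refitting after each removal."""
    kept = list(predictors)
    while kept:
        X = sm.add_constant(df[kept])
        fit = sm.Logit(df[outcome], X).fit(disp=False)
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= p_exit:
            return fit, kept
        kept.remove(worst)
    return None, []

# Usage (hypothetical columns):
# final_fit, selected = stepwise_backward(
#     data, "osa_moderate_severe",
#     ["male", "snoring", "ess_gt_10", "isi_lt_15", "metabolic_syndrome",
#      "age_ge_50", "bmi_gt_30", "ferritin_gt_300", "crp_gt_7", "sleep_ge_8h"])
```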
A p-value of less than 0.05 was considered significant.
Results
Demographic data (Table 1)
Male gender, snoring, metabolic syndrome, Z-drug use and alcohol consumption are more frequent in subjects with AHI ≥15/h. These subjects also present a greater age, BMI and ESS score, and lower BDI and ISI scores, than subjects with AHI <15/h. Markers of chronic inflammation, such as CRP and ferritin, are higher in moderate to severe OSA. There was no significant difference in benzodiazepine use, antidepressant therapy, smoking, or duration of sleep ≥8 h.
Prevalence of moderate to severe OSA in major depression (Table 1)
The prevalence of moderate to severe OSA in our sample of 703 individuals with major depression is 13.94% (n = 98).
Multivariate analysis (Table 3)
In major depression, the risk factors significantly associated with an increased risk of moderate to severe OSA and obtained by the automatic selection method (stepwise backward) were male gender, snoring, ESS score >10, ISI score <15, metabolic syndrome, age ≥50 years, BMI >30 kg/m², ferritin >300 μg/L, CRP >7 mg/L, and duration of sleep ≥8 h.
Discussion
In our sample of individuals with major depression, we demonstrated a prevalence of moderate to severe OSA of 13.94%, which highlights the importance of this problem to the healthcare professionals treating this particular subpopulation of patients. This prevalence is similar to that of the general population [13], but lower than the 39% reported in the study of Ong et al. [12] However, in that study, the sample was relatively small and, to be included, individuals had to present with insomnia meeting the diagnostic criteria of the DSM-IV-TR [33]. These diagnostic criteria include difficulty initiating or maintaining sleep, non-restorative sleep, and clinically significant distress or impairment in social, occupational, or other important areas of functioning, all of which are also symptoms of OSA [2]; this may result in greater recruitment of patients with major depression and moderate to severe OSA and could explain the difference in prevalence from our study. Moreover, although the prevalence of moderate to severe OSA in major depression appears to be similar to that of the general population as indicated by our results, the existence of an overlap between the symptoms of major depression and OSA [42] as well as non-compliance with medical treatment in individuals with major depression [43] may lead to the under-diagnosis of moderate to severe OSA in major depression [3]. However, moderate to severe OSA is associated with increased cardiovascular morbidity and mortality [44], which justifies the implementation of effective treatment [45]. Therefore, in individuals with major depression, it is important to identify the specific risk factors for moderate to severe OSA to enhance the detection and management of this syndrome and reduce cardiovascular complications for these individuals. As in the general population [14], we found that male gender, age ≥ 50 years, and BMI ≥30 kg/m² are risk factors for moderate to severe OSA in major depression, which seems to confirm the results of Ong et al. [12] Furthermore, although snoring is a risk factor for mild to severe OSA in major depression [18], it has not been studied specifically for moderate to severe OSA. Nevertheless, in our study, we have demonstrated that, as in the general population [14], snoring is also a risk factor for moderate to severe OSA in individuals with major depression. We therefore confirmed with a large sample that the classical risk factors for moderate to severe OSA in the general population are applicable to the subpopulation of individuals with major depression, which seems to confirm the results of preliminary studies involving smaller samples of individuals with major depression [12,18].
In the general population, there is a special relationship between OSA and metabolic syndrome. Indeed, subjects with a metabolic syndrome have a higher risk of severe OSA [17], and individuals with moderate to severe OSA have a higher risk of metabolic syndrome [46,47]. In addition, the prevalence of metabolic syndrome increases with the severity of OSA [48]. However, in major depression, no studies have investigated the relationship between OSA and metabolic syndrome, two syndromes that frequently present in individuals with major depression [11,49]. Another risk factor for moderate to severe OSA found in the general population, but not studied in major depression, is sleep duration ≥8 h [16]. However, in our study, we demonstrated that, as in the general population, metabolic syndrome and a sleep duration ≥8 h are risk factors for moderate to severe OSA in individuals with major depression. Despite two meta-analyses [25,26], data in the literature on the use of excessive daytime sleepiness measured by ESS as a predictor of moderate to severe OSA in the general population are contradictory. Yet, in mental pathologies, including major depression, the use of ESS as a risk factor for moderate to severe OSA does not seem to be recommended, as demonstrated in the study of Nikolakaros et al. [18] However, in this study, the sample size was small and it did not consist solely of individuals with major depression. In OSA, there is a particular relationship between daytime sleepiness measured by ESS and major depression. Indeed, in subjects with OSA, the presence of excessive daytime sleepiness is associated with a greater risk of depression [50], whereas depressive symptoms contribute significantly to excessive daytime sleepiness [51]. These elements help to explain why we have shown that excessive daytime sleepiness is a risk factor for moderate to severe OSA in major depression. Although some studies show a positive correlation between AHI and the severity of depression [27,52], we have demonstrated a finding similar to Ong et al. [10], where subjects with an AHI ≥15/h had a lower self-reported severity of depression than subjects with AHI <15/h, and that the self-reported severity of depression is not a risk factor for moderate to severe OSA in major depression. In addition, Bjorvatn et al. [53] have shown that the prevalence of insomnia complaints decreased when the severity of OSA increased, which enhances the understanding of our results. Indeed, in our study, we have shown that individuals with major depression and lower self-reported complaints of insomnia had a greater risk of moderate to severe OSA. Therefore, in the subpopulation of those with major depression, excessive daytime sleepiness and lower insomnia complaints are risk factors for moderate to severe OSA, unlike the self-reported severity of depression.
In OSA and major depression, there are arguments in favour of chronic inflammation, which may be correlated with the severity of OSA [31,32] and which may result in higher plasma levels of CRP and ferritin [54][55][56]. Despite the special relationship between chronic inflammation and depression/OSA, plasma CRP and ferritin levels have never been studied as a risk factor for OSA in the general population or individuals with major depression. However, in our study, we found that the presence of chronic inflammation in a subpopulation of individuals with major depression was a risk factor for moderate to severe OSA, which advances new perspectives in understanding the relationship between OSA and major depression.
Antidepressants may partially improve OSA by suppressing REM sleep and increasing upper airway tone [57]. However, in our study, we demonstrated that antidepressants are neither a risk factor nor a protective factor for moderate to severe OSA in individuals with major depression. This can be explained by the fact that we did not distinguish between the different classes of antidepressants, which may mask the protective or deleterious effect of certain molecules on respiration. We found that benzodiazepines and Z-drugs are not risk factors for moderate to severe OSA, which seems to confirm the results of the meta-analysis of Mason et al. [23] However, we excluded subjects with dependence and therefore with an overconsumption of these molecules. At low doses, benzodiazepines and Z-drugs are generally safe for nocturnal breathing, but at high doses they may cause or aggravate sleep apnea in some more fragile patients [57]. Thus, taking benzodiazepines and Z-drugs at the recommended dose, at least in the subpopulation of individuals with major depression, is not a risk factor for moderate to severe OSA.
The role of smoking in the occurrence of obstructive apnea is controversial in the literature [58,59]. It appears that nicotine decreases the resistance of the upper airways, with a consequent reduction in the risk of OSA, whereas during withdrawal this resistance increases and causes a greater risk of OSA [60]. Nevertheless, a protective effect of smoking against OSA has yet to be investigated. In our study, we found that smoking is not a risk factor for moderate to severe OSA in the subpopulation of individuals with major depression. This may be explained by the fact that we included only active smokers who did not undergo nicotine withdrawal during their night in the sleep laboratory. Further, in the literature, alcohol is a recognized risk factor for OSA. In fact, it induces a decrease in the tone of the upper airway muscles, which may increase the frequency and severity of obstructive apneas in subjects with OSA [61]. Similarly, we demonstrated that alcohol consumption is not a risk factor for moderate to severe OSA in the subpopulation of individuals with major depression. This difference from the literature can be explained by the fact that none of the subjects included in our study had alcohol dependence and thus could stop their habitual consumption of alcohol during the night in the sleep laboratory without consequence, thereby avoiding its deleterious effects on nocturnal breathing. In the future, prospective studies should be conducted in the subpopulation of individuals with major depression to validate the risk factors for moderate to severe OSA highlighted in our study. In addition, it would be useful to develop a score based on these risk factors to better identify those at risk of moderate to severe OSA.
Limitations
The results of our study come from retrospective data that, even though they were encoded in a systematic manner, could not in most cases be verified directly with the subject, which means that our results need to be replicated in prospective studies. Further, we used an automatic selection of risk factors by a backward stepwise method, which has known limitations (see http://www.stata.com/support/faqs/statistics/stepwise-regression-problems). Moreover, we focused only on moderate to severe OSA, which means that our results cannot be generalized to other sleep-related breathing disorders, such as central apnea syndrome. Finally, as we included only patients with major depression and without other psychiatric comorbidities, our results are not generalizable to all individuals with major depression or to other psychiatric disorders.
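To make the concern about automatic selection concrete, the following is a minimal sketch of a generic backward stepwise elimination by p-value. It is illustrative only: the authors used Stata's stepwise routine, and the function name, the 0.05 threshold, and the use of a logistic model here are our assumptions.

```python
# Illustrative sketch of backward stepwise elimination by p-value (not the authors' Stata code).
import statsmodels.api as sm

def backward_stepwise(y, X, p_threshold=0.05):
    """Drop the least significant predictor and refit until all p-values < p_threshold.
    y: binary outcome array; X: pandas DataFrame of candidate predictors."""
    cols = list(X.columns)
    while cols:
        model = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()                 # predictor with the largest p-value
        if pvals[worst] < p_threshold:
            return model, cols                 # all remaining predictors are "significant"
        cols.remove(worst)                     # remove the weakest predictor and refit
    return None, []
```

Because the final model is chosen by repeatedly testing on the same data, its p-values and confidence intervals tend to be over-optimistic, which is the kind of problem discussed at the link above.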
Conclusion
We demonstrated in a large sample of individuals with major depression that the prevalence of moderate to severe OSA was 13.94%, and that the classical risk factors for moderate to severe OSA (male gender, age ≥ 50 years, BMI ≥ 30 kg/m², and snoring) were applicable to this particular subpopulation. We also found that the presence of metabolic syndrome, sleep duration ≥8 h, excessive daytime sleepiness, lower insomnia complaints, and markers of chronic inflammation (CRP and ferritin) were also risk factors for this syndrome in the subpopulation of individuals with major depression, unlike self-reported severity of depression, antidepressant therapy, smoking, alcohol consumption, and benzodiazepine and Z-drug use.
Highlights
The prevalence of moderate to severe OSA in major depression is 13.94%.
Male gender, age ≥ 50 years, BMI ≥ 30 kg/m², snoring, presence of metabolic syndrome, sleep duration ≥8 h, excessive daytime sleepiness, lower insomnia complaints, and markers of chronic inflammation (CRP and ferritin) were risk factors for moderate to severe OSA in major depression.
These risk factors open up a new perspective for more effective screening of moderate to severe OSA in major depression.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Authors' contributions
MH: principal investigator of the study with active participation in the encoding of data, statistical analysis, interpretation of results and writing of the article. JL: Active participation in the extraction and calculation of data from polysomnography for the realization of the database. GL: Support in the English translation of the manuscript and supervised the research work as a thesis promoter. PL: Support in drafting the manuscript and supervision of the research work as a thesis co-promoter. PH: Support in drafting the manuscript and supervision of research work as a member of the accompanying thesis committee. All authors read and approved the final manuscript.
Ethics approval and consent to participate
This research protocol was approved by the Hospital and Medical School Ethics Committee of the Erasme Hospital (Brussels University Clinics) (Erasme Reference: P2017/119). At Erasme Hospital, all patients are informed that their data could be used retrospectively for scientific research. If patients do not wish for their data to be used, they must inform the hospital, at which time, this directive is indicated in their medical records, and any future use of their data is prohibited.
Consent for publication
Not applicable. | 2017-12-07T09:32:15.622Z | 2017-12-04T00:00:00.000 | {
"year": 2017,
"sha1": "812f0d40e38c86d3211831f9a55f599338f3a66b",
"oa_license": "CCBY",
"oa_url": "https://bmcpulmmed.biomedcentral.com/track/pdf/10.1186/s12890-017-0522-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "812f0d40e38c86d3211831f9a55f599338f3a66b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238218102 | pes2o/s2orc | v3-fos-license | Job strain and effort–reward imbalance as risk factors for type 2 diabetes mellitus: A systematic review and meta-analysis of prospective studies
Objectives This systematic review and meta-analysis aimed to synthesize the available data on prospective associations between work-related stressors and the risk of type 2 diabetes mellitus (T2DM) among adult workers, according to the demand–control–support (DCS) and the effort–reward imbalance (ERI) models. Method We searched for prospective studies in PubMed, EMBASE, Web of Science, Scopus, CINAHL and PsycINFO. After screening and extraction, quality of evidence was assessed using the ROBINS-I tool adapted for observational studies. The effect estimates extracted for each cohort were synthesized using random-effects models. Results We included 18 studies (reporting data on 25 cohorts) in meta-analyses for job strain, job demands, job control, social support at work and ERI. Workers exposed to job strain had a higher risk of developing T2DM when compared to unexposed workers [pooled rate ratio (RR) 1.16, 95% confidence interval (CI) 1.07–1.26]. This association was robust in several supplementary analyses. For exposed women relative to unexposed women, the RR was 1.35 (95% CI 1.12–1.64). The RR of workers exposed to ERI was 1.24 (95% CI 1.08–1.42) compared to unexposed workers. Conclusions This is the first meta-analysis to find an effect of ERI on the incidence of T2DM. It also confirms that job strain increases the incidence of T2DM, especially among women.
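For readers who want to see what the random-effects synthesis step amounts to, the following is a minimal sketch of inverse-variance pooling of log rate ratios with the DerSimonian-Laird between-study variance estimator, one common random-effects approach. The function name and the numbers in the example are invented and are not taken from this review.

```python
# Minimal sketch of random-effects pooling of rate ratios (DerSimonian-Laird); example inputs are made up.
import numpy as np

def pool_random_effects(rr, ci_low, ci_high):
    """Pool rate ratios given their 95% CIs; returns (pooled RR, lower CI, upper CI)."""
    log_rr = np.log(rr)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)    # SE recovered from the CI width
    w = 1 / se**2                                            # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)                    # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(rr) - 1)) / c)                 # between-study variance
    w_star = 1 / (se**2 + tau2)                              # random-effects weights
    pooled = np.sum(w_star * log_rr) / np.sum(w_star)
    se_pooled = np.sqrt(1 / np.sum(w_star))
    return (np.exp(pooled),
            np.exp(pooled - 1.96 * se_pooled),
            np.exp(pooled + 1.96 * se_pooled))

# Example with invented cohort estimates (not the studies in this review):
print(pool_random_effects(np.array([1.1, 1.3, 0.9]),
                          np.array([0.8, 1.0, 0.6]),
                          np.array([1.5, 1.7, 1.3])))
```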
Supplementary Material 2. Correspondence to: Ana Paula Bruno
Supplementary
List the confounding domains relevant to all or most studies
Major confounding domains (for which we want the analyses to be compulsorily adjusted): Socio-economic status (ideally education or income, but we also accept occupation), Age and Sex.
Additional confounding domains, but optional (we use the most adjusted model without including intermediate domains): Work Environment Factors, Family Charge, Stressful Events, Out of Work Social Support, Gender Confounding.
Intermediate domains (should not be adjusted for): Body mass index (BMI), Lifestyle factors, Comorbidities, hours worked per week, multiple jobs.
List co-interventions that could be different between intervention groups and that could impact on outcomes
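The domain lists above can also be written down as a simple data structure for screening each study's adjustment set. The structure, names and helper function below are ours; only the domain content comes from the protocol text, and the exact grouping of "gender confounding" is an assumption.

```python
# Restating the protocol's confounder checklist as data (structure and names are ours).
CONFOUNDER_DOMAINS = {
    "major_required": {"socio-economic status", "age", "sex"},
    "additional_optional": {"work environment factors", "family charge", "stressful events",
                            "out-of-work social support", "gender confounding"},
    "intermediate_do_not_adjust": {"body mass index", "lifestyle factors", "comorbidities",
                                   "hours worked per week", "multiple jobs"},
}

def check_adjustment(adjusted_for):
    """Return the major domains missing from a study's adjustment set and any
    intermediate domains it adjusted for (both should ideally be empty)."""
    adjusted = {a.lower().strip() for a in adjusted_for}
    missing_major = CONFOUNDER_DOMAINS["major_required"] - adjusted
    bad_intermediates = CONFOUNDER_DOMAINS["intermediate_do_not_adjust"] & adjusted
    return missing_major, bad_intermediates

# Example: a study adjusted for age, sex and BMI but not socio-economic status.
print(check_adjustment(["age", "sex", "body mass index"]))
```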
Specify the outcome
Specify which outcome is being assessed for risk of bias (typically from among those earmarked for the Summary of Findings table). Specify whether this is a proposed benefit or harm of intervention.
Specify the numerical result being assessed
In case of multiple alternative analyses being presented, specify the numeric result (e.g. RR = 1.52 (95% CI 0.83 to 2.77) and/or a reference (e.g. to a table, figure or paragraph) that uniquely defines the result being assessed.
Preliminary consideration of confounders
Complete a row for each important confounding domain (i) listed in the review protocol; and (ii) relevant to the setting of this particular study, or which the study authors identified as potentially important.
"Important" confounding domains are those for which, in the context of this study, adjustment is expected to lead to a clinically important change in the estimated effect of the intervention. "Validity" refers to whether the confounding variable or variables fully measure the domain, while "reliability" refers to the precision of the measurement (more measurement error means less reliability).
Number of hours worked
Multiple jobs
* In the context of a particular study, variables can be demonstrated not to be confounders and so not included in the analysis: (a) if they are not predictive of the outcome; (b) if they are not predictive of intervention; or (c) because adjustment makes no or minimal difference to the estimated effect of the primary parameter. Note that "no statistically significant association" is not the same as "not predictive".
Risk of bias assessment
Responses underlined in green are potential markers for low risk of bias, and responses in red are potential markers for a risk of bias. Where questions relate only to sign posts to other questions, no formatting is used.
YES: Always yes in our case, except with a cohort of new workers, or when a cohort of participants who would all be exposed, or all unexposed, at recruitment has been selected and the change in exposure over time is analyzed.
YES: Always yes, because we can expect more exposed people to leave work before the start of the study, or to participate less in the study.
YES: Always yes, because we can expect more exposed people to leave work before the start of the study, or to participate less in the study.
NA / Y / PY / PN / N / NI
2.4. Do start of follow-up and start of intervention coincide for most participants?
NO: Due to our field of study, this answer should always be no, except with a cohort of new workers or having selected a cohort of participants who would all be exposed or all unexposed at recruitment, and analyzed the change in exposure over time.
NO: Always no, because we never know the characteristics of the participants before the start of the study.
Note:
In occupational studies, start of follow-up and start of exposure rarely coincide. For this reason, we chose to set the risk of bias in selection of participants into the study to at least a moderate level for this criterion. However, this criterion will not be considered in the other levels in order to keep a gradation in this risk of bias.
Low: Never, due to point (ii). (i) All participants who would have been eligible for the target trial were included in the study; and (ii) for each participant, start of follow-up and start of intervention coincided.
Moderate: Participation rates of ≥80%, or ≥70% with a comparison showing that refusals are similar to those included for age, sex and socio-economic status, or for exposure and outcome. (i) Selection into the study may have been related to intervention and outcome, and the authors used appropriate methods to adjust for the selection bias; or (ii) start of follow-up and start of intervention does not coincide for all participants, and (a) the proportion of participants for which this was the case was too low to induce important bias (90% participation), or (b) the authors used appropriate methods to adjust for the selection bias, or (c) the review authors are confident that the rate (hazard) ratio for the effect of intervention remains constant over time.
Serious: Participation rates between 60-80%, or 50-60% with a comparison showing that refusals are similar to those included for age, sex and socio-economic status, or for exposure and outcome. (i) Selection into the study was related (but not very strongly) to age, sex and socio-economic status or to the intervention and outcome, and this could not be adjusted for in analyses; or (ii) start of follow-up and start of intervention does not coincide, a considerable amount of follow-up time is missing from analyses, and the rate ratio is not constant over time.
Low / Moderate / Serious / Critical / NI
Critical: Participation rates of <60%, or <50% with a comparison showing that refusals are similar to those included for age, sex and socio-economic status, or for exposure and outcome. (i) Selection into the study was very strongly related to age, sex and socio-economic status or to the intervention and outcome, and this could not be adjusted for in analyses; or (ii) a substantial amount of follow-up time is likely to be missing from analyses, and the rate ratio is not constant over time.
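The participation-rate thresholds above amount to a small decision rule, summarized in the sketch below. The function and argument names are invented; only the cut-offs are taken from the protocol text, and Low is never assigned because, as noted, start of follow-up and start of exposure rarely coincide in occupational cohorts.

```python
# Hedged sketch of the participation-rate rule of thumb for the selection-of-participants domain.
def selection_bias_level(participation, refusals_shown_similar=False):
    """Map baseline participation (0-1) to a risk-of-bias level for this domain."""
    cutoff_moderate = 0.70 if refusals_shown_similar else 0.80
    cutoff_serious = 0.50 if refusals_shown_similar else 0.60
    if participation >= cutoff_moderate:
        return "Moderate"
    if participation >= cutoff_serious:
        return "Serious"
    return "Critical"

# e.g. 63% participation without a comparison of refusals -> "Serious"
print(selection_bias_level(0.63))
```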
Optional: What is the predicted direction of bias due to selection of participants into the study?
Favours experimental / Favours comparator / Towards null / Away from null / Unpredictable
Bias in classification of interventions
3.1 Were intervention groups clearly defined?
YES: Exposure must have been measured by a validated tool based on one of the two models studied. The validity must have been demonstrated in a study of the psychometric qualities of the instrument (internal consistency, factorial validity, predictive validity and discriminant validity). Note: if the tool used is an original validated tool but the translation has not been validated, the intervention is considered well defined, but with a moderate level of risk.
NO: Exposure measured with a proxy or a translation whose validation has not been demonstrated, or by using different questionnaires from one participant to another; or exposure measured by a matrix based on job titles or on the responses of colleagues in the same work unit, as there is a risk of significant misclassification.
Bias due to deviations from intended interventions: NA: Hard to apply in our field of research. Exposure deviations are almost always natural and expected, unless there is an intervention by a researcher that is differential depending on the level of exposure. This criterion will always be at a moderate level of risk. Therefore, it is not systematically evaluated in the included studies. If your aim for this study is to assess the effect of assignment to intervention, answer questions 4.1 and 4.
Risk of bias judgment
Optional: What is the predicted direction of bias due to deviations from the intended interventions?
Bias due to missing data: NOTE: Here, missing participant data are evaluated starting at recruitment, excluding the rate of participation at recruitment, which has been taken into account in the selection bias analysis.
YES: if a sensitivity analysis was performed to account for missing data (multiple imputation, inverse probability weighting) and the results are similar to the main analysis, or the results are different but the interpretation is based on the sensitivity analysis rather than the main analysis.
NO: if no sensitivity analysis is done for missing data.
Risk of bias judgment
Low: (i) Data were reasonably complete (95%, or 90% with a demonstration that included and missing participants are similar, or an analysis was done for missing data); or (ii) proportions of and reasons for missing participants were similar across intervention groups; or (iii) the analysis addressed missing data and is likely to have removed any risk of bias.
Moderate (between 94% (or 89%) and 80% at follow-up; can go down to 75% if a comparison shows that included and missing participants are similar): (i) Proportions of and reasons for missing participants differ slightly across intervention groups; and (ii) the analysis is unlikely to have removed the risk of bias arising from the missing data.
Serious (between 79% (or 74%) and 50% at follow-up with comparison): (i) Proportions of missing participants differ substantially across interventions, or reasons for missingness differ substantially across interventions; and (ii) the analysis is unlikely to have removed the risk of bias arising from the missing data, or missing data were addressed inappropriately in the analysis, or the nature of the missing data means that the risk of bias cannot be removed through appropriate analysis.
Critical (<50%): (i) (Unusual) There were critical differences between interventions in participants with missing data; and (ii) missing data were not, or could not, be addressed through appropriate analysis.
Bias in measurement of outcomes: Critical. High fasting glucose was defined by a plasma glucose level of >100 mg/dL (5.6 mmol/L).
Type of bias: Classification. Reason/explanation
Bias due to confounding: Serious. Adjusted for age, sex, education level and post-intervention variables that could have been affected by the intervention (chronic medical conditions).
Bias in selection of participants into the study: Serious. Participation at baseline was 63% and total non-response was handled by adjusting the weight of households that responded to the survey to compensate for those who did not respond.
Bias in classification of interventions: Serious. Short version of the job demands scale; partial validation with low Cronbach's α.
Bias due to missing data: Low. Complete data for 95% of baseline participants were included in the analyses.
Bias in measurement of outcomes: Low. Obtained objectively by register (administrative data and physician diagnoses).
Type of bias: Classification. Reason/explanation
Bias due to confounding: Serious. Adjusted for age and sex, no adjustment for socioeconomic factors.
Bias in selection of participants into the study: Serious. Participation at baseline was 73% without comparison between participants and non-participants.
Bias in classification of exposure: Moderate. Intervention status is well defined. Shorter version of demand scale was validated with good α.
Bias due to missing data: Moderate. Complete data for 82% of baseline participants were included in the analyses. Authors provide comparison between included and missing participants showing that those lost to follow-up are rather different in terms of exposure, age and/or sex. No imputation was done.
Bias in measurement of outcomes: Moderate. The methods of outcome assessment were comparable across intervention groups, but some of the cases were ascertained objectively by clinical evaluation, while some were ascertained by self-reported questionnaire.
Overall: Serious
Type of bias Classification Reason -explanation
Bias due to confounding Serious Stratified by sex, adjusted for age, employment grade and a post-intervention variable that could have been affected by the intervention (diet pattern). Bias in selection of participants into the study
Serious
Participation at baseline was 73% without comparison between participants and nonparticipants Bias in classification of interventions
Moderate
Intervention status is well defined. Shorter version of demand scale was validated with good α.
Bias due to missing data Serious Complete data for 72% of baseline participants were included in the analyses. Authors provide comparison between included and missing participants showing that those lost to follow-up are rather different in terms of exposure, SES, age and sex. No treatment for missing data was done. Bias in measurement of outcomes
Moderate
The methods of outcome assessment were comparable across intervention groups, but some of the cases were ascertained objectively by clinical evaluation, while some were ascertained by self-reported questionnaire.
Overall Serious
Hino 2016
Type of bias Classification Reason -explanation
Bias due to confounding Moderate Restricted by sex (men). Adjusted for age, marital status, job department, employment position and occupation. Bias in selection of participants into the study
Critical
Participation at baseline was 21% without comparison between participants and nonparticipants Bias in classification of interventions
Moderate
Questionnaire validated in Japanese workers for internal consistency.
Bias due to missing data Critical Proportions of missing participants differ substantially across interventions: 43% of the baseline participants included in the analysis, without comparison between included and missing participants. Bias in measurement of outcomes
Critical
Definition very wide, including diabetes defined by HOMA-IR, which is not a method recommended by the ADA.
Overall Critical
Huth 2014
Type of bias Classification Reason -explanation
Bias due to confounding Moderate Adjusted for age, sex, physical intensity at work: low, moderate, high. Education was coded as binary variable. Bias in selection of participants into the study
Serious
Participation at baseline was 75% without comparison between participants and nonparticipants. Bias in classification of interventions
Low
Validated version of questionnaire.
Bias due to missing data Serious Complete data for 73% of baseline participants were included in the analyses, without comparison between included and missing participants. Bias in measurement of outcomes
Moderate
The methods of outcome assessment were comparable across intervention groups. Selfreported T2DM and the date of diagnosis were validated by hospital records or by contacting the participants' treating physicians.
Overall Serious
Kawakami 1999
Type of bias Classification Reason -explanation
Bias due to confounding Serious Restricted by sex (men), adjustment for age, education level, occupation, use of technology, leisure time and physical activity, family history of diabetes and a post-intervention variable that could have been affected by the intervention (BMI). Bias in selection of participants into the study
Moderate
Participation at baseline was 92% without comparison between participants and nonparticipants.
Serious
Very short questionnaire with one item for each dimension, not validated.
Bias due to missing data Serious Complete data for 77% of baseline participants were included in the analyses without comparison between included and missing participants. Bias in measurement of outcomes
Moderate
Obtained objectively by clinical evaluation, low risk of false positive outcomes. Some risk of false negative outcomes due to triage by urine insulin, but this risk is lower because the same test had been conducted annually for 12 years before baseline (exclusion of prevalent cases) and each year during follow-up (incident cases).
Type of bias Classification Reason -explanation
Bias due to confounding Moderate Restricted by sex (women) and profession (nurses), adjusted for age. Bias in selection of participants into the study
Serious
Participation at baseline was 75% without comparison between participants and nonparticipants. Bias in classification of interventions
Low
Job strain was measured by the well-validated 27-item Karasek Job Content Questionnaire.
Bias due to missing data Serious Complete data for 73% of baseline participants were included in the analyses, with a comparison between included and missing participants that shows they are similar for all three important confounders, for exposure and for outcome. Bias in measurement of outcomes
Moderate
Self-reported diabetes with validation (98%) in a sub-sample.
Overall Serious
Kumari 2004
Type of bias Classification Reason -explanation
Bias due to confounding Serious Adjusted for age, length of follow-up, employment grade, ethnic group and a post-intervention variable that could have been affected by the intervention (ECG abnormalities). Bias in selection of participants into the study
Serious
Participation at baseline was 73% without comparison between participants and nonparticipants. Bias in classification of interventions DC (Moderate); ERI: (Serious) DC model: Intervention status is well defined; a slightly shorter version of the demand scale was validated with good α.
ERI model: Unknown number of items. According to Bosma et al 1998: "As there was no original measurement of effort-reward imbalance at phase 1, proxy measures (available from the authors) had to be constructed for the crucial components of the model." Bias due to missing data Moderate Complete data for 82% of baseline participants were included in the analyses, without a comparison between included and missing participants. Bias in measurement of outcomes
Moderate
The methods of outcome assessment were comparable across intervention groups, but some of the cases were ascertained objectively by clinical evaluation, while some were ascertained by self-reported questionnaire. Social support: only two questions without validation.
Overall Serious
Bias due to missing data Critical Complete data for 51% of baseline participants were included in the analyses, without a comparison between included and missing participants. Bias in measurement of outcomes
Serious
The methods of outcome assessment were comparable across intervention groups, but ascertained by self-reported questionnaire. Prevailing cases of T2DM were excluded. DC: Intervention status well defined. Social support: short questionnaire with two items without validation. Bias due to missing data Critical Complete data for 46% of baseline participants were included in the analyses, without a comparison between included and missing participants. Bias in measurement of outcomes
Moderate
Diagnoses were obtained by self-reported questionnaire and supplemented with information on diabetes from hospital admissions.
Type of bias Classification Reason -explanation
Bias due to confounding Moderate Adjusted by age, gender, marital status, occupational grade and follow-up duration. Bias in selection of participants into the study
Serious
Participation at baseline was 73% without comparison between participants and nonparticipants. Social support: short questionnaire with two items, no validation. Bias due to missing data Serious Complete data for 77% of baseline participants were included in the analyses, without a comparison between included and missing participants. Bias in measurement of outcomes
Moderate
The methods of outcome assessment were comparable across intervention groups, but some of the cases were ascertained objectively by clinical evaluation, while some were ascertained by self-reported questionnaire.
Type of bias Classification Reason -explanation
Bias due to confounding Critical Adjusted by education level, race, gender, occupational category, marital status, insurance coverage. No adjustment for age. Adjusted for post-intervention variables that could have been affected by the intervention (BMI, physical activity, alcohol use, hypertension, working hours). Bias in selection of participants into the study
Serious
Participation at baseline was 74% with a comparison between participants and nonparticipants. Selection into the study may have been related to intervention and outcome.
Bias in classification of interventions Serious
Shorter version without information on the validity of the modified JCQ questionnaire, which was a combination of Karasek and Quinn models. Bias due to missing data Critical Complete data of 19% or 50% of baseline participants were included in the analyses, reasons for exclusion unclear. Only 56 participants with missing data were analyzed: "Participants with missing data on the independent variables were excluded from the final multivariate survival analyses. (n = 56, 3.9%). These participants were more likely to be working in high strain jobs at baseline, older, and women." Bias in measurement of outcomes Serious All of the diagnoses were obtained by self-reported questionnaire.
Type of bias Classification Reason -explanation
Bias due to confounding Moderate Adjusted by age, sex, race/ethnicity, education level, and marital status. Bias in selection of participants into the study
Serious
Participation at baseline was 74% without comparison between participants and nonparticipants. Selection into the study may have been related to intervention and outcome.
Serious
Short version of ERI without information on validity.
Bias due to missing data Critical Complete data of between 24%-59% of baseline participants were included in the analyses, reasons for exclusion is unclear; no comparison between included and missing participants. Bias in measurement of outcomes
Serious
All the diagnoses were obtained by self-reported questionnaire. Participation at baseline was 61% and 59% for COPSOQ-I and COPSOQ-II, respectively, without comparison between participants and non-participants.
Serious
Short version of job demands (3 items) with substantial agreement with complete version.
Bias due to missing data COPSOQ-I=Low,
COPSOQ-II=Moderate
Complete data for 95% of participants (COPSOQ-I) and 88% (COPSOQ-II), without a comparison between included and missing participants Bias in measurement of outcomes COPSOQ-I=Low, COPSOQ-II=Moderate COPSOQ I: Obtained objectively from registers (hospitalization registers).
COPSOQ II: The methods of outcome assessment were comparable across intervention groups, but some of the cases were ascertained objectively from registers, while some were ascertained by self-reported questionnaire. Participation at baseline was 75%, without a comparison between participants and nonparticipants Bias in classification of interventions
Serious
Short version (demands 3 items, control 5) with substantial agreement with complete version.
Bias due to missing data Low Complete data for 99% of baseline participants were included in the analyses. Bias in measurement of outcomes
Low
Obtained objectively by register (administrative data mortality and hospitalization registers). The methods of outcome assessment were comparable across intervention groups, but some of the cases were ascertained objectively from registers, while some were ascertained by selfreported questionnaire.
Overall Serious
Nyberg-Gazel Bias due to missing data Serious Complete data for 53% of baseline participants were included in the analyses without a comparison between included and missing participants Bias in measurement of outcomes
Serious
The methods of outcome assessment were comparable across intervention groups, but ascertained by self-reported questionnaire. Participation at baseline was 40%, without a comparison between participants and nonparticipants.
Bias in classification of interventions Low
The original version was complete and validated Bias due to missing data Serious Complete data for 62% of baseline participants were included in the analyses, without a comparison between included and missing participants Bias in measurement of outcomes
Moderate
The methods of outcome assessment were comparable across intervention groups, but some of the cases were ascertained objectively from administrative registers (hospital and reimbursement), while some were ascertained by self-reported questionnaire.
Overall Critical
Nyberg-IPAW Participation at baseline was 76%, without a comparison between participants and nonparticipants Bias in classification of interventions
Serious
Short version (demands 2 items) with substantial agreement with complete version Bias due to missing data Low Complete data for 96% of baseline participants were included in the analyses. Bias in measurement of outcomes
Moderate
The methods of outcome assessment were comparable across intervention groups, but some of the cases were ascertained objectively from administrative registers (hospital), while some were ascertained by self-reported questionnaire.
Overall Serious
Nyberg-PUMA
Type of bias Classification Reason -explanation
Bias due to confounding Moderate Adjusted by age, sex, SES socioeconomic status (occupational title, register based), categorized in low intermediate, high or other) Bias in selection of participants into the study
Moderate
Participation at baseline was 80%, without a comparison between participants and nonparticipants Bias in classification of interventions
Serious
Short version (demands 3 items, control 5 items) with substantial agreement with complete version Bias due to missing data Low Complete data for 96% of baseline participants were included in the analyses. Bias in measurement of outcomes
Low
Obtained objectively from registers (hospitalization)
Overall Serious
Nyberg-SLOSH The methods of outcome assessment were comparable across intervention groups, but some of the cases were ascertained objectively from administrative registers (hospital), while some were ascertained by self-reported questionnaire.
Type of bias Classification Reason -explanation
Bias due to confounding Moderate Adjusted by age, sex, SES socioeconomic status (occupational title, register based), categorized in low intermediate, high or other) Bias in selection of participants into the study
Serious
Participation at baseline was 76%, without a comparison between participants and nonparticipants Bias in classification of interventions
Serious
Short version (demands 2 items, control 5 items) with substantial agreement with complete version. Bias due to missing data Low Complete data for 98% of baseline participants were included in the analyses. Bias in measurement of outcomes
Low
Obtained objectively by register (administrative data reimbursement and hospitalization).
Overall Serious
Nyberg-Whitehall II Complete data for 81% of baseline participants were included in the analyses, without a comparison between included and missing participants. Bias in measurement of outcomes
Moderate
The methods of outcome assessment were comparable across intervention groups, but some of the cases were ascertained objectively by clinical evaluation, while some were ascertained by self-reported questionnaire.
Type of bias Classification Reason -explanation
Bias due to confounding Moderate Adjusted by age, sex, SES socioeconomic status (occupational title, register based, categorized as low, intermediate, high or other) Bias in selection of participants into the study
Moderate
Participation at baseline was 82% together according to Alfredsson et al. (2002). Without comparison between participants and non-participants.
Bias in classification of interventions Low
The original scales of job demand and job control from WOLF N was complete and validated Bias due to missing data Low Complete data for 98% of baseline participants were included in the analyses. Bias in measurement of outcomes
Moderate
The methods of outcome assessment were comparable across intervention groups, but some of the cases were ascertained objectively from administrative registers (hospital), while some were ascertained by self-reported questionnaire.
Moderate for both
Pan 2017
Type of bias Classification Reason -explanation
Bias due to confounding Moderate Adjusted for sex, age, education level, vital status and follow-up Bias in selection of participants into the study
Serious
Participation at baseline was 73% without comparison between participants and nonparticipants.
Serious
Job strain not measured individually but obtained through a job exposure matrix based on job titles.
Bias due to missing data Moderate Complete data for 88% of baseline participants were included in the analyses, without comparison between included and missing participants. Multiple imputation with similar results according to the authors. Bias in measurement of outcomes
Moderate
Some diagnoses were obtained objectively from clinical evaluation and register (administrative data, medical records in Stockholm) and some of the diagnoses were obtained from selfreported questionnaires.
Overall Serious
Smith 2012
Type of bias Classification Reason -explanation
Bias due to confounding Serious Stratified for sex, adjusted for age, education level, marital status, ethnicity, immigration status, urban or rural and also for post-intervention variables that could have been affected by the intervention (chronic diseases, activity limitation at work due to health problems). Bias in selection of participants into the study
Bias in classification of interventions Serious
Shorter versions of job demands (2 items), job control (5 items) and social support (3 items) questionnaires were validated with reasonable α. Bias due to missing data Moderate Complete data for 89.6% of baseline participants were included in the analyses, with a comparison between included and missing participants. All analyses were weighted to account for the probability of selection into the original sample and non-response Bias in measurement of outcomes
Low
Obtained objectively from administrative register: 1 hospitalization or 2 reimbursement requests in 2 years (published validation algorithm).
Overall Serious
Souza Santos 2020 Obtained objectively by clinical evaluation.
Overall Critical
Toker 2012
Type of bias Classification Reason -explanation
Bias due to confounding Serious Adjusted for age, sex, education, follow-up time, family history of type 2 diabetes and a postintervention variable that could have been affected by the intervention (BMI). Bias in selection of participants into the study
Moderate
Participation at baseline was 92% without comparison between participants and nonparticipants Bias in classification of interventions
Moderate
Intervention status was well defined: use of validated questionnaires, but without validation of the translation. Bias due to missing data Serious Complete data for 55% of baseline participants were included in the analyses, with information showing that the included and excluded are different. Bias in measurement of outcomes
Moderate
Some diagnoses were obtained objectively from register (administrative data), and some of the diagnoses were obtained from self-reported questionnaires.
Overall Serious
Yamaguchi 2018
Type of bias Classification Reason -explanation
Bias due to confounding Serious Adjusted for age, sex, site, family structure, marital status, occupational category (blue collar or white collar), work status and post-intervention variables that could have been affected by the intervention (components of metabolic syndrome). Bias in selection of participants into the study
Serious
Participation at baseline was 76% without comparison between participants and nonparticipants Bias in classification of interventions
Moderate
Possible reverse causality: prevalent cases only partly excluded (only if two or more criteria for metabolic syndrome were present). Japanese version of questionnaire with confirmed reliability and validity. Bias due to missing data Serious Complete data for 56% of baseline participants were included in the analyses without information on comparison between included and missing participants. Bias in measurement of outcomes
Critical
Outcome was assessed by a clinical test with a cut-off that is not accepted by the ADA diabetes definition: high fasting blood glucose:100 mg/dl.
Overall Critical
Supplemental Figure Legends
Suppl. Figure S1. Flow chart for the selection of the included studies.
Suppl. Figure S2. Effect of high demands on type 2 diabetes mellitus. This analysis considers demands, whether defined dichotomously or in tertiles (highest versus lowest). It was not possible to transform OR or HR into RR since the original studies did not give estimates for the incidence of diabetes in men and women separately; the original values were therefore used. Since the estimates by Kumari et al. (2004) and Heraclides et al. (2009) are from the same cohort, but based on different baselines, both are included in the meta-analysis. Due to this overlap, the width of the confidence intervals might be underestimated. SE: standard error. CI: confidence interval at 95%.
Suppl. Figure S3. Effect of low job control on type 2 diabetes mellitus. This analysis considers low job control, whether defined dichotomously or in tertiles (highest versus lowest). It was not possible to transform OR or HR into RR since the original studies did not give estimates for the incidence of diabetes in men and women separately; the original values were therefore used. Since the estimates by Kumari et al. (2004) and Heraclides et al. (2009) are from the same cohort, but based on different baselines, both are included in the meta-analysis. Due to this overlap, the width of the confidence intervals might be underestimated. SE: standard error. CI: confidence interval at 95%.
Suppl. Figure S4. Effect of low social support at work on type 2 diabetes mellitus. This analysis considers low social support at work, whether defined dichotomously or in tertiles (highest versus lowest). It was not possible to transform OR or HR into RR since the original studies did not give estimates for the incidence of diabetes in men and women separately; the original values were therefore used. Since the estimates by Kumari et al. (2004) and Heraclides et al. (2009) are from the same cohort, but based on different baselines, both are included in the meta-analysis. Due to this overlap, the width of the confidence intervals might be underestimated. SE: standard error. CI: confidence interval at 95%.
Suppl. Figure S5. Effect of job strain on type 2 diabetes mellitus irrespective of risk of bias. Job strain is included either defined as a dichotomous variable, or as a contrast between high strain and low strain quadrants, or as a continuous variable, or from the objective job strain matrix of Pan et al. (2017).
Suppl. Figure S7. Funnel plot for the effect of job strain on type 2 diabetes mellitus using pooled estimates as published. For each cohort represented in Suppl. Figure S5, the relative risk is plotted against its standard error. Vertical dashed line: overall relative risk estimate from Suppl. Figure S6.
Suppl. Figure S8. Effect of effort-reward imbalance (ERI) on type 2 diabetes mellitus using original measures of effect. The values used for each study are the hazard ratios or odds ratios as published, without transformation. SE: standard error. CI: confidence interval at 95%. | 2021-09-30T06:23:57.647Z | 2021-09-28T00:00:00.000 | {
"year": 2021,
"sha1": "7fa8e83d81bcb61a518860d56a0e1d060f476826",
"oa_license": "CCBY",
"oa_url": "https://www.sjweh.fi/download.php?abstract_id=3987&file_nro=1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "da835db8e2d0294388396a1f698e18a4414d0883",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4644384 | pes2o/s2orc | v3-fos-license | Thiol-reactivity of the fungicide maneb
Maneb (MB) is a manganese-containing ethylene bis-dithiocarbamate fungicide that is implicated as an environmental risk factor for Parkinson's disease, especially in combination with paraquat (PQ). Dithiocarbamates inhibit aldehyde dehydrogenases, but the relationship of this to the combined toxicity of MB + PQ is unclear because PQ is an oxidant and MB activates Nrf2 and increases cellular GSH without apparent oxidative stress. The present research investigated the direct reactivity of MB with protein thiols using recombinant thioredoxin-1 (Trx1) as a model protein. The results show that MB causes stoichiometric loss of protein thiols, reversibly dimerizes the protein and inhibits its enzymatic activity. MB reacted at similar rates with low-molecular weight, thiol-containing chemicals. Together, the data suggest that MB can potentiate neurotoxicity of multiple agents by disrupting protein thiol functions in a manner analogous to that caused by oxidative stress, but without GSH depletion.
Introduction
Maneb (MB), a manganese-containing dithiocarbamate fungicide, is used in agriculture to treat a variety of crop pathologies. Epidemiological studies associate repeated human exposure to agricultural chemicals with increased risk for Parkinson's disease (PD) [ 1 , 2 ] and recent reports indicate that occupational and / or residential exposure to MB significantly increases the risk of developing PD [ 3 -5 ]. Rodent models of pesticide-mediated PD show that combined effects of paraquat (PQ) and MB exposure leads to deficits in motor function, loss of tyrosine hydroxylase staining in the substantia nigra, and altered striatal dopamine metabolism [ 6 , 7 ]. Mechanistic studies showed that the increased toxicity of PQ + MB could be due, at least in part, to altered toxicokinetics of PQ in mice exposed to MB [ 8 ].
In a recent in vitro study, we obtained evidence that MB and PQ operate through divergent mechanisms of toxicity [ 9 ]. Results showed that PQ acted via a mechanism involving reactive oxygen species (ROS) while MB did not. MB did not increase ROS production or oxidize thiol-containing antioxidants (thioredoxin and peroxiredoxin), but did activate Nrf2. Hong et al. have reported a strong correlation between alkylation of critical Cys residues on Keap-1 and the potency of Nrf2 activation [ 10 ]. This report and our data lead to the speculation that MB could be a direct thiol-modifying agent causing Nrf2-Keap-1 dissociation, nuclear translocation and gene transcription. Currently, there is no explicit evidence demonstrating that MB modifies Cys residues, e.g. Keap-1. Therefore, the purpose of this study was to characterize the thiol binding activity of MB utilizing N-acetylcysteine (NAC) and thioredoxin-1 (Trx1) as model thiol containing agents.
Materials
Recombinant human thioredoxin-1 protein was obtained from Lab Frontier (Korea). All other reagents were obtained from Sigma.
Reactivity of MB with protein-bound thiols
To assess the ability of MB to bind thiols in proteins, we utilized Trx1 as a model protein. Briefly, Trx1, in its fully reduced state [ 12 ], was incubated with increasing concentrations of MB for 1 h at 37 • C. Following MB incubation, samples were desalted using a spin column (Pierce) and the free thiols were labeled using mPEG 2 -biotin (Pierce), a biotinylated N-ethylmaleimide compound. The samples were then separated via SDS-PAGE, and biotinylated protein was visualized by Western blotting with fluorescently labeled streptavidin. Reactivity of MB to primary amines was also assessed using sulfo-NHS-biotin (Pierce) and application of streptavidin blotting.
Trx1 activity assay
To understand the functional consequences of thiol modification by MB, Trx1 activity was assessed using the spectrophotometric insulin reduction assay described by Arner and Holmgren [ 13 ].
LC-MS analysis of intact protein
MB modification of Trx1 was assessed using intact protein LC-MS. Unmodified and MB-adducted Trx1 protein was analyzed using an Accela-LTQ Orbitrap-Velos mass spectrometer. A 10 μl injection was applied to a C18 column (5 μm, 100 × 2.1 mm) and samples were eluted using a formic acid / acetonitrile gradient. Electrospray ionization was used in the positive mode. Raw spectra were deconvoluted utilizing a procedure in the Xcaliber program (Thermo).
Results
In this report, we used N-acetylcysteine (NAC) and the thiolcontaining redox protein, human thioredoxin-1 (Trx1), to test whether MB is thiol-reactive and covalently modifies protein thiols. Results ( Fig. 1 A) show that free thiol concentration decreased as a function of MB concentration with a complete loss when MB:NAC reached 2:1. A kinetic assay showed that the apparent 2nd order rate constant was 5.03 M −1 s −1 (data not shown). We next examined the reactivity of MB with protein thiol using recombinant Trx1 as a model. This protein contains thiol residues that are solvent accessible, have a spectrum of reactivity and are essential for its biological function [ 12 ]. Fig. 1 B shows a stoichiometric loss of protein thiols due to MB treatment. Trx1 contains 5 Cys residues and the biotin signal was lost when incubated with MB:Trx1 at a molar ratio of 5:1, indicating complete modification of thiols (either adduction or oxidation, see below). Reactivity of MB to primary amines was also investigated ( Fig. 1 C) using a procedure that is similar to visualizing protein thiols. Following MB incubation, the free amines were labeled and visualized using sulfo-NHS-biotin as demonstrated in Fig. 1 B. These results show that MB does not react with amines under the conditions of the assay, preferentially modifying only protein thiols. The kinetics of the reaction was investigated with 5:1 concentration ratio using the mPEG 2 -biotin method ( Fig. 1 D). In agreement with the NAC reactivity data, results from this experiment show that the reaction between MB and Trx1 is relatively slow, approaching complete modification of all Trx1 thiols at 60 min.
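To give a feel for how slow a rate constant of about 5 M−1 s−1 is, the following back-of-the-envelope calculation (not from the paper) uses the standard second-order half-life relation t1/2 = 1/(k[A]0) for equal initial reactant concentrations, with an assumed, purely illustrative concentration of 0.1 mM.

```python
# Illustrative second-order half-life estimate; the 0.1 mM concentration is an assumption, not a value from the study.
k = 5.03          # apparent rate constant reported above, M^-1 s^-1
conc = 1e-4       # assumed initial concentration of 0.1 mM
half_life_s = 1 / (k * conc)
print(f"t1/2 = {half_life_s:.0f} s (~{half_life_s / 60:.0f} min) at 0.1 mM")
```

Under that assumption the half-life is roughly half an hour, which is the same order of magnitude as the 60 min time course described above.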
The activity of Trx1 was assessed to determine functional consequences of MB adduction. Trx1 was incubated for 1 h with MB (5:1, as above) and then desalted to remove any unreacted MB prior to the activity assay. Data show that MB modification of Trx1 slowed the Trx1-dependent oxidation of NADPH in the presence of Trx reductase by 43% ( Fig. 2 A and B). In contrast to these results, Trx1-catalyzed insulin reduction by DTT showed no effect on activity due to MB modification (data not shown). The incomplete inhibition of activity despite evidence for nearly complete modification of thiols indicated that MB-dependent thiol modification is likely to be reversible.
To examine reversibility, MB-modified Trx1 was incubated with 1 mM DTT, and thiol content was examined with mPEG 2 -biotinylation and Western blotting as above. Results showed that the band corresponding to the unmodified thiol form of Trx1 was restored ( Fig. 2 C). Titration with NAC (increasing the ratio of NAC:MB from 0 to 13.3) showed that greater than a 5-fold excess of NAC was required to restore all of the thiols ( Fig. 2 D). The results, therefore, show that modification of Trx1 by MB is completely reversible by treatment with thiols.
X-Ray crystallography showed that oxidized Trx1 is crystalized as a dimer formed by a disulfide between C73 residues [ 14 ]. We investigated the possibility that MB treatment caused formation of a Trx1 dimer by treating Trx1 with increasing concentrations of MB, separation via SDS-PAGE under non-reducing conditions and visualization with Coomassie blue ( Fig. 3 A). The data demonstrate that MB results in appearance of a band at 25 kD, corresponding to twice the molecular weight of Trx1, with as little as 1 M equivalent of MB. This result indicates that only one Cys residue is involved in the dimerization.
Due to the observation of MB-mediated protein cross-linking, we conducted LC-MS studies of intact, MB-treated Trx1 using ESI in the positive ionization mode and detection with an LTQ-Orbitrap-Velos (Thermo). These experiments resulted in the detection of Trx1 and a single modified product ( Fig. 3 B and C). In the MB-modified sample, we observed a large decrease in intensity of the unmodified Trx1 peak ( m / z 11,602) and a new peak ( m / z 11,810) caused by the binding of ethylene bis-dithiocarbamate (EBDTC) to Trx1, resulting in a mass shift of 210 mass units from the unmodified peak. This result suggests that MB does not cause a simple oxidation of thiols to disulfides but rather participates in more complex reaction processes.
Discussion
In this study we demonstrated the thiol reactivity of the dithiocarbamate fungicide MB. The data show that MB is a thiol-reactive substance that causes dimerization of Trx1 in vitro but do not discriminate between oxidation and more complex cross-linking that could occur through the two dithiocarbamate moieties in MB. Modification resulted in a partial inhibition of Trx1 activity, indicating that MB is either modifying active site thiol residues or peripheral residues, such as C73, that result in decreased activity. It should be emphasized again that Trx1 was employed as a model protein for these studies. Trx1 possesses thiol residues that are solvent accessible, have a spectrum of reactivity and are essential for its biological function [ 12 ].
Our data also demonstrate that the reaction between MB and thiols is relatively slow, with a rate constant (5.03 M −1 s −1 ) similar to that for reaction of H 2 O 2 and Trx1 (1.05 M −1 s −1 ) and thiol-disulfide exchange (20 M −1 s −1 ) [ 15 , 16 ]. Previous data show that NAC pretreatment can protect against MB-induced injury in Chinese hamster V79 cells, indicating that increased cellular thiols can protect or even prevent toxicity associated with an acute MB exposure [ 17 ]. With this in mind, the present data indicate that relatively slow, reversible binding of MB with protein thiols could trap MB in cells. Because protein thiols involved in redox signaling are more reactive than other cellular thiols, such retention could allow transfer to more reactive thiols and cause prolonged disruption of redox circuits that function in redox signaling and control [ 18 , 19 ].
Consequently, the data suggest that MB and other similar dithiocarbamates, including mancozeb, disulfiram and zineb [ 20 -22 ], may cause toxicity by disruption of redox circuits that function in cellular homeostasis. Perhaps more importantly, such interaction with critical redox signaling systems could potentiate neurotoxicity by interfering with essential cell stress response mechanisms. Although speculative, such a mechanism could explain the combined toxicity of MB and PQ in PD. | 2016-05-12T22:15:10.714Z | 2014-04-18T00:00:00.000 | {
"year": 2014,
"sha1": "435aa85decec4a53f219d51bef2d5f96a6814bc4",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.1016/j.redox.2014.04.007",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "06e52b824d56e0e49f9bc070ac7bcfd964c3931d",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
51728720 | pes2o/s2orc | v3-fos-license | The Clinical Cases of Geleophysic Dysplasia: One Gene, Different Phenotypes
Background Geleophysic dysplasia is a rare multisystem disorder that principally affects the bones, joints, heart, and skin. This condition is inherited either in an autosomal dominant pattern due to FBN1 mutations or in an autosomal recessive pattern due to ADAMTSL2 mutations. Two patients with unaffected parents from unrelated families presented to their endocrinologist with severe short stature, resistant to growth hormone treatment. Routine endocrine tests did not reveal an underlying etiology. Exome sequencing was performed in each family. Our two patients, harboring de novo heterozygous FBN1 mutations p.Tyr1696Asp and p.Cys1748Ser, had common clinical symptoms such as severe short stature, characteristic facial features, short hands and feet, and limitation of joint movement. However, one patient had severe cardiac involvement whereas the other patient had tracheal stenosis requiring tracheostomy placement. Conclusions Patients with severe dwarfism, skeletal anomalies, and other specific syndromic features (e.g., tracheal stenosis and cardiac valvulopathy) should undergo genetic testing to exclude acromelic dysplasia syndromes.
Mutations in FBN1 are also associated with a large spectrum of diseases including Marfan syndrome, Weill-Marchesani syndrome, Stiff skin syndrome, MASS syndrome, and Marfan lipodystrophy syndrome [8][9][10]. The mechanism by which changes in this gene contribute to both tall and short stature is still unclear [3].
Transmission of GD is variable and corresponds to an autosomal recessive model in the cases with ADAMTSL2 gene mutations and an autosomal dominant model in the cases with FBN1 and LTBP3 mutations [1,7]. The true prevalence of GD in the world is unknown; however, approximately 55 affected individuals had been reported by 2009 [1], giving a prevalence of <1/1 000 000. Symptoms of GD can vary widely from person to person and thus the diagnosis of this rare condition can be quite complicated. Many patients may remain un- or misdiagnosed.
In the present study, we report two patients from unrelated families who had highly variable clinical presentations of GD but their common feature was severe short stature with resistance to growth hormone (GH) treatment. Nevertheless, exome sequencing identified the presence of de novo heterozygous FBN1 variants in both patients. This study was approved by the Institutional Review Board at Cincinnati Children's Hospital Medical Center (Protocol #2014-5919). Written informed consent was obtained from both patients' parents.
Case Presentation
Patient 1 (P1) is a Ukrainian girl who was born as the second child of nonconsanguineous white parents following an uneventful pregnancy and spontaneous term delivery. She had been born at a gestational age of 39 weeks with a normal birth weight (3100 g, 15th-50th percentile) and birth length (50 cm, 50th-85th percentile). Progressive postnatal growth delay developed beginning at 1 year of age. Familial stature was well within the normal range, with a maternal height of 154 cm (-1.3 SD), paternal height of 175 cm (+0.04 SD), and a brother's final height of 169 cm (-0.8 SD) (Figure 1(a)). The patient was first examined by an endocrinologist at the age of 2.5 years. Her height was 72 cm (-4.5 SD) and weight was 7.9 kg (<5th percentile). Physical examination revealed additional dysmorphic features and other physical abnormalities including a broad nasal bridge, a bulbous nose, elongation of the eyelashes, contractures of the elbow joints and wrists, and small hands and fingers (Figure 1(b)). Routine biochemical analysis demonstrated normal hematology, chemistry, and thyroid hormone function. IGF-1 was 51.7 ng/ml (between the 10th and 50th percentile).
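The height standard deviation scores quoted throughout this report follow the usual SDS definition; the sketch below illustrates the calculation with hypothetical reference values (the mean and SD shown are not taken from any specific growth standard).

```python
def height_sds(height_cm: float, ref_mean_cm: float, ref_sd_cm: float) -> float:
    """Height standard deviation score: number of reference SDs the measured
    height lies above (+) or below (-) the age- and sex-specific mean."""
    return (height_cm - ref_mean_cm) / ref_sd_cm

# Hypothetical reference values for a 2.5-year-old girl (illustration only).
print(round(height_sds(72.0, ref_mean_cm=90.0, ref_sd_cm=4.0), 1))  # -4.5
```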
Further investigation was done at the age of 6 years and revealed a normal clonidine-stimulated GH peak of 25 ng/ml. Baseline IGF-1 was 74.4 ng/ml (between the 10th and 50th percentile). The IGF-1 generation test showed no response to GH stimulation (after three days of GH administration at a dose of 0.03 mg/kg/day, IGF-1 remained low at 55.2 ng/ml). Karyotype was that of a normal female (46,XX). Her bone age at her calendar age of 6 years was 2 years (as assessed by the Greulich and Pyle method), and cone-shaped epiphyses and Madelung deformity were noted (Figures 1(b) and 1(c)). Skeletal X-ray of the lower extremities showed lateral positioning of the femoral heads and varus deformity of the knee joints (Figure 1(d)). A brain MRI revealed a hypoplastic pituitary and sella turcica. She was prescribed empiric treatment with recombinant GH (rGH) at a dose of 0.03 mg/kg/day, which was ineffective. Over a period of 4 months, the child grew by only 0.8 cm. Thereafter, rGH therapy was discontinued. Further observation showed progressive growth retardation (Figures 1(e) and 1(f)). At 8 years of age, her height was 80.5 cm (-8 SD) and weight was 11 kg (<5th percentile). At 7 years of age, echocardiography showed minimal aortic, mitral, and pulmonary stenosis. However, two years later, a repeat echocardiogram showed worsening cardiac disease with progression of the mild aortic, mitral, and pulmonary stenosis and new findings of pulmonary hypertension and left ventricular hypertrophy. The patient also suffers from carpal tunnel syndrome, which was confirmed using electromyography.
Comprehensive genetic testing was done at 8 years of age. A chromosomal microarray (Illumina CytoSNP-850v1.1) was performed and excluded any pathogenic copy number variants. Subsequently, whole exome sequencing was performed at Cincinnati Children's Hospital on the patient and her parents using previously described methods [11]. As neither parent was affected, both recessive and de novo dominant inheritance models were investigated. The patient was found to have a de novo heterozygous mutation in FBN1 gene p.Tyr1696Asp (Figure 1(g)). This variant has not previously been reported in the UMD-FBN1 mutation database (www.umd.be/FBN1/) and is not present in a large healthy control database (gnomad.broadinstitute.org), but it is predicted to be damaging by UMD-predictor, CADD and MutationTaster2 [12][13][14].
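The de novo dominant model used in this analysis can be illustrated with a minimal trio-filtering sketch; the variant records, field names, and frequency threshold below are hypothetical and do not reproduce the sequencing pipeline cited in [11].

```python
# Simplified variant calls for a parent-offspring trio; genotypes are "0/0", "0/1", "1/1".
variants = [
    {"gene": "FBN1", "hgvs_p": "p.Tyr1696Asp",
     "child": "0/1", "mother": "0/0", "father": "0/0", "gnomad_af": 0.0},
    {"gene": "TTN", "hgvs_p": "p.Arg100Gln",
     "child": "0/1", "mother": "0/1", "father": "0/0", "gnomad_af": 0.01},
]

def is_de_novo(v, max_population_af=1e-4):
    """Heterozygous in the child, absent in both parents, and rare in controls."""
    return (v["child"] == "0/1"
            and v["mother"] == "0/0"
            and v["father"] == "0/0"
            and v["gnomad_af"] <= max_population_af)

print([v["hgvs_p"] for v in variants if is_de_novo(v)])  # ['p.Tyr1696Asp']
```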
After genetic testing, a trial of high-dose rGH treatment at a dose of 0.06 mg/kg/day was started because of the extremely severe dwarfism. This resulted in incremental growth of 1.5 cm per 6 months of treatment (Figure 1(e)), with IGF-1 rising to 318.8 ng/ml (between the 50th and 90th percentiles) after 3 months of treatment and measuring 151 ng/ml (between the 10th and 50th percentiles) after 6 months of treatment.
Patient 2 (P2) is a Ukrainian boy who was born as the first child of nonconsanguineous white parents via spontaneous term delivery. He was born at a gestational age of 40 weeks with a birth weight of 4050 g (85th-97th percentile) and birth length of 55 cm (>97th percentile). Familial stature was quite tall, with a maternal height of 180 cm (+2.9 SD) and paternal height of 195 cm (+3 SD) (Figure 2(a)). Progressive postnatal growth retardation developed beginning at 1 year of age. Prior to 2 years of age, the child had frequent respiratory infections. At the age of 2 years, a tracheostomy was placed due to acute edema of the throat, asphyxia, and pneumonia. In Ukraine, all further attempts to remove the tracheostomy with dilation and resection of pathologic tissue were unsuccessful.
The patient had many dysmorphic features and other physical abnormalities including an indented nasal bridge, elongation of the eyelashes, prominent upper jaw, peripheral edema, thick lips, tapered fingers, stiff interphalangeal joints, and short hands and fingers. At 2 years of age, comprehensive screening for metabolic disorders was completed, including mannosidosis, fucosidosis, metachromatic leukodystrophy, Sandhoff disease, lysosomal storage diseases, GM1 gangliosidosis, Krabbe disease, and mucopolysaccharidosis (types 1-3 and 6). However, all metabolic testing was within normal limits. Thereafter, the child was referred to two European clinics in Germany and Denmark with the aim of removing the tracheostomy, but in both cases video laryngoscopy showed large adenoid-like tissue within the rhinopharynx and oropharynx which could cause persistent obstruction. Therefore, it was recommended that the tracheostomy remain in place permanently. Due to the suspicion of mucopolysaccharidosis or mucolipidosis, a targeted next-generation sequencing panel of 99 genes was performed; however, no variants were identified. The patient was first examined by an endocrinologist at 4 years of age (Figure 2(b)). Blood samples showed normal hematology, chemistry, and thyroid hormone function. IGF-1 was 41.8 ng/ml (<5th percentile); however, the clonidine-stimulated GH peak was normal (12.8 ng/ml). The IGF-1 generation test showed a good response to stimulation (after three days of GH administration at a dose of 0.03 mg/kg/day, IGF-1 was 205.5 ng/ml). However, treatment with rGH at a dose of 0.03 mg/kg/day for 5 months did not result in catch-up growth (0 cm per 5 months of treatment) (Figure 2(c)). He had a normal male karyotype (46,XY). His bone age was 2 years (assessed by the Greulich and Pyle method) with noticeable cone-shaped epiphyses (Figure 2(d)). Echocardiography was normal.
Similar to the first patient, comprehensive genetic testing was done at 6 years of age. Chromosomal microarray was negative but whole exome sequencing identified a de novo heterozygous mutation in FBN1 p.Cys1748Ser (Figure 2(e)). Similar to the variant found in the first patient, this mutation was not previously described in the UMD-FBN1 database and was not present in the gnomAD healthy control database, but it is predicted as pathogenic by UMD-predictor, CADD and MutationTaster2 [12][13][14].
After genetic testing, a trial with high-dose rGH treatment at a dose of 0.06 mg/kg/day was started and showed no improvement in growth (0 cm per 3 months of treatment, Figure 2(c)).
Treatment with anti-inflammatory drugs for his airway issues was started including azithromycin and inhaled or nebulized mometasone.
Discussion
The Ukrainian Pediatric Growth Hormone Registry was created in 2012 to include children diagnosed with short stature identified by regional Ukrainian pediatric endocrinologists. The number of cases with hypopituitarism in Ukraine in 2016 was 962 (a prevalence of 1 in 7914 for the pediatric population in 2016), and there were an additional 315 cases of Turner syndrome (1:24171). The registry also includes additional patients with syndromic dwarfism who are not receiving GH treatment. However, due to the lack of access to genetic diagnostics in Ukraine, a case of GD has not been previously described.
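As a simple consistency check of these registry figures, the snippet below back-calculates the pediatric population implied by each reported prevalence; the arithmetic and rounding are ours and are shown only for illustration.

```python
# Reported 2016 registry counts and prevalences.
hypopituitarism_cases, hypopituitarism_prevalence = 962, 1 / 7914
turner_cases, turner_prevalence = 315, 1 / 24171

# Implied pediatric population behind each estimate (both come out near 7.6 million).
print(round(hypopituitarism_cases / hypopituitarism_prevalence))  # 7613268
print(round(turner_cases / turner_prevalence))                    # 7613865
```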
The diagnosis of GD can be based on clinical findings including proportionate short stature, very short hands and feet, progressive joint limitation and contractures, distinctive facial features (round, full face; small nose with anteverted nostrils; broad nasal bridge; thin upper lip with flat philtrum), thickened skin, and progressive cardiac valvular disease [1][2][3][4]. Additional features include recurrent respiratory and middle-ear infections, tracheal stenosis, and hepatomegaly [1,6].
In this report, we describe two patients with molecular defects in FBN1 leading to severe short stature which was resistant to GH treatment. Both patients remained undiagnosed for many years until exome sequencing confirmed the etiology of their clinical presentations. Both children had facial dysmorphisms (broad nasal bridge, bulbous nose, and elongation of the eyelashes) as well as small hands and fingers, progressive joint limitation, and contractures. However, one of the patients (P1) had progressive cardiac pathology, which was found at the age of 7 years, and the second patient had severe pathology of the respiratory system with tracheal stenosis requiring tracheostomy placement at 2 years of age. In patients with GD, rapid progression of cardiac pathology has also been described [15,16], leading to the necessity of timely and adequate cardiac health supervision, including the use of valve replacement when indicated. Severe respiratory problems have been described as a leading cause of early death in patients with GD [6,7] and often require tracheostomy placement.
Radiographic findings usually include delayed bone age, broad proximal phalanges, cone-shaped epiphyses, ovoid vertebral bodies, shortened tubular bones of the hands and feet, and small capital femoral epiphyses [1,4,17]. Skeletal survey in both our patients showed cone-shaped epiphyses and shortening of tubular bones. One of the patients (P1) had dysplasia of the hip and Madelung deformity with carpal tunnel syndrome requiring a carpal tunnel release. Neither patient had hepatomegaly which was confirmed by abdominal ultrasound.
Genetic testing confirmed two newly identified mutations in FBN1 (p.Tyr1696Asp and p.Cys1748Ser) which are located in the TB5 domain. Other mutations in this FBN1 domain are linked with Marfan syndrome and WMS. Sanger sequencing confirmed the presence of the de novo variants in each of the patients. The variants were absent in the parents. Although the two reported mutations are new, a few mutations have been described in the same amino acids: p.Tyr1696Cys (X2) [6], p.Cys1748Phe [18], and p.Cys1748Arg [19]. In comparing the phenotypes of patients with the similar variants, it is noteworthy that a patient with p.Tyr1696Cys also required tracheostomy placement at 3 years of age. That patient additionally had mitral stenosis and insufficiency and died at 9 years of age [6]. The patients with p.Cys1748Phe and p.Cys1748Arg had WMS phenotype with ectopia lentis and in the case of the Cys1748Arg mutation an acute thoracic aortic dissection at 38 years of age [18,19].
There are insufficient data regarding GH treatment in patients with various skeletal dysplasias [20], especially in those with acromelic dysplasia syndromes. However, there have been a variety of reports with conflicting results [21,22], presumably due to the underlying genetic heterogeneity and different treatment regimens. A trial of recombinant IGF-1 therapy (mecasermin) in a single patient has also been described previously [4]. As GD patients are characterized by severe short stature (more than 3 SD below the mean) [6], long-term follow-up is needed to study the advisability of GH treatment regimens in an effort to improve their final height.
Taking into account the severity of dwarfism in our patients (more than 6 and 8 SD below the mean) with resistance to rGH treatment and severe multiorgan involvement (rapid progression of cardiac pathology and tracheostomy placement), there is a need for further research into alternative treatment modalities. Annual multidisciplinary examination is recommended to provide a comprehensive evaluation of all involved organ systems in order to assess for known comorbidities of GD. | 2018-08-14T10:54:24.955Z | 2018-07-03T00:00:00.000 | {
"year": 2018,
"sha1": "2afc8395a7f7ed3c6a0ef6771a8a30fb141720c7",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/crie/2018/8212417.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8bb01ae77f91fedf61edaef72e8a8ddac6bb06ad",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265330472 | pes2o/s2orc | v3-fos-license | PSMA-Expression Is Highly Associated with Histological Subtypes of Renal Cell Carcinoma: Potential Implications for Theranostic Approaches
In renal cell carcinoma (RCC), accurate imaging methods are required for treatment planning and response assessment to therapy. In addition, there is an urgent need for new therapeutic options, especially in metastatic RCC. One way to combine diagnostics and therapy in a so-called theranostic approach is the use of radioligands directed against surface antigens. For instance, radioligands against prostate-specific membrane antigen (PSMA) have already been successfully used for diagnosis and radionuclide therapy of metastatic prostate cancer. Recent studies have demonstrated that PSMA is expressed not only in prostate cancer but also in the neovasculature of several solid tumors, which has raised hopes to use PSMA-guided theranostic approaches in other tumor entities, too. However, data on PSMA expression in different histopathological subtypes of RCC are sparse. Because a better understanding of PSMA expression in RCC is critical to assess which patients would benefit most from theranostic approaches using PSMA-targeted ligands, we investigated the expression pattern of PSMA in different subtypes of RCC on protein level. Immunohistochemical staining for PSMA was performed on formalin-fixed, paraffin-embedded archival material of major different histological subtypes of RCC (clear cell RCC (ccRCC)), papillary RCC (pRCC) and chromophobe RCC (cpRCC). The extent and intensity of PSMA staining were scored semi-quantitatively and correlated with the histological RCC subtypes. Group comparisons were calculated with the Kruskal–Wallis test. In all cases, immunoreactivity was detected only in the tumor-associated vessels and not in tumor cells. Staining intensity was the strongest in ccRCC, followed by cpRCC and pRCC. ccRCC showed the most diffuse staining pattern, followed by cpRCC and pRCC. Our results provide a rationale for PSMA-targeted theranostic approaches in ccRCC and cpRCC.
Introduction
Renal Cell Carcinoma (RCC) is the 9th most common cancer in the male population and the 14th most common form in the female population worldwide [1].The majority of RCC cases (>60%) are detected coincidentally via ultrasound (US) or computer tomography (CT) scans performed for other indications [2].In localized disease, radical nephrectomy is the first-line therapeutic approach, but even despite complete removal of the primary tumor, 25% to 40% of RCC patients still develop distant metastases during follow-up.In addition, 20% to 30% of RCC patients may already suffer from metastatic disease (mRCC) at the time of initial diagnosis [3,4].Recently, the treatment of patients with mRCC has been revolutionized by the introduction of tyrosine kinase inhibitors (TKI) and immunotherapies, which show promising results in significantly prolonging the survival of mRCC patients [5].Nevertheless, cancer cells may develop resistance mechanisms that could weaken or abrogate the therapeutic anti-cancer effects of TKIs and immune checkpoint inhibitors (ICIs) and ultimately lead to disease progression [6].To adapt treatment strategies to such resistance mechanisms, it is important to detect disease progression as early as possible.
In contrast to partial nephrectomy, which is the preferred surgical procedure for localized renal cell carcinoma (RCC), Cytoreductive nephrectomy (CN) is commonly performed in patients with metastatic RCC (mRCC).CN aims to remove the primary tumor in the kidney and can potentially be curative if all tumor deposits are successfully removed.However, for the majority of patients with metastatic disease, CN is considered a palliative procedure, and systemic treatments, such as targeted therapies and immunotherapies, are still necessary.CN is performed with the goal of reducing tumor burden, alleviating symptoms, and potentially enhancing the effectiveness of subsequent systemic treatments.Previous studies have demonstrated improved survival benefits when CN is combined with interferon-based immunotherapy.However, recent research has challenged the role of CN in the era of targeted therapies.The emergence of novel treatments, such as tyrosine kinase inhibitors (TKIs) and immune checkpoint inhibitors (ICIs), has led to a reevaluation of the optimal use and timing of CN in the management of mRCC.The effectiveness of targeted therapies in controlling tumor growth and improving patient outcomes has raised questions about the necessity of CN in all cases of metastatic disease.Ongoing research aims to better understand the specific patient characteristics and disease factors that may influence the benefits of CN in the context of evolving treatment strategies [2].
Currently, the European Association of Urology (EAU) guideline recommends the use of contrast-enhanced CT or magnetic resonance imaging (MRI) to detect recurrent or metastatic sites in RCC patients during follow-up [2].However, several studies have already indicated the limitations of those conventional imaging modalities in differentiating malignant from benign renal neoplasms, and consequently, the sensitivity for detecting lymph nodes or distant metastases of RCC is limited [7][8][9].Thus, there is a need for new imaging techniques that could aid in the diagnosis and staging of RCC and facilitate the assessment of response to systemic therapy in mRCC [10].
In general, 18 F-fluoro-2-deoxy-2-D-glucose ( 18 F-FDG) for Positron Emission Tomography (PET) ( 18 F-FDG PET) is widely used for detecting and monitoring tumors.However, unlike in most other malignancies, the application of 18 F-FDG PET and hybrid PET imaging in mRCC is of limited diagnostic yield due to the low 18 F-FDG-avidity of metastatic RCC lesions.Therefore, 18 F-FDG PET is not recommended by practice guidelines for mRCC imaging [8].
An alternative target for radiopharmaceutical-based imaging is prostate-specific membrane antigen (PSMA), a type II integral membrane glycoprotein originally discovered on prostatic epithelium [11].To date, PSMA PET/CT is a well-established imaging modality in patients with metastatic prostate cancer [12].However, PSMA expression is also reported in the tumor-associated neovasculature of a variety of tumor entities, including hepatocellular carcinoma, breast cancer, and RCC [13,14].Pilot studies with small case numbers have already investigated the clinical value of 68 Ga-PSMA-11 PET/CT or 18 F-PSMA-1007 PET/CT in RCC patients and suggested a potential benefit of PSMA PET/CT for staging and follow-up [15][16][17][18][19].In addition, analogous to prostate cancer, upregulation of PSMA in the neovasculature of RCC may represent a target for potential new and innovative radiopharmaceutical treatment strategies, especially when all other therapeutic options have been exhausted [15].
Histopathologically, RCC is divided into several subtypes that differ at histomorphological and molecular levels as well as in terms of their clinical behavior.Three major histological subtypes, which together account for more than 95% of cases, are defined: Clear cell RCC (ccRCC, accounting for 80-90% of cases), papillary RCC (pRCC, 10-15%) and chromophobe RCC (cpRCC, 4-5%) [20].A better understanding of PSMA expression in different subtypes of RCC and adjacent healthy renal tissue is crucial to assess which patients would benefit most from theranostic approaches with PSMA-targeted ligands.However, to date, only a few studies have investigated PSMA expression in RCC at the protein level [21][22][23][24].Here, we aimed to investigate how PSMA expression is distributed in the major histological subtypes of RCC to identify those that might benefit most from a theranostic approach with PSMA ligands.
Tissue Samples
This retrospective study was approved by the medical ethics committee of Ludwig Maximilian University (LMU), Munich.Formalin-fixed and paraffin-embedded material from 65 patients who underwent surgery for RCC at the Department of Urology (LMU Munich) between 2011 and 2019 was collected.Tissue samples were first analyzed on hematoxylin and eosin (H&E) stained slides and classified according to the WHO Classification of Tumours, Fifth Edition.A representative section of each case was selected for analysis, on which both tumor tissue and normal kidney tissue were encountered.Tumor-associated vessels were identified on H&E and Elastica van Gieson (EvG) stain.
Immunohistochemical Analyses
Immunohistochemical staining for PSMA was performed on 5 µm thick formalin-fixed and paraffin-embedded (FFPE) tissue sections.Sections were pretreated with Ventana Cell Conditioner 1 Immunostainer (Ventana Medical Systems, Oro Valley, AZ, USA) for 1 h and then incubated with mouse PSMA monoclonal antibody 3E6 (1:50, Agilent Technologies, Santa Clara, CA, USA) for 32 min.Staining was performed using a Ventana BenchMark Ultra automated stainer and ultraView DAB kit (Ventana Medical Systems).Slides were counterstained with hematoxylin.Positive controls were used for quality assurance in each staining run.
The extent and intensity of PSMA-staining were evaluated semi-quantitatively as previously described [21] and correlated with histological RCC subtypes.For this purpose, the staining pattern was assessed in tumor cells, tumor-associated vessels, and adjacent normal kidney tissue.The intensity of PSMA staining was independently graded by two different observers (SL, VNB) on a scale of 1 to 3 (1 = no positive reaction or weak intensity, 2 = moderate intensity, 3 = strong intensity).For tumors with a focal staining pattern, the region with the highest staining intensity was scored.If the results differed, the scoring value was determined by discussion of all investigators.For statistical analyses, a staining intensity of 2 or 3 was considered strong, and a staining intensity of 1 was considered weak.
The distribution pattern of PSMA expression was evaluated by indicating the percentage of immunoreactive vascular structures.The pattern was considered diffuse if more than 50% of tumor-associated vessels were PSMA positive and focal if less than 50% of tumor-associated vessels were PSMA positive.Figure 1 shows representative pictures of different PSMA staining patterns.
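A minimal sketch of how this semi-quantitative read-out could be encoded is shown below; the data structure, the median-based consensus rule, and the example values are illustrative assumptions and are not part of the study protocol.

```python
from statistics import median

def classify_case(intensity_scores, pct_positive_vessels):
    """Summarize one case from per-observer intensity scores (1-3) and the
    percentage of tumor-associated vessels staining positive for PSMA."""
    consensus = median(intensity_scores)  # stand-in for the discussion-based consensus
    strength = "strong" if consensus >= 2 else "weak"
    pattern = "diffuse" if pct_positive_vessels > 50 else "focal"
    return strength, pattern

# Hypothetical cases, scored by two observers each.
print(classify_case([3, 3], 80))  # ('strong', 'diffuse')
print(classify_case([1, 2], 30))  # ('weak', 'focal'); median 1.5 is below the cut-off of 2
```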
Statistical Analyses
Statistical analyses were performed with IBM SPSS ® Statistics (version 25; SPSS, Chicago, IL, USA) and GraphPad Prism (version 9.5.0GraphPad Software, La Jolla, CA, USA).Descriptive statistics are displayed as mean ± standard deviation (SD).A post hoc analysis from Kruskal-Wallis testing was applied to assess differences between tumor sites.Statistical significance was defined as a two-sided p-value < 0.05.
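For orientation, a Kruskal-Wallis comparison of intensity scores across subtypes might look like the sketch below; the score vectors are invented for illustration and do not reproduce the study data.

```python
from scipy.stats import kruskal

# Hypothetical per-case PSMA staining intensity scores (1-3) for three RCC subtypes.
ccRCC = [3, 3, 2, 3, 3]
pRCC = [1, 1, 2, 1, 1]
cpRCC = [2, 1, 3, 2, 1]

stat, p_value = kruskal(ccRCC, pRCC, cpRCC)
print(f"H = {stat:.2f}, p = {p_value:.4f}")
# A post hoc pairwise comparison (e.g., Dunn's test) would then locate which
# subtype pairs differ, analogous to the ccRCC vs. pRCC vs. cpRCC contrasts reported here.
```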
Immunohistochemical PSMA Expression in Non-Neoplastic Kidney Tissue
In non-neoplastic kidney tissue, PSMA was expressed in the epithelial cells of the tubulus system.Native renal vessels, glomeruli, and stroma showed no immunoreactivity (Figure 2).
Immunohistochemical PSMA Expression in Different Subtypes of RCC
In all cases of RCC, immunoreactivity for PSMA was detected only in the tumor-associated vessels and not in tumor cells. All (100%) ccRCC samples showed strong PSMA expression with a mean staining intensity of 2.67. In cpRCC, we found strong PSMA expression in 50% of samples, and the mean staining intensity over all samples tested was 1.75. In contrast, only 8% of pRCC showed strong PSMA expression. The mean staining intensity in pRCC was 1.08. Thus, PSMA staining intensity was strongest in ccRCC, followed by cpRCC and pRCC. Differences in PSMA intensity between ccRCC and pRCC (p < 0.0001), ccRCC and cpRCC (p < 0.01) as well as pRCC and cpRCC (p < 0.05) were statistically significant (Figure 3A).
The most diffuse staining pattern was observed in ccRCC, where 17/21 (81%) had a diffuse immunoreactivity for PSMA in more than 50% of tumor-associated vessels, followed by cpRCC (6/20; 30%) and pRCC (1/24, 4%).Differences in staining pattern were statistically significant between ccRCC and pRCC (p < 0.0001) and between ccRCC and cpRCC (p < 0.01).However, there was no statistical difference in PSMA diffusity between the entities of pRCC and cpRCC (Figure 3B).A summary of PSMA expression patterns in different subtypes of RCC is shown in Table 2.
Discussion
RCC is a serious malignant tumor disease that accounts for 2% of the global cancer diagnosis [1].In the metastatic stage, the prognosis is especially poor, and the 5-year survival rate is reported to be only 12% [1].Therapy of mRCC has been revolutionized by the development of TKIs and ICIs, to which many mRCC patients initially show good responses [5].Unfortunately, 70% of patients develop drug resistance in the further course of the disease [6].Sensitive, reliable, and accurate imaging modalities are needed to adapt therapy regimens to such molecular changes as quickly as possible in order to halt disease progression [10].In addition, there is an urgent need for new therapeutic options to treat patients with therapy-resistant mRCC [25].Analogous to prostate cancer, PSMA could act as a target for both diagnostic and therapeutic approaches.Here, we found that PSMA is expressed at different levels in the major subtypes of RCC.While native renal vessels generally did not show PSMA expression, there was a strong and diffuse PSMA expression in tumor-associated neovasculature of all ccRCC samples.In cpRCC, strong PSMA expression was observed in 50% of the cases, while in pRCC, only 8% showed strong PSMA expression.Differences in PSMA expression intensities between the different histological subtypes of RCC were statistically significant.As the immunohistochemical PSMA intensity is reported to correlate with uptake on PSMA-PET/CT, these results suggest that PSMA is suitable as a target for theranostic procedures in ccRCC and cpRCC [26].
PSMA overexpression was first detected in prostate carcinomas, and PSMA-PET/CT as a form of hybrid imaging is now widely used in patients with metastatic or recurrent prostate cancer, as it shows significantly improved accuracy over conventional imaging techniques [27].In addition, PSMA-directed radioligand therapies with lutetium-177 ( 177 Lu-PSMA-617) or actinium-225 ( 225 Ac-PSMA-617) are being tested in clinical trials as a therapeutic option in refractory metastatic prostate cancer [28][29][30].Despite its name, PSMA is now known to be expressed not only in prostate carcinomas but also in the tumor-associated neovasculature of several solid tumor entities [22][23][24]31,32].The first case series investigated the diagnostic value of 68 Ga-PSMA-PET/CT or 18 F-PSMA-PET/CT in RCC patients and showed promising results in ccRCC as PSMA-PET/CT could detect metastatic sites with higher sensitivity than conventional imaging methods (see exemplarily Figure 4) [8,18,[33][34][35].In non-clear cell RCC, however, only a small proportion of tumors showed uptake of PSMA-targeted radiotracer, and PSMA-PET/CT did not detect additional metastases compared to conventional imaging modalities [35].Nevertheless, the aforementioned clinical studies on PSMA-PET/CT in RCC only investigated small case numbers, and especially non-clear cell RCCs were underrepresented [23].
Here, we demonstrated that the major RCC subtypes show significant differences in PSMA expression on protein level, with cpRCC and pRCC showing significantly less frequent and weaker PSMA immunoreactivity than ccRCC. This may be the reason for the lower diagnostic sensitivity of PSMA-PET/CT in non-clear cell RCC.
In daily practice, one major unmet need is to discriminate between malignant and benign renal lesions to avoid unnecessary biopsies or overtreatment.While we focused exclusively on malignant renal lesions in our study, preliminary research results suggest that PSMA-PET/CT could be helpful in the discrimination of benign and malignant renal neoplasms.For example, an early pioneer study by Baccala et al. reported positive PSMA staining in 76.2% of ccRCCs and only 52.6% of oncocytomas [22].In a comparable study, 80% of ccRCC and only 30% of oncocytomas showed neovasculature with positive PSMA staining [21].However, only a few cases were investigated, and the difference in staining intensity was not statistically significant.Nevertheless, these initial results are promising and suggest that PSMA expression may aid in the distinction between malignant and benign renal lesions.Thus, further studies are needed to investigate the PSMA expression in benign renal lesions.
Besides tumor dignity, treatment decisions in RCC patients are mainly guided by tumor stage and grade, but radiomorphologic correlates for these parameters are currently lacking.Studies have shown that PSMA expression leads to neoangiogenesis in different tumor entities, which is an important factor for tumor progression [36,37].Consequently, PSMA expression is associated with more malignant tumor behavior, higher tumor stages, and a worse prognosis in prostate cancer, squamous cell cancer, and breast cancer [38][39][40].In our study with RCCs, we found no difference in PSMA intensity according to clinicopathological parameters.All ccRCCs showed strong PSMA intensity regardless of stage (T1, T2, T3) or grading (G1, G2, G3).Similarly, the vast majority of pRCCs showed weak PSMA expression regardless of stage or grading, and likewise, no corresponding correlation was found in cpRCCs.Nevertheless, the significance of those results is limited due to the size of the individual subgroups.In addition, our cohort did not include T4 or G4 carcinomas.Therefore, to clarify whether PSMA-PET/CT can aid in the accurate assessment of tumor size and grading in RCCs, further studies are needed.
Another point of great clinical importance is the detection rate of metastases at the initial diagnosis of RCC.Depending on the tumor stage, different therapeutic regimens can be considered for the patient, leading to either local tumor therapy alone or systemic combination therapies with a higher risk of associated toxicities [2].In mRCC patients undergoing TKI or ICI therapy, reliable imaging methods are critical for response assessment or detection of disease progression.In this setting, PSMA-PET/CT could be a highly promising method because this molecular imaging method seems to be able to detect disease progression earlier than conventional imaging [15].Our results suggest that the use of PSMA-PET/CT could be particularly useful in patients with ccRCC and, to a lesser extent, in cpRCC.
PSMA is a transmembrane protein and consists of 19 intracellular, 24 transmembrane, and 707 extracellular amino acids.The protein is responsible for various different enzymatic activities, although its precise function is still not fully understood.To date, it is known that PSMA contributes to tumor progression in a number of ways.For example, PSMA acts as a folate hydrolase, breaks down polyglutamate folate chains, and enables the uptake of monoglutamate folate [41].The increased cellular folate uptake is an essential component for enhanced nucleic acid synthesis by dysregulated tumor cells [40,42].Consequently, PSMA-positive prostate cancer cells were shown to have a greater invasive potential [42], and breast cancer cells with downregulated PSMA expression exhibited decreased cell proliferation and migration, suggesting that PSMA contributes to carcinogenesis and metastasis [43].In addition, PSMA plays a key role in the neoangiogenesis of solid tumors.PSMA inhibition, knockdown, or deficiency led to abrogation of angiogenesis [44].Against this background, it is readily explained why therapy directed against PSMA can halt tumor progression in patients with metastatic prostate cancer.
Similar to this, PSMA radioligand therapy could also be an option for end-stage mRCC patients for whom there are no other treatment options [8,11].As with all therapeutic modalities, a trade-off between cancer control and therapy-associated side effects is necessary.Studies to date have indicated a low risk of nephrotoxicity with the use of 177 Lu-PSMA-617 for the therapy of hormone-refractory metastatic prostate cancer [45,46].Nevertheless, nephrotoxicity and tubulointerstitial nephritis are possible side effects, and renal function should be monitored closely during PSMA therapy [47].As a possible pathogenetic cause, we found PSMA expression in the epithelial cells of the (proximal) tubule system, which is consistent with previous findings in the literature [48].In addition, PSMA expression has been described in salivary glands, the brain, and small intestinal tissue, which is why treating physicians should be alert for side effects in these organs when using PSMA-labeled radioligands [31,48].
Given potential side effects and high therapy costs, it is critical to identify those patients who would benefit most from PSMA theranostics.Therefore, a profound knowledge of PSMA expression in normal kidney tissue and histopathological subtypes of RCC is important.The expression of PSMA in RCCs has been investigated by immunohistochemistry in some studies, mainly including ccRCC and pRCC [21][22][23][24].Here, in agreement with our results, PSMA expression was found to be vigorous in the majority of ccRCCs, whereas PSMA expression was rarely detected in pRCCs.Moreover, a significant association between high PSMA expression and overall survival was demonstrated in ccRCC patients [24].
Our study was able to confirm and reproduce the pattern and tendency in PSMA expression of previous studies.However, those studies used internal domain-binding antibodies, which limits the clinical applications because antibody-bound radioligands exert their therapeutic effects mainly through extracellular antigens [22,36,41].
It is also possible that different PSMA-targeting antibodies recognize and bind different splicing forms of PSMA [14].Therefore, in our study, we used an extracellular epitope PSMA targeting antibody (3E6).
Data on PSMA expression in cpRCC have also been reported, but the sample sizes were considerably smaller, and thus, it is not clear to date whether cpRCC patients may also benefit from PSMA theranostics.However, especially in metastatic cpRCC, PSMA radioligand therapy is of high clinical interest because there are currently no established treatment options for this rare form of RCC [49].In our study, we detected PSMA expression in 50% of cpRCC cases, demonstrating a rationale for PSMA theranostics in this subtype of RCC, too.
PSMA-targeted endoradiotherapy could be used to enhance the response to immunotherapy via the abscopal effect in RCC due to its high immunoresponsiveness.The abscopal effect is a phenomenon where localized radiation therapy can trigger an immune response throughout the body [50,51].Additionally, the cross-fire effect and the radiation-induced bystander effect (RIBE) have been discussed as potential mechanisms for the efficacy of therapeutic radioligand therapy.The cross-fire effect is achieved by particle-induced destruction of multiple cells in the neighborhood of a tracer accumulating cell.This mechanism helps to compensate for the heterogeneity seen in malignant tumors.Correspondingly, RIBE is a phenomenon in which cells that are not directly exposed to ionizing radiation behave as if they have been exposed [52].These mechanisms are of particular interest for potential theranostic applications, as PSMA is found mostly on the neovasculature of renal neoplasms in contrast to prostate tumors, where it is mainly expressed by the carcinoma cells.
However, clinical trials involving α- or β-emitter-radiolabeled PSMA-targeted ligand therapy in RCC have yet to be conducted.
Our study is limited because we focused on the three major histopathological subtypes of RCC; other rare RCC entities and benign renal neoplasms, such as oncocytoma, were not included. Future studies should be performed to investigate PSMA expression in those other RCC subtypes. Also, the intensity of PSMA expression could be correlated with the respective T stage of the RCC entity. In addition, in vivo and in vitro autoradiography binding studies are needed to accurately determine the binding affinity of PSMA-targeted radioligands in RCC samples. Further (multicenter) studies could also focus on the change of PSMA expression under systemic therapy and investigate possible differences in PSMA expression between tissue samples from primary tumors and metastases, which may provide further insights into the role of PSMA in tumor progression and metastasis.
Conclusions
Based on our immunohistochemical study, we found statistically significant differences in PSMA expression patterns within the tumor neovasculature of ccRCC, cpRCC, and pRCC. The observed variations in PSMA expression underscore the potential for PSMA-targeted theranostic approaches. Given that ccRCC is the most prevalent subtype of RCC, the ability to selectively target PSMA could have a substantial clinical impact. PSMA-targeted therapies, including radioligand therapy, have already shown significant potential in the treatment of prostate cancer.
While our findings are promising, further research is required to translate these results into practical applications.Further prospective (multicenter) studies are needed to validate our findings across larger and more diverse patient populations.These studies can help assess the diagnostic and therapeutic potential of PSMA radioligands with greater precision and provide essential data for clinical implementation.
Table 1. Clinical and pathological characteristics.
Table 2. PSMA expression in renal cell carcinoma. | 2023-11-22T16:19:45.926Z | 2023-11-01T00:00:00.000 | {
"year": 2023,
"sha1": "01c53f4c248fb0da7b82d08f49f0880bb638521d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9059/11/11/3095/pdf?version=1700467648",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d0227b6cdc02cd7fe26f29a285491bf90b7f509c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259234313 | pes2o/s2orc | v3-fos-license | Discharge coefficient and energy dissipation on stepped weir
High volumes of kinetic energy are generated as water is transported to the dam downstream. Stepped weirs are among the most effective structures for lowering the kinetic energy of a flow traveling downstream. In stepped weirs, the steps' design can impact how much of the flow's kinetic energy is transferred downstream. Because these weirs can dissipate more power, pooled designs have recently become more common than smooth ones. Therefore, this work investigated the impact of sills at the end edges of the steps and of the discharge values on flow patterns, particularly energy dissipation. Seventy-five experiments were conducted using five models with a slope angle of 35°, different step dimensions, different numbers of steps (14, 10, 7, 5, and 3), and different discharges. Three step configurations were used: flat, fully pooled, and zigzag pooled steps. The results indicated that increasing the number of steps increased the energy dissipation rate. In addition, an increase in the discharge leads to an increase in the discharge coefficient and thus decreases the energy dissipation rate. A coefficient of determination R² of 0.73 gives a good agreement for the predicted discharge coefficient.
Introduction
According to the needs and characteristics of the area, many hydraulic structures are built in open channels [1]. One of these structures, the weir, is used to measure discharge and the depth of rising water in irrigation channels [2][3][4]. The weir channel is typically one of a dam's most essential components, as it offers a practical and secure method of transferring flood flows to the region downstream of the barrier [5]. Stepped weirs have become common hydraulic structures in recent years, with steps on their faces running from close to the crest to the toe [6]. The steps substantially accelerate the rate of energy dissipation on the weir surface. The stilling basin length should be as small as possible to minimize the required downstream energy dissipation basin size. However, using a stepped weir can reduce cavitation risk by boosting self-aerated flow compared to more traditional smooth weirs [7][8][9][10]. According to the discharge and the stepped weir's dimensions, flow over a stepped weir can generally be classified into three flow patterns: nappe, transition, and skimming. The distinct features of each flow pattern make the flow regime an essential consideration in the design of stepped spillways [11,12]. The first, a sequence of small successive falls, occurs for low discharge flow rates and/or large step lengths. The transition flow regime happens for various intermediate discharges when transitioning from the nappe flow to the skimming flow [13]. This regime's most distinguishing feature is the presence of stagnation on the horizontal step faces and significant splashing. A skimming flow regime is formed when the flowing water completely submerges the steps; horizontal-axis recirculation zones are usually formed between the outer edges of the steps, which occurs for sufficiently high flow rates and short step lengths [14].
Since each step serves as a small stilling basin for low discharges, most of the flow energy is dissipated over the steps [15].
The research focuses on identifying the relationship between the discharge coefficient, energy dissipation, and relative discharges, and on how the arrangement of the steps affects the discharge coefficient and energy loss. Energy dissipation is evaluated between the entrance section, with energy E_0, and any section of interest, with energy E, as shown in Figure 1.
The section at the last step is superimposed on the datum. The energy E_1 combines the velocity head and the flow depth, which are measured vertically from the datum.
The energy loss ΔE is the difference between the energy at the entrance section E_0 and the energy at the exit section E_1. The relative energy dissipation ΔE/E_0 is one of the dimensionless parameters frequently used to study the energy dissipation characteristics.
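A minimal sketch of this bookkeeping is given below; the specific-energy form E = y + V²/2g is the standard open-channel definition, and the depths and velocities are invented for illustration rather than taken from the experiments.

```python
G = 9.81  # gravitational acceleration, m/s^2

def specific_energy(depth_m: float, velocity_m_s: float) -> float:
    """Specific energy of a flow section: depth plus velocity head."""
    return depth_m + velocity_m_s ** 2 / (2 * G)

def relative_energy_dissipation(e0: float, e1: float) -> float:
    """Dimensionless energy dissipation dE/E_0 between entrance and exit sections."""
    return (e0 - e1) / e0

# Hypothetical sections, with the datum taken at the weir toe.
weir_height = 0.35  # m
E0 = weir_height + specific_energy(depth_m=0.10, velocity_m_s=0.4)  # entrance section
E1 = specific_energy(depth_m=0.03, velocity_m_s=1.8)                # exit section at the toe
print(f"dE/E0 = {relative_energy_dissipation(E0, E1):.2f}")
```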
Dimensional analysis
Energy dissipation of the hydraulic jump downstream of a stepped weir is affected by several factors, including the geometric characteristics of the stepped weir (such as the width of the stepped weir [W], the height of the weir [H_dam], and the slope of the weir [θ]) and the length, height, and number of steps (l_s, h_s, and N_s, respectively). The flow characteristics are also affected by kinematic and fluid properties such as gravitational acceleration (g), velocity (V), dynamic viscosity (μ), mass density (ρ), surface tension (σ), and critical flow depth (y_c).
Thus, the energy dissipation of the flow is a function of these variables. By grouping the variables into non-dimensional parameters according to Buckingham's π theorem, their number can be reduced, and equation (3) expresses the relationship in dimensionless form. Dimensionless characteristics are necessary for the investigation of energy dissipation in spillways.
Discharge coefficient
Horton [16] proposed that the discharge coefficient, C_d, is directly dependent on the upstream head (y_0)-to-crest length (L_c) ratio, y_0/L_c, and that viscosity and surface tension effects may be disregarded if y_0 > 30 mm. Singer [17] proposed using y_0/L_c to classify flow over weirs with finite crest lengths. Standing waves on the weir for y_0/L_c < 0.08 indicated that surface tension and viscosity effects might need to be considered in this range. Weirs with finite crest lengths are classified into four groups based on y_0/L_c.
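As an illustration of such a classification, the sketch below sorts a weir by the y_0/L_c ratio; the four group names and the cut-off values are commonly quoted in the finite-crest-length weir literature and are given here as assumptions, since the original grouping is not reproduced in this excerpt.

```python
def crest_type(y0_over_Lc: float) -> str:
    """Classify a finite-crest-length weir by the upstream head-to-crest
    length ratio y_0/L_c. Cut-off values are assumed, not taken from this paper."""
    if y0_over_Lc <= 0.1:
        return "long-crested"
    if y0_over_Lc <= 0.4:
        return "broad-crested"
    if y0_over_Lc <= 1.5:
        return "short-crested"
    return "sharp-crested"

# The present study reports 0.5 < y_0/L_c < 1.3, i.e., a short-crested weir.
print(crest_type(0.5), crest_type(1.3))  # short-crested short-crested
```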
Experimental work
All experiments were performed at the hydraulic laboratory of the Middle Technical University at Kut Technical Institute in Iraq, as shown in Figure 2 [18], using a laboratory flume 12 m long, 50 cm high, and 50 cm wide. The flume obtains water from a permanent upper tank through a 6-inch pump tube, which is located at a 90° angle at the water outlet from the upper tank. Point gauges mounted on three carriages in the channel are used to measure the water level with an accuracy of 0.5 mm [19]. The examined stepped weir models were created from foam, as seen in Figure 3. The total height, width, length, crest length, and slope of the weir are the same for all models and are 35 cm, 50 cm, 50 cm, 10 cm, and 35°, respectively. Each model had a different length, height, and number of steps. Five discharges (7, 12, 15, 20, and 25 L/s) were applied to each model, giving 75 experiments in the free-flow state, as indicated in more detail in Table 1. The study investigated three step configurations: flat, pooled, and zigzag. The step dimensions are given by h_s/l_s, where h_s is the step height and l_s is the horizontal step length. For the pooled steps, the height (h_p) of the end-sill was 1.5 cm and the length (l_p) was 1 cm. The numbers of steps (N_s) used were 14, 10, 7, 5, and 3.
Discussions and analysis of results
Presented here are the laboratory observations of the effect of step number, step geometry, step end-sill shape, and discharge on the energy dissipation over the steps, together with the relationships between the energy dissipation and the critical flow depth-to-step height ratio (y_c/h_s), and between the discharge coefficient C_d and the step length-to-critical water depth ratio (l_s/y_c). The stepped weirs showed three flow patterns: the nappe flow, the transition flow, and the skimming flow, as shown in Figure 4(a-c), respectively.
The nappe flow occurred in the 3-step model, which has a large step height, at low discharges, while the transition flow occurred at intermediate discharges in the 5- and 7-step models.
The skimming flow occurred in the 14- and 10-step models, which have small step heights, at all discharges.
The flow energy dissipated over the stepped weir with 14, 10, 7, 5, and 3 steps was depicted as a function of the critical flow depth (y_c) and the step height (h_s), combined as the dimensionless parameter (y_c/h_s). It should be noted that the rate of energy dissipation increases as this ratio decreases, as shown in Figure 5, which agrees with previous researchers such as Jahad et al. [20].
The results additionally showed the relationship between the step length-to-critical flow depth ratio (l_s/y_c) and the energy dissipation ratio (E%), as shown in Figure 6. Because a longer step enlarges the protrusion and hence creates a more gradual flow, the percentage of flow energy dissipation was found to increase as the ratio of step length to critical water depth increases. The fully pooled step achieves the highest dissipation of the flow energy. This is supported by numerous researchers' findings, including those of Nasiralla AL-Talib et al. [21].
The pooled steps dissipated more energy than the flat and zigzag pooled steps. With a full end-sill on the steps, the relative energy loss rises by 5%, since the characteristic height of the end-sill increases the amount of water trapped. The hydraulic jump and the impact of the jet on the step face account for the majority of energy loss in the nappe flow, so the characteristic height has less of an effect there. However, the characteristic height has a more significant influence on the transition flow than on the nappe flow, and its impact on the skimming flow is evident.
According to the ratio (y_0/L_c) mentioned previously, the weir in this study is a short-crested weir, because the ratio for the current study ranges between 0.5 and 1.3 (0.5 < y_0/L_c < 1.3), and equation (5) was applied to calculate C_d. The C_d values range from 0.7 to 1.25 and are affected by the ratio y_0/L_c, as shown in Figure 7. In addition, an increase in the discharge leads to an increase in the discharge coefficient, as shown in Figure 8, and thus the rate of energy dissipation decreases, as shown in Figure 9, which is consistent with previous studies [22]. Based on the experimental data, the discharge coefficient for the stepped weir was predicted with a coefficient of determination R² of 0.73.
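Since equation (5) is not reproduced in this excerpt, the sketch below back-calculates C_d from a measured discharge using the standard rectangular weir head-discharge relation as a stand-in; the relation and the example measurement are assumptions for illustration only.

```python
import math

G = 9.81  # m/s^2

def discharge_coefficient(Q_m3_s: float, width_m: float, head_m: float) -> float:
    """Back-calculate C_d from Q = C_d * (2/3)**1.5 * sqrt(g) * b * H**1.5,
    a generic rectangular weir relation used here only for illustration."""
    return Q_m3_s / ((2 / 3) ** 1.5 * math.sqrt(G) * width_m * head_m ** 1.5)

# Hypothetical measurement: 15 L/s over the 0.5 m wide crest with 0.08 m upstream head.
print(round(discharge_coefficient(0.015, 0.5, 0.08), 2))  # about 0.78, within the reported 0.7-1.25 range
```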
Conclusions
These tests showed that as the number of steps increased, the relative dissipation of energy increased due to the high level of roughness of the steps, which increased friction and caused the conversion of kinetic energy into thermal energy. Fully pooled steps dissipate more energy than zigzag pooled steps. Also, more discharge leads to a larger discharge coefficient, which reduces the amount of energy dissipation. | 2023-06-24T13:10:22.789Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "ce96c7f4253a9684cfe77e552d366d79593a0fdd",
"oa_license": "CCBY",
"oa_url": "https://www.degruyter.com/document/doi/10.1515/eng-2022-0427/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "61d9bce73fa9e8a0b9a27ce75f6c4d2bb972f0f9",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
} |
73426427 | pes2o/s2orc | v3-fos-license | Establishment of Classification of Tibial Plateau Fracture Associated with Proximal Fibular Fracture
Objective The purpose of this retrospective study was to determine the incidence of fibular fractures as an associated injury in tibial plateau fractures according to CT scans. We also attempted to introduce a new morphological sub-classification of this associated injury and to analyze the correlation between this classification and tibial plateau fractures. Methods We selected cases with fibular fractures from all the tibial plateau fracture patients. The cases were further divided into 2 groups: a unicondylar group and a bicondylar group. On the basis of our new classification system of fibular fractures, all the included cases were divided into 5 subgroups. Results Finally, a total of 150 cases associated with fibular fractures among 502 tibial plateau fracture cases were identified from our institution database. The incidence of fibular head fracture in tibial plateau fractures was 29.88% (150/502). Seventy-one cases (47.3%) involved one condyle, and 79 cases (52.7%) involved both. A significant difference was found in the subgroup of avulsion fractures with a horizontal fracture line (Type A), which accounted for 16.9% of the unicondylar group and 1.27% of the bicondylar group. Conclusion A new classification of this associated injury describing the morphology of the fracture fragments may improve operative planning.
Tibial plateau fractures are common injuries which constitute 1.6% of all fractures from both high-energy and low-energy trauma 1 . Hence, there are several fracture patterns. A number of classification systems have been proposed to categorize the fracture patterns, simplify communication in clinical practice, and provide guidelines for preoperative planning. The current widely recognized systems are the Schatzker classification and the OTA/AO classification 2 .
Tibial plateau fractures, especially comminuted fractures (Schatzker V and VI), are always associated with serious injuries. These injuries, such as compartment syndrome, neurovascular injury, and ligamentous disruption, have been widely reported [3][4][5][6][7] . Nevertheless, fibular fractures tend to be neglected in the literature as an associated injury of tibial plateau fractures. Zhu et al. report an incidence of fibular head fractures in bicondylar tibial plateau fractures of 63.41% 8 . The proximal fibular zone comprises ligaments, tendons, the common peroneal nerve, and the tibiofibular bony structure. Anatomically, the lateral collateral ligament and the tendon of the long head of the biceps femoris muscle are attached to the lateral margin of the fibular head. In addition, the popliteofibular, arcuate, and fabellofibular ligaments attached to the fibular styloid process constitute the arcuate complex, which contributes to the posterolateral stability of the knee 9 . Displaced proximal fibular fractures can cause knee posterolateral complex (PLC) injuries, often creating obvious posterolateral instability and external rotation instability [10][11][12] . There are many tibial fracture classifications, but none of them includes a classification for proximal fibular fractures. In our clinical practice, we have always recognized the importance of fibular fractures, which has helped us to treat these fractures more easily. Therefore, we wanted to conduct a study on tibial plateau fractures associated with proximal fibular fractures. The purposes of this study are: (i) to investigate the incidence and morphology of fibular fractures as an associated injury in tibial plateau fractures; (ii) to further clarify the importance of fibular fractures; and (iii) to introduce a new classification of tibial plateau fractures associated with proximal fibular fractures based on CT scans.
Materials and Methods
Data were collected by reviewing all the patients who had been hospitalized for tibial plateau fractures in our trauma center between January 2010 and December 2014.
Inclusion criteria: (i) patients suffering tibial plateau fractures associated with proximal fibular fractures; (ii) patients who underwent anterior-posterior and lateral X-ray films as well as CT films; and (iii) patients with well-documented medical records.
Exclusion criteria: (i) pathologic fractures or old fractures; (ii) patients without X-ray films or CT films; and (iii) patients younger than 12 years of age.
After exclusion, a total of 150 cases associated with fibular fractures in 502 tibial plateau fracture cases were identified from our institution database.
Three resident surgeons were trained to view all the radiographs on a picture archiving and communication system (PACS). They first evaluated the radiographs to determine the type of tibial fracture and fibular fracture separately and then reached a consensus together. All the tibial plateau fractures were classified on the basis of the Schatzker classification system. The cases were divided into two groups according to the number of condyles involved. Then the cases were further divided into five subgroups on the basis of fibular fracture type.
Classification of Proximal Fibular Fracture
Our study group originally proposed a new classification system for fibular fractures, especially for cases that were additional to tibial plateau fractures. In this new classification system, fibular fractures are further divided into five subgroups according to fracture line and degree of comminution: (A) avulsion fibular head fracture with horizontal fracture line; (B) fibular head cleavage fracture with oblique fracture line penetrating into the fibular head; (C) fibular head depressed fracture, obviously depressed without cleavage on CT; (D) fibular head comminuted fracture with more than two fragments on CT; and (E) fibular neck or shaft fracture (Fig. 1).
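For tallying cases against this scheme, the classification can be written down directly as a small data structure; a minimal Python sketch with a hypothetical case list is shown below.

    from enum import Enum
    from collections import Counter

    class FibularFractureType(Enum):
        A = "avulsion fracture of the fibular head with a horizontal fracture line"
        B = "cleavage fracture with an oblique line penetrating into the fibular head"
        C = "depressed fibular head fracture without cleavage on CT"
        D = "comminuted fibular head fracture with more than two fragments on CT"
        E = "fibular neck or shaft fracture"

    # Hypothetical assignments made from CT review: (case id, assigned type)
    cases = [(1, FibularFractureType.B), (2, FibularFractureType.D), (3, FibularFractureType.A)]
    print(Counter(t.name for _, t in cases))  # e.g. Counter({'B': 1, 'D': 1, 'A': 1})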
Demographic Data
A total of 150 cases were included in the study. There were 92 males and 58 females; the sex ratio was 1.60:1. The average age was 51 years (range, 14-78 years), including 1 case in the 11-20 years group, 18 cases in the 21-30 years group, 23 cases in the 31-40 years group, 29 cases in the 41-50 years group, 45 cases in the 51-60 years group, 25 cases in the 61-70 years group, and 9 cases in the 71-80 years group. A total of 68 cases were left knee injuries and 82 cases were right knee injuries. The mechanism of injury was a traffic accident in 69 cases, a fall from a height in 45 cases, a simple fall in 21 cases, an athletic injury in 7 cases, a crashing injury in 6 cases, a crush injury in 1 case, and a kick by a domestic animal in 1 case.
Fracture Patterns and Classification
The incidence of fibular fractures in tibial plateau fractures was 29.88% (150/502); 71 cases (47.3%) involved one condyle and 79 cases (52.7%) involved both. The most common pattern in these case series was a split fracture with an oblique fracture line (type B, 32.67%, 49/150). The second most common pattern was a comminuted fracture (type D, 31.33%, 47/150). In the unicondylar group, the most common pattern was a split fracture with an oblique fracture line (type B, 38.03%, 27/71). In the bicondylar group, the most common pattern was a comminuted fracture (type D, 46.84%, 37/79). There was significant difference in the subgroup of avulsion fractures with a horizontal fracture line (Type A), with a ratio of 16.9% in the unicondylar group and 1.27% in the bicondylar group.
Details of the results of the fracture patterns and classification are summarized in Table 1.
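The Type A comparison can be reproduced from the counts implied by these percentages (12/71 in the unicondylar group vs 1/79 in the bicondylar group); the paper does not state which statistical test was used, so Fisher's exact test is shown below only as one reasonable choice, in a short Python sketch.

    from scipy.stats import fisher_exact

    table = [[12, 71 - 12],   # unicondylar group: Type A vs other fibular fracture types
             [1,  79 - 1]]    # bicondylar group:  Type A vs other fibular fracture types
    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")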
Discussion
Tibial plateau fractures can range from a simple split fracture to a comminuted fracture, such as bicondylar injuries, which can result in severe soft tissue injuries 13 . Several authors report that severe fractures are associated with injuries such as collateral ligament injuries, compartment syndromes, meniscus injuries, and popliteal vascular injuries [14][15][16][17][18] . However, fibular fractures as an associated injury of tibial plateau fractures have rarely been reported. In our study, the incidence of fibular fractures in tibial plateau fractures was 29.88% (150/502), which is similar to the incidence of cruciate ligament and collateral ligament injuries.
We suppose that the fracture patterns are the result of the injury mechanism. As the axial impact load is transmitted from the femoral to the tibial articular surface, the lateral plateau depression fracture (S3) might be caused by the force during a valgus moment. The fibular head depression (Type C) might be produced at the same time as the tibial plateau is depressed over its posterolateral area. As the magnitude of the generated force increases, or when a shearing force compresses the fibular head, a split fracture (Type B) may occur in association with a lateral plateau fracture (S1, S2). While a mixed force on the tibial plateau is always the reason for mixed fractures (S5, S6), fibular head comminuted fractures (Type D, Type E) always occur with bicondylar fractures and diastasis. The avulsion fibular head fracture (Type A) is a special case. This pattern always occurs as a result of an injury producing excessive varus forces coupled with axial loading (S4). In our study, 12 cases of avulsion fibular head fracture (Type A, 92.31%, 12/13) were associated with medial tibial plateau fractures (S4). Huang et al. reported that the incidence of avulsion fractures of the fibular head was 0.6%, and the typical location of the avulsed osseous fragment was adjacent to the posterolateral rim of the tibial plateau 9 . Capps maintained that the avulsion fracture of the fibular head was commonly associated with proximal tibia fractures, as was injury to the biceps femoris tendon and lateral collateral ligament, with evidence of an avulsed bone fragment originating from the site of attachment of the lateral collateral ligament or the tendon of the biceps femoris muscle on MRI 10 .
Fig. 1 The new classification system of fibular fractures.
The integrity of the fibular head is important for the stability of the posterolateral corner (PLC) of the knee. In addition, the separation of the proximal tibiofibular joint in comminuted fibular head fractures (Type D in our study) and in bicondylar tibial plateau fractures may cause instability of the PLC. According to Capps et al. 8 , fibular head fractures are an easily missed injury of the knee. The importance of vascular repair was emphasized by Green et al. 11 , who report a high incidence (32%) of injuries to the popliteal artery that accompany PLC injuries. However, in our study, no vascular injuries were observed. Ross et al. suggest that the lesions should be repaired within 2 weeks to restore the stability of the posterolateral corner 19 . Diagnosis of the separation of the proximal tibiofibular joint is difficult because it requires an awareness of the injury and elicitation of the mechanism of injury. Certainly, injuries of the PLC were not evaluated by arthroscopy in our study, and, thus, their overall impact on the clinical outcomes, if any, is unknown.
Whether surgical intervention for fibular fractures should be undertaken is highly controversial. Zhang presented a case of a 45-year-old man with a PCL injury and an arcuate avulsion fracture of the fibular head treated with reduction of the avulsed bone fragment and fixation with a suture anchor using an all-arthroscopic technique 20 . Chung reviewed 6 cases of avulsion fracture of the fibular head associated with lateral instability of the knee with an average 2-year follow-up, supporting the effectiveness of surgical reduction and fixation 21 . In our clinical setting, we suggested reducing the fibular head before the reduction of the tibial plateau fracture to prevent instability of the PLC. The surgical technique was simple, involving anterolaterally inserting one or two K-wires into the main fragment as a joystick and lifting the fibular head. The reduction is initially assessed by comparison of the radiographic views obtained with the C-arm. In our limited experience, the reduction of fibular fractures, especially split fractures (Type B), depressed fractures (Type C), and fibular shaft fractures (Type E), is helpful for the minimally invasive fixation of tibial plateau fractures and for releasing excessive local strain on the lateral tibial plateau.
This study had several limitations. We would like to conclude that fibular head fracture is predominantly associated with tibial plateau fractures. However, long-term follow up for functional outcome still needs to be obtained to further confirm the impact of fibular head fractures on PLC. Whether such a classification system will help to predict prognosis can only be determined by senior doctors in a further prospective clinical verification trial.
Conclusion
This study determined the incidence of fibular head fractures as an associated injury of tibial plateau fractures. We proposed a new classification of this associated injury, describing the morphology of the fracture fragments. This classification system may improve the understanding of fibular head fractures as an associated injury of tibial plateau fractures and help to enhance surgical planning and reduction strategies. | 2019-03-08T14:11:42.497Z | 2019-02-01T00:00:00.000 | {
"year": 2019,
"sha1": "dbb29c51e4878cee7c60d08da0ab3e4602c804df",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/os.12424",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "93ef1a1e8365d95d1d6e3e59fbb8decde932ce2c",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2435715 | pes2o/s2orc | v3-fos-license | Late Time Behaviors of an Inhomogeneous Rolling Tachyon
We study the inhomogeneous decay of an unstable D-brane in the context of a Dirac-Born-Infeld (DBI)-type effective action. We consider tachyon and electromagnetic fields that depend on time and one spatial coordinate, and an exact solution is found under an exponentially decreasing tachyon potential, $e^{-|T|/\sqrt{2}}$, which is valid for the description of the late time behavior of an unstable D-brane. Though the obtained solution contains both time and spatial dependence, the corresponding momentum density vanishes over the entire spacetime region. The solution is governed by two parameters. One adjusts the distribution of energy density in the inhomogeneous direction, and the other interpolates between the homogeneous rolling tachyon and a static configuration. As time evolves, the energy of the unstable D-brane is converted into electric flux and tachyon matter.
Introduction
Time-dependent processes in string theory have been studied intensively in recent years. Assuming that an unstable D-brane decays homogeneously, the whole decay process, in the vanishing string coupling limit g s → 0, can be described by a marginally deformed boundary conformal field theory (BCFT) [1]. The main results of this time-dependent solution, referred to as the rolling tachyon, indicate that, as time evolves, the energy density remains constant but the pressure goes to zero asymptotically.
On the other hand, spatial inhomogeneity has been another important issue. In particular, much work has been devoted to tachyon solitons, such as tachyon kinks [2,3,4,5,6] and vortices [2,7]. These solitons are interpreted as lower-dimensional D-branes on the worldvolume of the original unstable system. Thus, in order to see the dynamical formation of the lower-dimensional D-branes, it is indispensable to take the spatial inhomogeneity into account in the decay process of an unstable system.
The rolling tachyon, which is inhomogeneous along one spatial direction, was considered in BCFT [8,9,10]. The late time behavior of the resulting energy-momentum tensor is qualitatively different from the case of the homogeneous rolling tachyon. The relevant components of the energy-momentum tensor exhibit singularities at spatially periodic locations within a finite critical time. These spatial singularities were interpreted as codimension-one D-branes [8,9,10]. This subject was also considered in boundary string field theory [11] and in DBI-type effective field theory [12,13,14,15,16,17,18,19,20,21]. Ref. [12] showed that inhomogeneous solutions with a runaway tachyon potential form caustics with multi-valued regions beyond a finite critical time, and proposed that in the presence of caustics the higher derivatives of the tachyon field blow up. For this reason the DBI-type effective action is not reliable after the formation of caustics, since it was proposed as an effective action for the tachyon field in string theory in which the higher derivatives of the tachyon field are truncated [22].
Another interesting aspect in the decay process is the dynamics at the bottom of the tachyon potential. This process has two kinds of decay products, which carry the effective degrees of freedom of the original unstable D-brane, such as energy-momentum and fundamental string charge [23]. They are called tachyon matter [1,24] and string fluid [25]. In the tachyon vacuum, the dynamics of the system is characterized as the two degrees of freedom [26,16,27]. The main purpose of this paper is to explore the formation of these final states in terms of the inhomogeneous tachyon and electromagnetic fields at late time.
Let us consider the DBI-type tachyon effective action with gauge field interactions [22], described by $S = -T_p \int d^{p+1}x\, V(T)\sqrt{-\det(X_{\mu\nu})}$ (1.1), with $X_{\mu\nu} = \eta_{\mu\nu} + \partial_\mu T \partial_\nu T + F_{\mu\nu}$ (1.2), where A µ is a U(1) gauge field with field strength $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$, V (T ) is a runaway tachyon potential, and T p is the tension of an unstable Dp-brane. This action (1.1) is expected to provide a good description of an unstable D-brane in the case that the tachyon field T is large and the higher derivatives of T are small. As we have seen in the rolling tachyon solution [1], the tachyon field T goes to infinity at late times of the D-brane decay process. Thus the DBI-type effective action describes the late time behaviors of the process well. However, once we take into account the inhomogeneity of the tachyon field without gauge field interactions, as mentioned before, the DBI-type effective action becomes inadequate within a finite critical time in describing the dynamics of an unstable D-brane [12,13,14,15,17,18,20]. Even if we include constant electromagnetic fields, the singularity we encounter seems unavoidable [10].
In this paper, we suggest that the roles of spacetime-dependent electromagnetic fields are nontrivial in the unstable D-brane system. We assume that the tachyon and electromagnetic fields depend on time and one spatial coordinate under an exponentially decreasing tachyon potential, $e^{-|T|/\sqrt{2}}$. We find an exact solution in a frame with vanishing momentum density, provided by appropriate electromagnetic fields. The solution has a periodic profile along the spatial direction and alternately involves the interesting and inaccessible regions. In the interesting regions, the solution has no singularities in the time direction and describes the late time behaviors of the unstable D-brane decay.
In section 2, we describe the calculation of the exact solution for the tachyon and electromagnetic fields. In section 3, we analyze the late time behaviors of an unstable D-brane. Section 4 is devoted to conclusion.
An Exact Inhomogeneous Solution
Our purpose in this work is to understand the late time behaviors of the inhomogeneous tachyon condensation in terms of an exact solution. We take a specific frame which gives the vanishing momentum density over all space-time.
Equations of motion of the tachyon T and gauge field A µ in the action (1.1) are written as where C µν S and C µν A are the symmetric and anti-symmetric parts of the cofactor C µν of the matrix (X) µν in Eq. (1.2), and we define Conservation of energy-momentum is described by where T µν is the energy-momentum tensor. Hamiltonian density H is expressed as whereṪ ≡ ∂ 0 T , h ij = δ ij + ∂ i T ∂ j T + F ij , (i, j = 1, 2, · · · p), the conjugate momenta, Π T and Π i , for the tachyon T and gauge field A i respectively, and the conserved linear momentum, P i , associated with the translation symmetry are From now on, let us introduce an exponentially decreasing tachyon potential, where V 0 and R are arbitrary constants. We consider an ansatz for fields which live on the worldvolume of the unstable Dp-brane, and, for simplicity, turn off all other components of the gauge fields.
Then the only non-vanishing linear momentum in Eq. (2.8) is where T ′ = ∂ 1 T . Here, we choose the zero-momentum frame due to cancelation of the effects of tachyon and electromagnetic fields, Additionally the determinant X in Eq. (1.2) under condition (2.12) is factored as Conservation of energy-momentum, ∂ µ T µν = 0, under the condition (2.12) leads to an observation that the energy density T 00 has only spatial dependence and T 11 time dependence, i.e., (2.14) where we used the notations, t = x 0 and x = x 1 . The Eqs. (2.14) and (2.15) are rewritten by Under the tachyon potential (2.9), the equation of motion for the tachyon field (2.3) is simplified asT Using the Eqs. (2.16), (2.17), and (2. 19), we arrive at the following important results, The derivation of this equation is rather technical and therefore is recorded in Appendix. The factorization (2.16) implieṡ From the relations (2.12), (2.20), and (2.21) we get where α is an arbitrary constant. Inserting the expressions (2.21) and (2.22) into the Eq. (2.17), we obtain the first-order differential equations for f (t) and g(x), where ξ is a positive constant. Solutions for the equations (2.23) -(2.24) are given by where c 1 and c 2 are integration constants, which represent the translation symmetries along time and spatial directions respectively. Of course, the expressions (2.23) -(2.26) satisfy the 2-component of the gauge equation (2.4), Substituting the solution (2.25) into the Eq. (2.5), we find Finally we obtain an exact solution for the tachyon field by inserting the expression (2.25) into (2.16), This solution is characterized by two parameters, γ and α. We will investigate the roles of these parameters in section 3. At first glance this result seems to be unnatural since T (t, x) has the periodic divergencies in the limit cos α(x−c 2 ) → 0 due to the property of cosine function. Actually in the spatially periodic regions at the initial time t = 0 which satisfy the corresponding tachyon field T (t = 0, x) is negative. In these regions the DBI-type effective action does not provide a good description for the dynamics of an unstable Dbrane as we explained in the section 1. In order to describe the late time behaviors of the decay process of an unstable D-brane, we restrict our interest to the spatially periodic regions which correspond to the large positive value of tachyon field. We will describe the details in the next section.
Late Time Behaviors of the Decay Process
It was observed in the previous section that there is an exact solution (2.28) for the exponentially decreasing tachyon potential in momentum zero frame. Our purpose in this section is to analyze the solution (2.28) in superstring theory. Since a total charge is conserved (we will mention later in detail at subsection 3.3) and the tachyon potential has Z 2 -symmetry under T → −T , and runaway property V (±∞) = 0, we employ a tachyon potential composed of two parts, Tachyon profile is read from the Eq. (2.28) in the regions (I) and (II) by choosing the appropriate integration constants, c 1 and c 2 , where x I (x II ) represents the spatial coordinate belonging to the region (I)((II)), t 0 is some large value introduced to figure out the late time behaviors of the inhomogeneous fields.
The ranges of x I and x II are given by There are periodic regions, where the decay process of the unstable D-brane is not described well by the solution (3.31), referred as inaccessible regions, To illustrate the solution (3.31) graphically, we draw two figures for the tachyon potential V (T ) and tachyon field T (t, x) in Fig.1. The arrows in Fig.1 (b) represent growing tachyon field as time elapses. The corresponding tachyon fields spans the ranges at initial time t = 0, where Electromagnetic fields in Eq. (2.22) take the functional forms . (3.37) The configuration for the tachyon field (3.31) represents the time evolution of the spatially periodic profiles governed by two parameters, γ and α. γ adjusts the distribution of the energy density, while α is the scaling parameter of time and controls the period. We investigate the roles of the parameter α.
This analysis is also applicable to the case of the region (II). It is well-known that when the tachyon and electromagnetic fields in the DBI-type effective action (1.1) depend on only one spacetime coordinate, the results of equations of motion for the given system show that all the electromagnetic fields are constants [5,28]. In this limit, the period of magnetic field B(x) reaches to infinity but its amplitude remains finite. Therefore the results in Eqs. (3.38) represent a homogeneous rolling tachyon with almost constant magnetic field and constant energy density T 00 = T 2 p V 2 0 /γ. The pressure and tachyon matter density are (3.39) In the limit of t → ∞, we obtain the pressureless matter with constant energy density and tachyon matter density [24].
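A short numerical check of this homogeneous limit can be made directly from the DBI Lagrangian L = -V(T) sqrt(1 - Tdot^2) with V = V0 exp(-T/sqrt(2)); the Python sketch below, with initial data assumed purely for illustration, integrates the corresponding equation of motion and shows the energy density staying constant while the pressure decays toward zero.

    import numpy as np
    from scipy.integrate import solve_ivp

    V0 = 1.0
    V = lambda T: V0 * np.exp(-T / np.sqrt(2.0))   # runaway potential for T > 0
    dlogV = -1.0 / np.sqrt(2.0)                     # V'(T)/V(T) for T > 0

    def eom(t, y):
        T, Tdot = y
        return [Tdot, -(1.0 - Tdot**2) * dlogV]     # Tddot = -(1 - Tdot^2) V'/V

    sol = solve_ivp(eom, (0.0, 6.0), [0.0, 0.2], dense_output=True, rtol=1e-9)
    for t in (0.0, 2.0, 6.0):
        T, Tdot = sol.sol(t)
        rho = V(T) / np.sqrt(1.0 - Tdot**2)          # energy density (stays constant)
        p = -V(T) * np.sqrt(1.0 - Tdot**2)           # pressure (decays to zero)
        print(f"t = {t:4.1f}: rho = {rho:.4f}, p = {p:.4f}")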
α → ∞ case : Static configurations
On the other hand, when α goes to infinity, x ± 4m+1 andx ± 4m+1 reach to fixed finite values, where γ adjusts the distribution of energy density for the given static configuration. In the limit γ → 0, most of energy is localized at x ± 4m+1 . The energy stored in half period of one cycle in the limit is obtained by where x 0 = √ 1 + α 2 π/ √ 2α. Since our exact solution is valid only for large T , the decent relation (3.42) is approximately correct.
As we have worked in the above subsections 3.1 and 3.2, α is the only parameter which governs the time scale and period along spatial direction for a given γ in the solution (3.31). Thus α is an interpolating parameter between the homogeneous rolling tachyon (α = 0) and the static configuration (α → ∞).
Late time behaviors
As time elapses, the tachyon profiles (3.31) in the regions (I) and (II) grow up along the arrows in Fig.1 (b). In the interesting regions (3.32), all physical quantities can be expressed explicitly, and the unstable system evolves without singularity in our system. Non-vanishing components of energy-momentum tensor are given by (II) .
(3.45)
Since the momentum density flow is zero in the interesting regions of the solution (3.32), the initial energy density distribution is not changed in time direction. As time goes to infinity, pressure along the inhomogeneous direction, x, goes to zero exponentially [24]. T 22 component depends on t and x-coordinates for finite α. However, in α → 0 limit (homogeneous rolling tachyon limit), T 22 and T 11 share with the same behavior in time direction. The solution (3.31) also provides the expression for the electric flux density which satisfies Gauss constraint ∂ i Π i = 0 and the gauge equation (2.27), . (3.46) This expression denotes that absolute values of the electric flux and tachyon matter densities increase in the D-brane decay process. However, total string charge accumulated in the interesting regions is conserved in one period ; Combining the Eqs. (3.36), (3.43), and (3.46), we obtain the following two relations, where the Hamiltonian density H is given by Since there is no singularity in interesting regions in time direction, the obtained solution describes safely the decay processes near the tachyon vacuum (V → 0) in t → ∞ limit. As time goes to infinity, the time dependent electric fields in the regions (I) and (II) become constants with opposite sign, . (3.51) These relations in tachyon vacuum (V → 0) reproduce the well-known expression [29,26], This result leads to an intriguing observation. As we have discussed in subsection 3.2, the system gives a static configuration in the limit α → ∞. At initial time the electric field is equal to zero in Eq. (3.41). As time evolves to infinity (t/α → ∞), the electric field becomes critical andṪ is suppressed to zero, As we have seen in Eq. (3.50), the energy density is composed of three parts, such as string flux density (Π 2 ), tachyon matter density (Π T ), and tachyon potential energy. As time evolves, the contributions from Π 2 and Π T increase, while the contribution from the tachyon potential decreases. Finally the unstable D-brane disappears at the tachyon vacuum (V → 0). The resultant energy density in the tachyon vacuum is composed of two parts [25,26], . (3.55)
Conclusion
We have investigated the spatially inhomogeneous decay of an unstable D-brane in DBItype effective action. We found an exact solution under an exponentially decreasing tachyon potential. The resulting solution involves the periodic inaccessible region along the inhomogeneous direction, while the behavior in time direction is well-defined. The solution is governed by two parameters, γ and α. γ adjusts the distribution of energy density, and α is an interpolating parameter between the homogeneous rolling tachyon and the static solution.
It is well-known that the inhomogeneous rolling tachyon with a runaway type tachyon potential forms caustics with multi-valued regions beyond a finite critical time. After the critical time the unstable system may not be described by DBI-type tachyon effective action. However, as we have seen in section 3, it was possible to describe the late time behaviors of an unstable D-brane in the interesting regions due to the nontrivial roles of the spacetime dependent electromagnetic fields. Therefore our solution may open a possibility to find the caustic free tachyon field solution in a specific setting with spacetime dependent electromagnetic fields in tachyon effective field theory.
As time evolves, all physical quantities are well defined and go to the tachyon vacuum (V = 0) without developing further singularities in the interesting regions. The electric flux density, which is proportional to the tachyon matter density with a constant ratio α, increases in magnitude but has opposite signs in regions (I) and (II). They finally reach space-dependent finite configurations. As a result, the energy stored in the unstable D-brane at the initial stage is converted into that of the string fluid and the tachyon matter. Since the two interesting regions in one cycle (see Fig. 1(b)) go to different vacua (T → ∞ and T → −∞) in the t → ∞ limit, the inaccessible region between them contains a topological kink which seems to be interpreted as D(p − 1)D(p − 1) or D(p − 1)F1D(p − 1)F1 . | 2014-10-01T00:00:00.000Z | 2006-01-31T00:00:00.000 | {
"year": 2006,
"sha1": "bca2849dbc947fe6a7fbccfcf0b70d0b2ebfdbbe",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/0601236",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2299690c49a5d65036c0d198d1d539f67a4ab565",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
244104509 | pes2o/s2orc | v3-fos-license | The Economic Impact of Originator-to-Biosimilar Non-medical Switching in the Real-World Setting: A Systematic Literature Review
Introduction To save costs to the healthcare system, forced non-medical switch (NMS) policies that cut drug coverage for originator biologics and fund only less expensive biosimilars are being implemented. However, costs related to the impact of NMS on healthcare resource utilization (HCRU) must also be considered. This study aims to summarize the evidence on the economic impact of an originator-to-biosimilar NMS. Methods A systematic literature review (SLR) was conducted. Publications reporting on HCRU or costs associated with originator-to-biosimilar NMS in the real-world setting were searched in MEDLINE and EMBASE from January 2008 to February 2020. In addition to hand searching the reference lists of relevant publications and SLRs, key conference websites, PubMed, and various government sites were also searched for the 2 years preceding the search (2018–2020). Results A total of 1845 citations were identified, of which 49 were retained for data extraction. Most studies reporting on the HCRU associated with NMS reported on post-NMS HCRU alone without a comparison pre-NMS. However, four studies described a difference in HCRU (i.e., investigations pre- vs post-switch or between non-switchers vs switchers), all of which reported a relative increase in HCRU, including laboratory testing, imaging, medical visits, and hospitalizations, amongst patients who underwent an originator-to-biosimilar NMS. Most studies reporting on the costs associated with NMS reported significant savings following NMS on the basis of drug costs alone. However, four studies specifically reporting on the difference of costs following originator-to-biosimilar NMS all demonstrated an increase in HCRU-related costs associated with NMS (increase in HCRU-related costs of 4–37% or 148–2234 2020 Canadian dollars). Conclusion Amongst the studies that reported on the difference in HCRU pre- vs post-switch or between non-switchers and switchers, all showed an increase in HCRU and related costs associated with NMS, suggesting that the expected overall savings due to less costly drug prices may be reduced as a result of an increase in HCRU and its associated costs post-switch. Nevertheless, more real-world studies that include NMS-related healthcare costs in addition to drug costs are needed. Supplementary Information The online version contains supplementary material available at 10.1007/s12325-021-01951-z.
INTRODUCTION
A biologic drug is any pharmaceutical drug product whose components or precursors are manufactured in, extracted from, or synthesized from a living organism or its cells, such as humans, animals, plants, and fungal or microbial organisms [1]. Important biologic drugs include hormones, hematopoietic growth factors, thrombolytic agents, cytokines, therapeutic enzymes, and antibodies [1]. Biologics are used in the treatment of rheumatological diseases, such as rheumatoid arthritis (RA), and gastrointestinal diseases, such as Crohn's disease (CD) and ulcerative colitis (UC) [2][3][4][5]; they can also be used to treat patients suffering from other chronic conditions in the areas of dermatology, hepatology, oncology, and growth development [6][7][8][9]. For this reason, the discovery of biological therapies has made a substantial clinical impact on the Canadian healthcare system. Canada is known to have a high prevalence of many of these chronic conditions, such as UC, CD, RA, and psoriasis, with some of the highest rates reported worldwide. Additionally, as a result of an aging population, population growth, and increasing life expectancy, the incidence and prevalence of some of these conditions have been increasing in recent years [10][11][12].
While biologic drugs comprise various vital therapeutic options for patients, they can be very costly to the healthcare system. In 2018, sales of biologic drugs in Canada reached $7.7 billion, placing Canada among the top-ranked countries in terms of per capita spending [13]. Biosimilars, on the other hand, are biologic medicinal products that are highly similar to a reference biologic drug that was already authorized for sale, and are often sold at a lower price [1,[14][15][16][17]. Specifically in Canada, biosimilar drugs are sold at a reduced price that is, on average, 30% less than the price of the reference biologic [13,18]. Accordingly, in comparison to Remicade®, biosimilar infliximab drugs are associated with an approximate 30-40% decrease in the listed price [18].
Biosimilars can play a role in limiting the economic burden on the healthcare system and increasing patient access to biological treatments. Indeed, biosimilars can be offered at lower prices than the reference biologic and, in consequence, lead to price competition amongst biologic drugs [19]. Consequently, the adoption of biosimilars can help to liberate resources that could be used elsewhere by the healthcare system, such as for the reimbursement of innovative medicines [19]. A number of studies have also suggested that switching from a reference biologic to a biosimilar is not associated with any major efficacy, safety, or immunogenicity issues [19,20]. For these reasons, governments in some jurisdictions have or are planning on implementing forced nonmedical switch (NMS) policies by cutting drug coverage for reference biologics and funding only less expensive biosimilars. These NMS policies describe a plan whereby a stable patient's treatment regimen is changed for reasons other than efficacy, side effects, or adherence related to the original treatment [21]. Importantly, there has been ongoing debate as to whether or not the originator-to-biosimilar NMS is a viable option for patients that are successfully being treated with an originator biologic [21,22]. Health Canada has authorized various biosimilars for sale in Canada and provinces have already introduced reimbursement policies for the utilization of biosimilars instead of the biologic originator for new patients. British Columbia announced in May 2019 a NMS policy that is expected to reduce costs by an estimated $96.6 million over the first 3 years alone [23,24]. Specifically, while treatmentnaïve patients will receive the biosimilar at treatment initiation, the NMS policy will force patients who are currently receiving the reference biologic to switch to the biosimilar drug regardless of disease activity. In December 2019, Alberta also announced the implementation of a similar originator-to-biosimilar NMS policy, while Ontario is taking steps towards realization of a similar policy [25,26].
Although the introduction of biosimilars is expected to provide cost savings to the healthcare system, the impact of originator-tobiosimilar NMS on healthcare resource utilization (HCRU) and their associated costs is complex to assess. Importantly, biosimilars are often wrongly likened to generic drugs. Biosimilars are not generic drugs; they can never be exactly the same as their originator. Approved biosimilars are biotherapeutics that have been shown to have no clinically meaningful differences compared to their originator products. Therefore, when estimating the economic impact of originator-to-biosimilar NMS, one must consider indirect costs such as costs associated with additional healthcare resources including medical visits, laboratory tests, and phone consultations.
In 2019, Liu et al. published a systematic literature review (SLR) to retrieve studies that assessed the impact of NMS on HCRU and costs and found that the true economic impact of originator-to-biosimilar NMS remains uncertain as the focus of most studies remains on drug costs [27]. Liu et al. also concluded that more real-world studies focused on drug costs as well as the additional costs associated with HCRU are needed in order to accurately evaluate the overall economic impact of originator-tobiosimilar NMS. Considering the rapidly changing regulatory and market access framework for biosimilars, there are potentially several key studies reporting real-world data on originator-to-biosimilar NMS that have recently been published or presented at recent conferences. Consequently, an updated SLR on this topic, specifically in a real-world setting, is needed to provide more current evidence on the economic impact of introducing such NMS policies in Canada. Accordingly, the objective of this SLR was to systematically identify studies evaluating the HCRU or costs associated with originator-to-biosimilar NMS in the real-world setting.
Study Identification
The literature search was performed in the MEDLINE and EMBASE databases using relevant keywords to identify published studies and conference proceedings reporting data associated with HCRU or costs associated with originator-to-biosimilar NMS, from January 2008 until the time of the search (March 3, 2020). For MEDLINE and EMBASE, in order to better align with the precise SLR objective, a search filter was developed and based on the Canadian Agency for Drugs and Technologies in Health (CADTH) for economic evaluations/cost/economic models as well as the recent publication by Lui et al. (2019) in the Cochrane database of systematic reviews, entitled ''Search strategies to identify observational studies in MEDLINE and Embase'' [28]. The developed filter was supplemented with keywords regarding treatments of interest (i.e., biosimilar, originator, etc.), studies in the real-world setting (i.e., cohort, cross-sectional, real-world, longitudinal, retrospective, etc.), various terms related to HCRU and costs (i.e., health resources, economics, cost, etc.), and NMS (i.e., switch, alternative, launch, etc.). Any additional publications were identified by hand searching reference lists of relevant publications and previously published SLRs. Full details of the literature search are presented in Appendix 1 in the electronic supplementary material.
In order to identify relevant study results that might not have been indexed by EMBASE or MEDLINE at the time of the search, key conference proceedings of disease areas that may be treated with biologics/biosimilars were consulted for the 2 years preceding the search (2018-2020). In parallel, PubMed and government sites, namely National Institute for Health and Care Excellence (NICE), CADTH, and Canadian provincial sites (ex. Institut national d'excellence en santé et en services sociaux, Ontario Health Technology Advisory Committee, etc.) were searched for relevant reports for the same period (2018-2020). For conference proceedings, PubMed, and government sites, simple search terms (e.g., biosimilar, originator, switch) were used independently. A complete list of the conference websites is presented in Appendix 2 in the electronic supplementary material.
Study Eligibility Criteria
This SLR was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [29]. The review question was established using the PICOS framework (Population, Interventions, Comparators, Outcomes, Study Design). Specifically, the study population consisted of patients who underwent an originator-tobiosimilar switch for non-medical, or presumably non-medical, reasons (i.e., patient choice, all patients switched, patients switched irrespective of disease activity, patients with stable disease switched, financial reason), with no restrictions pertaining to patient age, gender, or disease area. Interventions included any biosimilar following treatment with the reference biologic. There was no restriction on the study comparator. The outcomes of interest included HCRU and any costs associated with originator-to-biosimilar NMS. This SLR was restricted to interviews, surveys, cohort studies, database studies, and patient-reported outcomes (PRO) studies in the real-world setting. Lastly, this SLR was limited to English publications, except when searching Québec provincial sites, namely Institut national d'excellence en santé et en services sociaux, for which French publications were also included. The SLR is based on previously conducted studies and does not contain any studies with human participants or animals performed by any of the authors.
Study Selection
Two reviewers independently screened titles and abstracts for relevance. Any citation/abstract deemed relevant by either reviewer was obtained in full-text form. Full-text articles and conference abstracts were then reviewed by both reviewers independently. Any publication failing to meet the eligibility criteria was excluded. In the case of duplicated publications on the same study, the most up-to-date publication was used. Discrepancies in study selection were resolved by consensus or with the help of a third reviewer.
Data Extraction
Using a predefined extraction form, one reviewer extracted information from each eligible study, which was subsequently validated by a second reviewer to ensure accuracy. Data extracted from each publication and conference proceeding, if available, are shown in Table S1 in the electronic supplementary material. All costs were converted and inflated to 2020 Canadian dollars ($C) using the general annual consumer price index [30].
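As an illustration of this standardization step, the short Python sketch below converts a reported cost to Canadian dollars and inflates it to 2020 with an annual CPI ratio; the exchange rate and CPI values are placeholders, not the figures actually used in the review.

    def to_2020_cad(amount, exchange_rate_to_cad, cpi_report_year, cpi_2020):
        # Convert a cost reported in a foreign currency and price year to 2020 Canadian dollars
        amount_cad = amount * exchange_rate_to_cad
        return amount_cad * (cpi_2020 / cpi_report_year)

    # Example: a cost of EUR 3202 reported at 2017 prices (all rates assumed)
    print(f"{to_2020_cad(3202, exchange_rate_to_cad=1.50, cpi_report_year=130.4, cpi_2020=137.0):,.0f} 2020 CAD")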
Study Quality Assessment
The risk of bias of each individual selected study available in full-text form was assessed using the Cochrane Collaboration Risk of Bias in Non-Randomized Studies-of Interventions (ROBINS-I) [31].
Search Results
A flowchart of the selection process for the included studies is illustrated in Fig. 1. A total of 1845 studies were initially identified from the MEDLINE and EMBASE databases. After the exclusion of duplicates, 1720 studies were evaluated on the basis of title and abstract. Of them, 1242 were excluded on the basis of title selection and 425 were excluded on the basis of abstract selection. Of the 53 studies remaining, six were excluded for the following reasons: not real-world data (n = 1), not originator-tobiosimilar NMS (n = 1), no HCRU or costing data (n = 4). The search of conference websites and handsearching the reference lists of relevant publications resulted in two additional studies, namely one conference proceeding from each reference source. In total, 18 full-text publications and 31 abstracts were selected for data extraction.
Description of Included Studies
The study characteristics are described in Table 1. The majority were center-based cohort studies (n = 41); other study types comprised interviews (n = 2), physician surveys as part of a simulation or decision tree model (n = 2), postmarketing (n = 1), and database (n = 3) studies. Most of the studies were from various countries in Europe (n = 43). Of note, only one study was based in North America, specifically the USA.
The disease areas identified were primarily in rheumatology (n = 19) and gastroenterology (n = 21). Infliximab was the sole biosimilar drug investigated in gastroenterology, while studies in rheumatology included infliximab (n = 5), rituximab (n = 1), and etanercept (n = 13). The patient populations and patient follow-ups varied considerably between studies. In gastroenterology studies, the mean number of patients studied was 92.6 (range 5-313) with a mean follow-up time of 13.6 months (range 6-60 months).
In studies investigating rheumatology populations, the mean number of patients studied was 170.9 (range 25-1259) with a mean follow-up time of 9.2 months (range 4-15.8 months). Moreover, one study each was performed in dermatology (etanercept, 17 patients, 3-month follow-up), growth development (somatropin, 98 patients, followup not reported [NR]), hepatology (erythropoiesis-stimulating agent [ESA], 163 patients, 24 week follow-up), and oncology (filgrastim, 37 patients, follow-up NR). There were five included studies that either focused on multiple disease areas or did not specify the disease area. Lastly, amongst the 49 identified, eight citations reported on the costs associated with the implementation of a switch program at their center in addition to HCRU and/or costs post-NMS.
Healthcare Resource Utilization (HCRU)
Nineteen studies reported on real-world HCRU associated with originator-to-biosimilar NMS ( Table 2). Among them, 11 studies investigated gastroenterology patients, four investigated rheumatology, two investigated multiple disease or unspecified areas, and there was one study each on oncology and growth development. The majority of these studies (n = 15) reported on HCRU during the follow-up period after NMS only; therefore, as there was no comparison to a study period or patient population without NMS, it cannot be concluded whether or not the reported utilization of healthcare resources was likely due to NMS in these studies. The 11 studies investigating gastroenterology patients demonstrated that hospitalizations and surgeries were common among patients following originator-to-biosimilar NMS; however, these studies did not show that these events were more, equally, or less likely to occur following NMS as there was no comparison to a pre-switch or non-switch population.
Four studies reported on real-world HCRU associated with originator-to-biosimilar NMS by describing the difference between patients preand post-switch or between patients who switched and those who remained on the reference biologic (i.e., switchers and non-switchers, respectively). Interestingly, all four of these studies reported an increase in HCRU with originator-to-biosimilar NMS, three of which were focused on rheumatology and the other on oncology. More specifically, they found that NMS can be associated with increased medical visits, medical services such as imaging, phone consultations, and emergency room (ER) visits, in addition to hospitalizations [6,[32][33][34].
In rheumatology, Tarallo et al. (2019) reported an increase in HCRU for rheumatology patients following NMS [32]. In this study, rheumatology specialists were surveyed and reported on a total of 1259 patients who switched from the etanercept reference biologic to an etanercept biosimilar. It was found that, in comparison to non-switchers, patients who switched to the biosimilar experienced an increase in the number of various services at both 0-3 months and 4-6 months post-switch, which included blood tests, x-rays, ultrasounds, ER visits, specialist visits, and hospitalizations [32]. In line with these results, the studies by [33,34]. The difference in outpatient visits for patients with rheumatic disease associated with NMS was greater in the study by Tarallo [33]. Together, these studies demonstrated that while some healthcare resources may remain unchanged, other healthcare resources may significantly increase following originator-to-biosimilar NMS in rheumatic patients.
In an oncology study, Al Rabayah et al. (2018) also found an increase in both the frequency and duration of hospitalizations among patients who switched to a biosimilar in comparison to those who remained on the reference biologic (15.6% vs 12.6% and 7 days vs 6.4 days, respectively, follow-up period not specified) [6].
Non-medical Switching-Related Costs
Thirty-three studies reported on real-world HCRU-related and drug-related costs associated with NMS (Table 3). Among them, 13 studies investigated gastroenterology patients, 15 investigated rheumatology, three investigated unspecified or multiple disease areas, and there was one study each on dermatology and growth development. The majority of these studies reported on the savings associated with drug costs alone or the overall savings following NMS without specifying the inputs used for the calculations; however, four of these studies reported on the difference in costs between patients pre- and post-switch (n = 1) or between patients who switched and those who remained on the reference biologic (i.e., switchers and non-switchers, respectively, n = 3) [32,[35][36][37].
With regards to infliximab originator-tobiosimilar NMS, a recent publication by Huo- ]/year, a 60% reduction in drug cost), the HCRU-related costs were numerically greater in patients following NMS. While not as significant as the savings gained as a result of reduced biosimilar drug costs, the total annual secondary healthcare costs following NMS in patients with CD and UC/IBDU were €3898 ($C6183) and €2763 ($C4382), respectively, in comparison to €3202 ($C5079) and €2648 ($C4200), respectively, prior to NMS, amounting to an increase in total healthcare costs ranging from 4% to 22% [35]. Phillips 10.0% (£930 [$C1618]) per patient following NMS to etanercept biosimilar, but that the switch generated an increase in annual HCRU-related costs of £1120 ($C1948) to £1283 ($C2232) per patient, amounting to an increase of 32% to 37% in total costs per patient following NMS, which is greater than the savings attributed to drug costs alone [32]. Although biosimilars are expected to provide savings to healthcare systems, these studies suggested that, when taking into account HCRU-related costs in addition to drug costs, the overall savings associated with originator-to-biosimilar NMS are either reduced or eliminated resulting in an increase, rather than decrease, in the annual costs per patient.
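The arithmetic behind such conclusions reduces to combining the drug-cost saving with the change in HCRU-related costs; the minimal Python sketch below uses round numbers loosely patterned on the UK etanercept example above, and they are illustrative rather than the studies' exact figures.

    drug_cost_saving = 930.0   # annual per-patient drug saving after NMS (assumed, GBP)
    hcru_cost_change = 1120.0  # annual per-patient increase in HCRU-related costs (assumed, GBP)

    net_change = hcru_cost_change - drug_cost_saving
    label = "net cost increase" if net_change > 0 else "net saving"
    print(f"Net annual change per patient: {net_change:+.0f} GBP ({label})")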
Another factor that must be considered in the costs associated with originator-to-biosimilar NMS is the establishment of a switch program. Eight studies reported on the costs associated with the implementation of a switch program within their center (Table 1) [38][39][40][41][42][43][44][45]. Several of these studies, including [45], reported substantial savings that were calculated using drug costs alone (Table 3). In addition, Nisar et al. (2019) stated an overall annual savings of approximately £100,000 ($C171,725) without specifying the inputs used in the calculations [42]. The three remaining studies reported on the specific costs generated by the switch program. St Clair Jones et al. (2017) found that the savings related to yearly drug costs amounted to £224,000 ($C355,291) and an overall savings of £300,000 ($C475,836); however, in order to fund a specialist IBD nurse, the program also required a one-time fee of £1250 ($C1983) in funding per patient [39]. Rhamany et al. (2016) also reported substantial savings of more than £200,000 ($C343,450) over a 6-month period, though the authors also specified an additional staff cost of £90,000 ($C180,660) over the 6-month period associated with the program [38]. Barnes [40]. Therefore, while the implementation of switch programs is expected to provide cost savings to the healthcare system, the calculations are often based on drug costs alone. Accordingly, the inclusion of other factors, such as additional staff time and program funding, reduces the anticipated cost savings associated with originator-to-biosimilar switch programs.
Fig. 2 Risk of bias assessment
Study Quality
The general risk of bias of the included full-text articles, according to the Cochrane Collaboration ROBINS-I tool, is presented in Fig. 2. Of the 18 published journal articles, the overall risk of bias was rated as moderate for 14 citations, serious for three citations, and unclear for one citation. Of note, each citation, except for Tarallo et al. (2019), which could not be assessed for the domain, was evaluated as a moderate risk of bias for domain 6, which pertains to the measurement of outcomes. As both the patients and physicians were not blinded to treatment allocation (i.e., NMS) in a clinical setting, a moderate risk of bias was considered for most studies as it is possible that outcome measures or answers to survey questions were influenced by the knowledge of the intervention received by the patients. As the overall risk of bias is judged as moderate when a moderate risk is determined for at least one of the domains, the overall risk of bias was, consequently, considered as moderate for the majority of included studies.
DISCUSSION
While the introduction of biosimilars is expected to provide cost savings to the healthcare system, the economic impact of originator-tobiosimilar NMS is complex to assess. While highly similar, Health Canada authorization of biosimilar drugs does not signify equivalence to, or interchangeability with, the reference biologic drug [1]. Consequently, additional costs, such as those related to HCRU, in addition to drug acquisition costs, need to be taken into account when estimating the economic impact of originator-to-biosimilar NMS. In 2019, Liu et al. published a SLR evaluating the economic impact of originator-to-biosimilar NMS [27]. The authors stated that their review retrieved more data on anticipated cost estimates (i.e., generated from simulation studies) than on real-world observed post-NMS HCRU and costs. As a result, this SLR focused on realworld data in order to evaluate the economic impact of originator-to-biosimilar NMS in a real-word setting.
In the current SLR, we found many studies that focussed on savings related to drug costs alone without taking HCRU-related costs into account. Moreover, few studies investigated the difference in HCRU or costs associated with originator-to-biosimilar NMS, where findings were presented for patients both prior to and following the switch or were presented for patients who underwent NMS in comparison to patients that remained on the reference biologic. While these studies were scarce, they provided a better understanding of the savings or costs that were associated with the switch from the reference biologic to the biosimilar drug. Specifically, three studies reported on HCRU differences [6,33,34], three studies reported on cost differences [35][36][37], and one study reported on differences in both HCRU and overall costs [32]. With regards to HCRU, all studies concluded that NMS was associated with a significant or numerical increase in HCRU among patients who underwent originator-tobiosimilar NMS [6,[32][33][34]. Interestingly, there was a notable difference between studies in terms of outpatient visits associated with NMS, where this was greater for the study by Tarallo [32,36,37]. Importantly, Tarallo et al. identified blood and imaging tests, emergency visits, hospitalizations, and visits with various specialists as the primary healthcare costs leading to the increase in total patient costs following NMS [32]. Altogether, these studies suggested that post-NMS costs can, at times, be greater than the savings attributed to drug costs following a switch from the reference biologic to the biosimilar drug, such that NMS can result in an increase, rather than the anticipated decrease, in total costs per patient, at least in the short-term. Total costs per patient in the long-term following an originator-to-biosimilar NMS remain to be elucidated. Accordingly, potential long-term savings generated from an originator-to-biosimilar NMS could increase resources for the reimbursement of innovative drugs, which could be beneficial to patients. Altogether, as the patient populations of interest are dealing with chronic conditions, studies evaluating HCRU and costs in the long-term would provide much needed information. However, these analyses can prove challenging as, particularly in immunologic conditions, patients often lose response and switch to more expensive therapy, which may limit the long-term cost differences associated with NMS to a more finite time horizon.
In Canada, biosimilar drugs are sold at a reduced price that is, on average, 30% less than the price of the reference biologic [13,18], suggesting that originator-to-biosimilar NMS policies result in savings to the Canadian healthcare system. However, understanding the full economic impact of introducing originator-to-biosimilar NMS policies in Canada requires the consideration of HCRU-related costs associated with NMS as well. In order to better understand the costs associated with originator-to-biosimilar NMS in Canada, HCRU-related costs associated with NMS, based on the HCRU data retrieved from this current SLR, were estimated from a Canadian perspective [46]. Using unit costs from Canadian governmental sources and published literature, it was determined that, over a 6-month period, rheumatic patients who underwent originator-to-biosimilar NMS incurred greater HCRU-related costs, estimated at an additional $1317 per patient, compared to those who stayed on the originator biologic. In this analysis, the main drivers of the difference in costs between switchers and non-switchers were hospitalization costs and productivity loss [46].
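The structure of such a costing exercise is straightforward to illustrate: the incremental healthcare resource use per switched patient is multiplied by Canadian unit costs and summed across resource categories. The sketch below is purely hypothetical; the categories, utilization differences, and unit costs are placeholders and are not the inputs or results of the analysis reported in [46].

```python
# Hypothetical illustration of an incremental HCRU costing exercise.
# All categories, utilization deltas, and unit costs are placeholders.
unit_costs = {                      # assumed unit costs (CAD), illustrative only
    "physician_visit": 80.0,
    "hospital_day": 1500.0,
    "lab_or_imaging_test": 60.0,
}
extra_use_per_switcher = {          # assumed extra use per switched patient over 6 months
    "physician_visit": 1.5,
    "hospital_day": 0.4,
    "lab_or_imaging_test": 3.0,
}

incremental_cost = sum(extra_use_per_switcher[k] * unit_costs[k] for k in unit_costs)
print(f"Illustrative incremental HCRU-related cost per switched patient: ${incremental_cost:,.2f}")
```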
The results of the current SLR are in line with those of Liu et al. [27]. While Liu et al. found that many studies demonstrated a cost reduction associated with NMS, the authors noted that many of these same studies were largely limited to drug costs alone and did not take into consideration the costs related to HCRU. When Liu et al. isolated the real-world studies that reported on NMS-related costs, aside from drug costs alone, the authors found that originator-to-biosimilar NMS was associated with increased HCRU and HCRU-related costs. More specifically, Liu et al. emphasized three real-world database studies identified in their search, two of which pertained to the same study by Glintborg et al. [33] and the other to the conference abstract by Phillips et al. [36], both of which were also identified and highlighted in the current SLR. Liu et al. concluded by emphasizing the need for more real-world studies that include both drug costs and other NMS-related costs in order to appreciate the full economic impact of NMS in both the short and long term.
Additional factors can also have an impact on the costs associated with originator-to-biosimilar NMS. Indeed, three studies reported on the costs, aside from drug costs alone, associated with a switch program, which highlighted patient funding, program implementation, and the additional staff time required as important costing parameters that should not be overlooked [38][39][40]. In British Columbia, the Biosimilar Initiative, which supports originator-to-biosimilar NMS, encourages the reimbursement of various fees billable to the Medical Service Plan, including pharmacist and physician visit fees as well as a fee to fund the nursing staff required to support patients with gastrointestinal diseases [47][48][49]. These fees add to the overall cost of implementing a switch program. While originator-to-biosimilar switch programs may be accompanied by added costs to the healthcare system, it is noteworthy to mention that managed switch programs can be funded through a gain share agreement [44]. Specifically, a gain share agreement is a collaborative arrangement between healthcare commissioners and providers to distribute the resulting cost savings between the stakeholders so that the cost savings can be reinvested by hospitals in patient care [44]. Therefore, the short-term costs associated with a switch program may be outweighed by the long-term benefits to patients if funded through a gain share agreement.
Aside from the costs associated with a switch program, additional factors related to differences in efficacy and safety between originators and biosimilars can also have an impact on the costs associated with originator-to-biosimilar NMS. The manufacturing of biologic drugs is complex, hence Health Canada's position that an originator biologic and its biosimilar are not interchangeable [1]. After NMS, an inadequate response can lead to treatment discontinuation, which is another factor that can be associated with increased total costs, particularly when treatment discontinuation is associated with another treatment switch or adverse events (AEs) requiring medical intervention. Accordingly, Tarallo et al. (2019) determined that the total costs associated with patients who, following initial originator-to-biosimilar NMS, switched back to the reference biologic or to an alternative biologic were consistently greater than the total costs for patients who switched just once [32]. The characteristics associated with treatment discontinuation following originator-to-biosimilar NMS are presented in Table S2 in the electronic supplementary material. Biosimilar discontinuation rates were variable between studies and disease areas, ranging from 2.6% to 38.5% [32,35,36,39,42,44,45,. Switch-back rates (to the reference biologic) ranged from 0.5% to 16% [7, 32, 36, 41-44, 61, 63, 64, 67-73], while the rate of switching to an alternative drug ranged from 0.9% to 18.2% [32,35,37,39,41,42,45,50,54,55,57,61,62,64,[67][68][69][70]. Common reasons for discontinuation resulting in a switch included loss of response (LOR) [39, 44, 52-57, 59, 62, 63, 66, 69, 70], disease activity [41-43, 45, 50, 51, 65, 67-69, 72], and AEs [42-45, 51, 53-55, 57, 59-61, 63-65, 67, 69, 70, 73], all of which could be directly associated with additional treatment costs. Moreover, LOR may be addressed through dose escalation prior to discontinuation. For the studies that reported dose escalation, the rates ranged from 2.1% to 48.5% [8,39,45,51,54,56,[58][59][60][61][62]74]; however, dose reductions were also reported at a frequency of 8-21.5% [8,39,51,59]. In this study, switching and discontinuation rates for biologic originators by disease area were not captured. However, interestingly, a recent study conducted by Fitzgerald et al. indicated that patients switching from originator to biosimilar infliximab were two to three times more likely to switch to another originator biologic compared to those remaining on originator infliximab [75]. While results are variable between studies, these findings validate that, at the very least, there is the potential that a patient who undergoes NMS may subsequently undergo dose escalation, or be switched to an alternative treatment or back to the reference biologic, where multiple switches may be associated with greater total healthcare costs [32]. Along with additional costs associated with HCRU and switch programs, these added elements must also be considered in the decision to adopt an NMS policy.
Subjective reasons such as negative expectations, often referred to as the nocebo effect, can lead to biosimilar discontinuations and should also be considered as a factor that may impact the overall costs post-NMS. The nocebo effect describes negative outcomes with active treatments in the real-world clinical setting, including new or worsening symptoms and AEs, stemming from a patient's negative expectation rather than the pharmacologic action of the treatment itself [76]. The nocebo effect can reduce adherence to biosimilar treatment, particularly in the setting of NMS [22,76]. To minimize this risk, additional costs related to the education of both patients and healthcare professionals on biosimilars would be necessary. The implementation of such comprehensive education programs should also be taken into account when considering the implementation costs associated with an originator-to-biosimilar NMS policy.
Some governments have discussed and/or announced the implementation of NMS policies [23][24][25][26]. While several experts support NMS policies [77,78], others have voiced their opposition to such "forced" switches for non-medical reasons [79][80][81]. Moreover, the Canadian Association of Gastroenterology and Crohn's and Colitis Canada released a joint statement wherein they recommend against infliximab originator-to-biosimilar NMS in patients who have stable CD or UC and who are doing well on the reference biologic [82]. This opinion was formed as a result of data suggesting that switching in this setting leads to an increased risk of LOR, dose escalation, or secondary switching [82]. Importantly, various studies reporting on HCRU and/or costs post-NMS also reported biosimilar dose escalations and listed LOR as a reason behind treatment discontinuation or secondary switching in patients who were stable prior to NMS (Table S2 in the electronic supplementary material). More recently, the Institut national d'excellence en santé et en services sociaux of Québec published a report on the position of various medical societies, associations, and clinicians with regard to biologic-to-biosimilar NMS policies [83]. It was concluded that, while the use of biosimilars in treatment-naïve patients or as a substitution in patients for a medical reason is generally accepted, the implementation of an originator-to-biosimilar switch for non-medical reasons is not accepted. Accordingly, only two Canadian provinces, namely British Columbia and Alberta, have originator-to-biosimilar NMS policies currently in place. Québec clinicians agree that forcing an originator-to-biosimilar NMS carries a risk of destabilizing the patient, with the possibility of non-response or the development of significant adverse events and few remaining treatment options [83]. Altogether, the idea of forcing stable patients to switch to a biosimilar drug without a medical reason remains a matter of debate amongst expert groups.
This study is subject to some limitations. First, the studies included in the SLR were limited in number and comprised primarily conference abstracts, highlighting a need for more studies, and subsequent publication of the results, regarding HCRU and/or costs associated with originator-to-biosimilar NMS. Secondly, many of the included studies were funded by pharmaceutical companies, such that the investigated outcomes or results shown may be biased towards the affiliated drug. The variability in the methodologies used by the identified studies may limit the interpretation and generalizability of the synthesized results. Furthermore, the skewed proportion of studies considering infliximab originator-to-biosimilar NMS may limit the generalizability of the current results to other biologics. Similarly, the skewed proportion of studies investigating rheumatological or gastroenterological diseases may also limit the generalizability of the current results to other disease areas. Among the identified studies, most are conference abstracts. While conference abstracts allow for the inclusion of studies that have yet to be published, it must be noted that publications from conference proceedings have not undergone a thorough peer-review process, as is required for an article published by a journal. Moreover, as conference abstracts follow a strict word limit, there is often a lack of details and information pertaining to the study. While the risk of bias was assessed for all published journal articles, this assessment was not performed for conference abstracts, which represented most of the included studies. It must also be noted that the ROBINS-I tool, which was used for this SLR, was not considered suitable for all included journal articles. While the Newcastle-Ottawa scale is the preferred tool for the assessment of database studies, the ROBINS-I was used as the majority of included journal articles were cohort-based studies. Future research providing more real-world evidence regarding originator-to-biosimilar NMS is warranted.
CONCLUSION
This systematic literature review found that the overall economic impact of originator-to-biosimilar NMS in the real-world setting remains uncertain, as drug costs alone, without consideration of the additional HCRU associated with NMS, continue to be the focus of most economic studies. Nevertheless, among the seven studies that reported on the difference in HCRU or costs with and without NMS, all studies showed an increase in healthcare services used and HCRU-related costs associated with NMS. These findings suggest that the expected overall savings generated by an originator-to-biosimilar switch owing to less costly drug prices may be reduced because of an increase in HCRU and its associated costs post-switch. More real-world studies that include both drug costs and additional NMS-related healthcare costs are needed to better evaluate the full economic impact of NMS.
ACKNOWLEDGEMENTS
Funding. This study was funded by AbbVie Corporation. AbbVie sponsored the study, contributed to the design and to the review, approved the final version and funded the journal's Rapid Service and Open Access Fees. No author has received funding for developing the abstract. No honoraria or payments were made for authorship.
Medical Writing. The authors would like to acknowledge the participation of Christopher Vannabouathong for writing the manuscript. Christopher Vannabouathong has received funding from PeriPharm as a freelancer for providing medical writing.
Authorship. All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, take responsibility for the integrity of the work as a whole, and have given their approval for this version to be published.
Author Contributions. JL, CB, EH, KM, and JB, from PeriPharm, have participated in the study conduct, data interpretation, and the approval of the manuscript. DP and YR participated in data interpretation and the approval of the manuscript.
Prior Publication. This systematic literature review was previously presented as a poster at the annual ISPOR meeting, which was held virtually on May 17-20, 2021.
Disclosures. Jean Lachaine and Catherine
Beauchemin are partners at PeriPharm, while Erin Hillhouse, Karine Mathurin, and Joëlle Bibeau are employees at PeriPharm, a company that has served as a consultant to AbbVie and has received funding from AbbVie. Yasmine Rahal and Diana Parison are AbbVie employees and have stock/stock options.
Compliance with Ethics Guidelines. This article is based on previously conducted studies and does not contain any studies with human participants or animals performed by any of the authors.
Data Availability. Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
Open Access. This article is licensed under a Creative
Commons Attribution-Non-Commercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creativecommons.org/licenses/by-nc/4.0/. | 2021-11-15T14:37:39.950Z | 2021-11-15T00:00:00.000 | {
"year": 2021,
"sha1": "419b69878b3797268aadf26236ffa9f757970b47",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12325-021-01951-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "419b69878b3797268aadf26236ffa9f757970b47",
"s2fieldsofstudy": [
"Economics",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239343444 | pes2o/s2orc | v3-fos-license | A Rainwater Harvesting and Treatment System for Domestic Use and Human Consumption in Native Communities in Amazonas (NW Peru): Technical and Economic Validation
The inhabitants of Tunants and Yahuahua face water supply problems in terms of quantity and quality, leading to socio-environmental and health impacts in the areas. The objective of this research, therefore, is to determine the technical and economic feasibility of a proposal for a rainwater harvesting and treatment system for human consumption in the native communities. For the technical feasibility, monthly water demand per family was compared with the amount of water collected in the rainy and dry seasons. In addition, 16 physical, chemical, and microbiological parameters were evaluated at the inlet and outlet of the water system. The economic feasibility was determined by the initial investment and maintenance of the systems; with the benefits, we obtained the net present social value (NPSV), social internal rate of return (SIRR), and cost-effectiveness (CE). Technically, oxygenation and chlorination in the storage tanks allowed for water quality in physical, chemical, and microbiological aspects, according to the D.S. N° 031-2010-SA standard, in all cases. Finally, with an initial investment of S/2,600 and S/70.00 for annual maintenance of the system, it is possible to supply up to six people per family with an average daily consumption of 32.5 L per person. It is suggested that the system be used at scale in the context of native communities in north-eastern Peru.
Introduction
Safe drinking water and basic sanitation must be available, accessible, safe, acceptable, and affordable for the entire population [1].
The World Health Organisation (WHO) recommends at least 50 L per person per day of water to ensure basic hygiene and nutrition [2]. However, around the world, people die from lack of quality water, especially in rural areas (native and peasant communities) [3]. For instance, Urakusa native community in the Amazonas region (NW Peru) has no basic sanitation services (water supply for drinking and domestic use) and relies on communal silos and latrines for disposal of human waste [4]. In Amazonas, unfortunately, the province of Condorcanqui has the highest percentage of lack of both services (92.3%) [5]. The lack of basic services in rural areas (such as water), together with economic and climatic factors, directly influences chronic child malnutrition and anaemia [6]. The provision of safe drinking water for rural communities must, therefore, be a public priority. However, public projects are unsustainable due to dispersed housing, requiring costly distribution networks [7]. In this situation, rainwater harvesting, storage, and utilisation systems are of paramount importance for those populations that still do not have access to water or have shortages [8].
Thus, rainwater harvesting and storage systems have become an economical and ecological alternative [9]; yet their use has not become widespread due to their long financial return periods [10]. However, there are studies that demonstrate the feasibility of these systems. For example, in Ireland, they focused on treating rainwater to address water depletion due to massive population growth [11,12]. In Spain, a feasible predictive model was developed for rainwater harvesting in rural communities [13]. In Sydney, average annual water savings are related to annual rainfall and a positive cost/benefit ratio of rainwater storage tanks [14]. In Latin America, given the climatic conditions from northern Chile and Peru to parts of Ecuador, rainwater harvesting is also feasible [15]. Rainwater storage depends on the size of the tanks and the area, for which technical and economic considerations must be taken into account when choosing the type of storage system [8].
The quality of rainwater must be analysed, particularly in urban areas, in terms of physical, chemical, and microbiological factors, which depend on the various components suspended in the air [16]. Population growth, forest burning, and industrial expansion cause chemical modification of rainwater [17]. In that sense, it is the harvesting and treatment of rainwater that determines its use, depending on the ability to eliminate enterobacteria, viruses, protozoan cysts, and bacterial spores that can cause disease [18]. Global health depends not only on the quantity of water supplied but also on the water quality; a quarter of the world's population suffers from water-related illnesses [19]. In Urakusa, rainwater quality is poorly prioritised because of the lack of sanitation services [4]. In this sense, rainwater may be used to avoid the use of water from springs and streams, in order to preserve them, as they are threatened and highly polluted by human activities [19,20]. Rainwater treatment only makes sense if it is done properly; therefore, the most widely used disinfection method (as part of the treatment) is chlorination, due to its easy accessibility and application, as well as its high oxidant capacity expressed in the reduction of organic matter [21]. The cost-effectiveness of rainwater harvesting systems needs to be assessed in order to determine the systems' effectiveness at the user level. The economic analysis makes it possible to determine the feasibility of water production from rainwater [22,23]. Water is one of the most important and scarce commodities available to people worldwide, and Peru is no exception in this respect. Many populations are forced to drink from sources whose quality is outside the regulations (D.S. N°031-2010-SA), leading to health risks for children and adults [24]. In rural Peru, people lack access to safe drinking water; in fact, only 20.0% of the population have access to this service through the public water network [25]. One of the Sustainable Development Goals (SDGs) is to achieve universal access to safe drinking water, sanitation, and hygiene [26], and Peru is a party to these agreements.
Rainwater harvesting projects in native communities have been little studied, as have the socialisation and prior training needed for the maintenance of the systems implemented [27]. Therefore, it is necessary to implement rainwater harvesting systems in rural areas where access to drinking water is a neglected asset [19]. Based on the above, this research aims, for the first time, to technically and economically validate the rainwater harvesting and treatment system designed for mass use in two native communities (Tunants and Yahuahua) in the Amazonas region (NW Peru).
Study Area and Characterisation of Target Beneficiaries.
The study is located in two native communities inhabited by Awajún and Wampis peoples (Tunants and Yahuahua), district of Nieva, province of Condorcanqui, in the jungle of NW Peru (Figure 1). They are located at an altitude of 196 m above sea level, with an average temperature of 26 °C and an average annual rainfall of 3,121 mm [28]. The communities were created 22 years ago and had a reported population of 217 people in the 2017 census [29]. The province of Condorcanqui faces transportation barriers due to demographic dispersion and lacks access to basic needs, which include, among others, food, drinking water, and drainage [30]. Their economy is subsistence-based, with land (between 0.5 and 1 ha) dedicated to the cultivation of cassava, bananas, and maize [31]. The characterisation of beneficiaries, on which the system design was based, relied on interviews aimed at obtaining general data on the population, dwellings, and water consumption habits, and at evaluating the level of acceptance of the rainwater harvesting and treatment systems installed in these two native communities.
System Design and Installation.
Four Stratus model 6330 rain gauges were installed, one in each system (two in each native community). The construction area for setting up the systems was determined, ensuring that it met the minimum conditions for the area (place and area of the systems) and the number of users. For the tank construction, three main materials were used: iron, cement, and pipes (PVC). The supporting structure of the tank was built with a mixture of concrete and cement, reinforced with corrugated steel. The design consists of 16 parts indicated in Figure 2, which include a footing of 1 m × 1 m, a central column of section 25 cm × 30 cm with a support slab of 1.40 m × 1.40 m, PVC pipes of 6, and a polypropylene storage tank of 1,100 L with protection against ultraviolet rays (Figure 2). The characteristics of the systems were the same in all four dwellings, except for the size of the column, which was subject to the height of the dwelling. The roof coverings of all dwellings were of galvanised calamine.
Technical Feasibility Determination.
To determine the technical feasibility, physical, chemical, and microbiological factors were determined by sampling water at the inlet and outlet of the systems during three months of the rainy season (December 2019 and January and February 2020) and two months of the dry season (September and October 2020). Sample collection, storage, and transfer, as well as laboratory analysis, were performed according to APHA, AWWA, and WEF [32]. In the rainy season, 264 physicochemical and 72 microbiological samples were analysed, and in the dry season, 64 physicochemical and 32 microbiological samples were analysed. The microbiological parameters were reduced in the dry season due to the scarce economic resources allocated and the difficult access to the native communities because of the effects of the COVID-19 pandemic. However, this was not a limitation to continuing with the study, given that efforts were made to analyse total coliforms (TC) and faecal coliforms (FC); the only parameter not measured in the dry season was Escherichia coli.
Data collection for pH was carried out in situ with a Hanna multiparametric water meter, model HI 98194, while samples were collected in transparent plastic containers to determine the physicochemical parameters of electrical conductivity (EC), turbidity, total dissolved solids (TDS), total suspended solids (TSS), alkalinity, hardness, nitrates, nitrites, phosphates, sulphates, aluminium, copper, and zinc. Samples for the microbiological analysis of total coliforms, faecal coliforms, and E. coli were collected in properly sterilised glass bottles with a capacity of 500 ml. They were transported in a cooler with dry ice at a temperature of 5 °C. Parameters were analysed at the Water and Soil Laboratory of the Research Institute for Sustainable Development of Ceja de Selva (INDES-CES) of the National University Toribio Rodríguez de Mendoza (UNTRM). Water quality calibration was carried out through chlorination for disinfection at the outlet of the system [33], applying commercial bleach mechanically with a graduated syringe; residual chlorine measurements were carried out with a Hanna HI729 colorimeter. Likewise, before each sampling, the pH was measured, and the application of potassium hydroxide (KOH) tablets was determined accordingly.
Harvested Water and Projected Catchment Area of the Roof for Water Supply.
The volume of rainwater captured in the systems (Vr) was determined by the catchment area of the roof (CR, variable according to the dwelling), the type of roof material (galvanised metal sheet), and its runoff coefficient (Rc, 0.9) [34]. Based on the water harvested, a projection was made of the ideal area to supply water.
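The corresponding equation is not reproduced here; the standard roof-catchment relation consistent with the variables named above, assuming rainfall is expressed as a depth P in millimetres (so that 1 mm over 1 m² yields 1 L), is:

\[
V_r = P \times C_R \times R_c
\]

where \(P\) is the rainfall depth (mm) over the period considered, \(C_R\) is the roof catchment area (m²), and \(R_c = 0.9\) is the runoff coefficient, giving \(V_r\) in litres.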
Monthly Water Demand.
The monthly water demand per household (Wdh) was assessed. For this, the average amount of water consumption per person (Wcp, 30 L/day [35]), the number of individuals or beneficiaries of the system (Nu), and the period of consumption analysed (Nd; 29, 30, or 31 days depending on the month) were identified. The number of individuals per household was obtained through the application of socio-economic surveys [36]. The priorities or activities taken into account were the demand for water at the individual level, including food preparation, personal hygiene, and cleaning of personal items and objects [37].
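A minimal sketch of the resulting monthly supply-demand balance is shown below; the rainfall, roof area, and household size used are illustrative values, not the measured data of the study.

```python
# Minimal sketch of the monthly water balance: roof harvest (supply) vs. household demand.
def harvested_volume_litres(rainfall_mm, roof_area_m2, runoff_coeff=0.9):
    """Vr: 1 mm of rain over 1 m^2 of roof yields 1 L, reduced by the runoff coefficient."""
    return rainfall_mm * roof_area_m2 * runoff_coeff

def monthly_demand_litres(users, litres_per_person_day=30, days=30):
    """Wdh = Wcp x Nu x Nd."""
    return litres_per_person_day * users * days

supply = harvested_volume_litres(rainfall_mm=250, roof_area_m2=60)  # illustrative month
demand = monthly_demand_litres(users=6, days=30)
print(f"supply = {supply:.0f} L, demand = {demand:.0f} L, balance = {supply - demand:.0f} L")
```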
2.6. Economic Feasibility Determination. Economic feasibility was determined based on cost-effectiveness, according to geographical aspects (the location of the dwellings and roof area) and the costs of water system installation and maintenance. For this, the amount of water supplied to the dwellings was assessed.
The volume of rainwater captured by the roofs (supply) was calculated and then weighed against the members' water needs (demand) [38]. The costs and expenses of the inputs per unit and on average, including the design plans, were taken into account. Inputs and services of the households were also valued.
Economic Viability.
To determine the economic viability, a socio-economic evaluation of rainwater harvesting projects was conducted to assess the current situation, current supply, current demand, and problem description [39]. A benefit-cost analysis of the systems installed in the native communities was performed by evaluating the total cost of the system, divided into the three phases described below.
Preinvestment and Investment Phase.
In the preinvestment phase, the conditioning of the systems and labour costs were taken into account. In the investment phase, the construction of the systems was evaluated, taking into account the components of the catchment area, conduction, storage, filtration, potabilisation, and distribution of rainwater. The opportunity cost of terrain was also considered, as the tank installation requires a large area.
Postinvestment Phase.
In this phase, the costs of operation and maintenance were determined, estimating the timescale over which they should be carried out.
Cost-benefit: the cost-benefit analysis is based on Jianbing's formula [40], where AVB is the present value of rainwater benefits, Inv is the investment, and PVC is the present value of costs. The net present social value (NPSV) was calculated to indicate the profitability of the systems, and the projected project horizon was 5 years.
where CFt is the cash flow in year t, t is the time period (in years), r is the 10% social discount rate, and n is the number of years in the assessment horizon minus one. NPSV > 0 indicates that the investment will generate returns; NPSV = 0 indicates that the investment project will generate neither profits nor losses; NPSV < 0 indicates that the investment project should be postponed.
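The expressions themselves are not reproduced here; forms consistent with the variable definitions above (the benefit-cost ratio as commonly written, and the standard discounted cash-flow expression for the NPSV) would be:

\[
B/C = \frac{AVB}{Inv + PVC}, \qquad \mathrm{NPSV} = \sum_{t=0}^{n} \frac{CF_t}{(1+r)^{t}}
\]

with \(r = 0.10\) and the initial investment entering as a negative cash flow at \(t = 0\); the exact form of Jianbing's benefit-cost expression should be confirmed against the original source [40].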
Social Internal Rate of Return (SIRR).
It was calculated using the following formula, where Ct is the cash flow in period t, I0 is the initial investment (t = 0), n is the number of time periods, and t is the time period.
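The formula is likewise not printed here; the standard internal-rate-of-return condition consistent with these variables defines the SIRR as the discount rate at which the net present value of the cash flows equals zero:

\[
0 = -I_0 + \sum_{t=1}^{n} \frac{C_t}{(1 + \mathrm{SIRR})^{t}}
\]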
Cost-Effectiveness.
In the cost-effectiveness analysis, a socio-economic analysis was performed in which non-project evaluation costs were measured as economic costs and the results were valued as units of effectiveness [41], assuming that families do not have water, based on the question "How much would a litre of water cost?", the number of times they carry water, and the demand for water per family. A comparison was made between the costs incurred by not having water and the situation of having water available from the harvesting and treatment systems. The costs were identified in terms of the number of water hauls and the loss of productivity from hauling water (the daily labour cost was taken from the internal regulations of the native community of Urakusa). Formulas (6) and (7) were used to calculate the daily and annual costs.
Cost-Effectiveness Calculation of Carrying Water from the Stream (Daily).
where CE = cost-effectiveness, Ta = water carrying time in hours per day, Jl = working hours per day, and Cj = cost of working time per day.
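One formulation consistent with these variables, in which the time spent hauling water is valued at the pro-rata daily wage, is assumed here, since equation (6) is not reproduced in the text:

\[
\mathrm{CE} = C_{ad} = \frac{T_a}{J_l} \times C_j
\]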
Cost-Effectiveness Calculation of Carrying Water from the Stream (Annual).
Can = Cad * Da, where Can = annual cost of water without catchment system, Cad = daily water carrying cost, and Da = number of days per year.
2.14. Comparing Projected Costs. We made a comparison of projected costs between the proposed tank water harvesting system (situation with project) and the tankless water harvesting system, as this is the way the community currently uses the water (situation without project). A 5-year evaluation was carried out, based on the calculation of the annual cost for each case, including the increase in the number of families (85 families by 2021-2026). The projection (2021-2026) was also calculated using stormwater treatment information and comparing these costs. Additionally, the costs of installing the proposed tank water harvesting system (concrete-based materials) and an installation alternative for families using local materials (native wood) were also described.
We applied a nonparametric Kruskal-Wallis test to identify whether there were significant differences between the dry and rainy seasons, using Minitab 17.1 software (Spanish version).
Characteristics of Beneficiaries.
In the selected households of the native communities in Amazonas, a maximum of 6 family members use water, whereas for Biswas and Mandal [42], in a remote rural area of Khulna (Bangladesh), a maximum of 4 members met their domestic needs throughout the year. Of the selected families, 50% are engaged in agriculture (maize, banana, and cassava cultivation), with an average landholding of 1 ha per family. They are also engaged in other casual work (day labour) at a daily rate of 40 soles, an amount established by internal rules (apu) within the community.
In Tunants and Yahuahua, the inhabitants draw their water from nearby streams or ponds (at an average distance of 75 minutes for the round journey). Nevertheless, these direct water sources are contaminated by anthropogenic and natural sources [20]. Here, water is commonly carried in gallon containers and 10 L buckets for the daytime supply (Figure 3(a)). However, for their personal hygiene, they usually go directly to the stream (Figure 3(b)). The families also store the water in large containers (between 100 and 1,000 L capacity) to ensure the particles can settle during storage. The water is always boiled before drinking, as the water is contaminated by different types of pollutants, for example, washing powder and faecal droppings from domestic animals. The main reason for not constructing a rainwater harvesting system is the economic factor.
Monthly Rainfall.
Studies indicate that the annual rainfall in the province of Condorcanqui is between 1,200 and 1,800 mm [27]. The data collected from the rain gauges installed in the study area showed rainfall of up to 396.2 mm in November (Pluviometer-FP-S2) and 429 mm in June (Pluviometer-FT-S4), corresponding to Tunants and Yahuahua, respectively (Figure 4). The lowest rainfall occurred in August, with 24 mm in the Yahuahua area and 5.76 mm for Tunants. Consequently, rainfall in both communities was consistent and provided sufficient capacity for the water catchment systems.
Annual rainfall variations at the stations showed a maximum of 2,032.1 mm and a minimum of 987.64 mm (Figure 5).
The National Service of Meteorology and Hydrology of Peru (SENAMHI) reports rainfall values between 1,376.4 and 2,227.8 mm per year at the level of the Nieva district. The values of the installed rain gauges followed the trends of the values given by SENAMHI, allowing for the distance of the SENAMHI station from the installed systems.
Amount of Rainwater Collected.
The amount of rainwater collected in the systems was not homogeneous (Table 1). The FPK-S2 family system collected the most water, with a maximum of 14,263.2 L (December) and a minimum of 311.04 L (June). The rainfall shortage was pronounced in the summer season, during June, July, and August. The amount of rainwater collected is proportional to the roof area, and rainfall is linked to the seasons of the year [15].
Monthly Household Water Demand.
Water distribution is unequal; in fact, the poorest areas use about 15 L of water per day, which is, of course, influenced by the economic factor [43]. In Mexico, every person has the right to access, disposal, and sanitation of water equivalent to 30 L per person per day; however, this is still lower than the World Health Organisation (WHO) recommendation of at least 50 L of water per person per day to ensure basic hygiene and nutrition [2]. Household water consumption in the native communities in this research was 71,280 L/year (Table 2) and consumption per person was 32.55 L/day, for an average number of 6 users. The annual backlog for the FPK-S2 system was 44,244 L; therefore, it is clear that for this system the water collected exceeds the demand. The implementation of water recycling systems is proposed, as the water demand is higher than the normative allocation of 30 L per person per day [24]. The backlog in the FI-S3 system was 4,122 L; although the annual backlog is positive, August, September, and October were the most critical period, with negative values (−4,455, −2,160, and −257 L, respectively), months in which food preparation is the exclusive priority. In the FJT-S1 and FT-S4 systems, the annual backlog was −15,258 and −59,473 L, respectively. A water deficit was observed in almost all months (Table 2); generally, these negative values are associated with water use for laundry and showering, so in months where the backlog is negative, water use should be prioritised. To meet the water supply, water use should be prioritised each month, and larger sheds should be installed to capture more water. The 1,100 L storage tank was sufficient to supply all of the families' needs for a week, assuming no rain; however, if they only prioritise water for food consumption, it can supply up to 15 days. It was determined that during the rainy months, storage tanks with a maximum capacity of 460 L are needed; even so, the 1,100 L tanks were chosen, which is justified because the rains are constant, and there are days when, even for the FI-S3 system, only a 15 L container is needed to supply water to the family.
Projections of Areas for Rainwater Catchment.
The amount of water collected depends on the catchment area of the sheds, so roof area measurements have been projected based on the water deficit for the season of low rainfall. Therefore, the average area for installing future investment projects is 89 m² (Table 3). With 89 m² modules, an annual collection of up to 165,884.4 L can be achieved. Unfortunately, the investment in rainwater harvesting may be very costly, making it impossible to install for economic reasons, thus reducing the system's affordability [44]. As such, governments have an obligation to guarantee access to a sufficient quantity of safe drinking water for personal and domestic use [45].
Physicochemical Parameters.
The physicochemical parameters (Table 4) for the FPK-S2 and FJT-S1 systems were within the drinking water quality regulations [24] in both periods. In contrast, in the FT-S4 and FI-S3 systems, aluminium (Al) was the only parameter that exceeded the water quality regulations. The high presence of aluminium may have been influenced by the calamine roofs, as well as by the combustion of fossil fuels, crude oil, and sources of vehicular traffic close to the installation of the systems [46,47]. Different pollutants can reach the water depending on wind speed, wind direction, temperature, and the degree of atmospheric stability [48,49]. In this respect, the quality of rainwater is also influenced by the type of system design [50]. Zinc levels were below the maximum permissible limits (3.0 mg Zn/L) during the rainy season. However, Chubaka et al. [51] found zinc concentrations above 3.0 ppm and copper concentrations above 2.69 ppm. It is possible that this metal is associated with the corrosive action on the calamine, such as ultraviolet solar radiation that can damage the calamine sheets or structures, releasing tiny metal microparticles and paint from the surfaces. The maximum amount of nitrate (NO3) was 3.60 ppm, in the FI-S3 system. Nitrate concentrations above 50 ppm in water are detrimental to health, and infants may be most affected due to the formation of methemoglobinemia [52].
Rainwater quality varies according to the type of roof, which directly influences the parameters of hardness, alkalinity, and turbidity [53]. The maximum turbidity was 1.27 NTU, which is within the Peruvian standard, but it could be due to the number of dry days preceding a rainy event [54]. With respect to total solids, González [54] found values between 79 ppm and 94 ppm; for this reason, continuous maintenance of the systems is recommended to reduce the TDS of 22.20 mg/L found in the FPK-S2 system. These high and discontinuous values are observed due to the lack of cleanliness of the roof; this dynamic is typical of indigenous communities. TSS varied between 17.60 and 52.83 mg/L, while other studies showed results for total suspended solids ranging from 3 to 304 mg/L. Alkalinity values ranged from 11.13 to 36.57 mg/L CaCO3, and all values were very low and acceptable. According to the literature [55,56], alkalinity is a very important parameter for drinking water, as it buffers rapid pH changes. The physicochemical results for the low-water season are shown in Table 5, where zinc problems are evident for the FI-S3 and FT-S4 family systems, which do not meet the standard (D.S. N°031-2010-SA). However, these heavy metal values in rainwater are lower than the values in river water obtained by the regional government of Amazonas in the community of Kusu Kubaim, in the Nieva district, with high heavy metal values (0.45 and 0.442, respectively) [57]. In the community of Kigkis, in the Nieva district, water from the distribution network showed aluminium (0.527) and iron (0.482) above acceptable limits [57]. Moreover, in the Chiangos community, in the Nieva district, high values of aluminium (0.2062) were found. Aluminium in all systems ranged between minimum and maximum values of 0.2 and 0.67 mg Al/L. The problems of heavy metals persist in both periods; technical and economical measures, such as oxygenation of the storage tanks, should be taken to achieve precipitation of both aluminium and zinc. This is left as a proposal, with maintenance recommended at least every two months; it is an easy method for users to operate and will bring benefits such as the removal of inorganic (including aluminium that could be present as a precipitate) and organic particles and a reduction in turbidity [58]. Table 6 shows the results for the microbiological parameters in the rainy season, which were above the water quality regulation (>1,600 MPN/100 mL). In many parts of the world, rainwater does not meet quality standards, and this is attributed to the frequent presence of faecal contamination, mainly of animal origin [59,60]. High contamination densities are likely to have been caused by the abrupt temperature change during rainfall [61]. Particulates and total coliforms are likely to affect the functioning of the rainwater utilisation system, making ongoing studies a necessity [62].
In the low-water season, all the results met the standard at the outlet of the system, given that the water samples were taken after treatment (chlorination). The importance of chlorinating the water lies in eliminating microorganisms [63,64], so disinfection was carried out with commercial bleach at a rate of 5 drops per gallon (of 5 L) and left to stand for 30 minutes before use. When water is not chlorinated, microorganisms may be present in the water [65], as evidenced during the rainy season. With the operation and maintenance of rainwater harvesting systems, the quality of water for human consumption is guaranteed [66]. However, it is recommended that rainwater be chlorinated [67]. Chlorination of stored water reduces the risk of diarrhoea [68]. Therefore, rainwater harvesting systems can improve the quality of life of the inhabitants. In Australia, samples collected from 10 tanks contained E. coli in concentrations that exceeded the limit of 150 MPN/100 mL for recreational water quality [69]. Bacteria may be associated with rainfall events and be present in connecting pipes, and they can survive and even grow in an open environment, subject to the environmental level of nutrients and conditions such as temperature and pH [70].
The pH showed no significant difference (Table 7) and falls within the water quality standards. The pH value allows the degree of contamination caused by sulphur oxides and nitrogen oxides to be determined [71].
The pH values obtained are related to the type of storage tank [72]; for example, asbestos sheet roofs have pH values of 6.75 [73]. Rainwater pH can vary from weakly acidic (pH 3.1) to weakly alkaline (pH 11.4). In previous studies, the pH of rainwater ranged from 6.6 to 8.26 [74]. In this study, the pH ranged between 6.82 and 7.02.
Rainwater turbidity was below the standard (5 NTU), with average values of 1.24 NTU for the rainy season and 1.58 NTU for the dry season. There were no significant differences between seasons. Turbidity is important to analyse because it influences water clarity, and its presence may be associated with extreme rainfall allowing the presence of suspended solids [16].
Aluminium in rainwater, between 0.16 and 0.67 mg/L, exceeded the Peruvian water quality standard for human consumption (0.2 mg Al/L). The statistical analysis shows significant differences between seasons, with higher amounts of aluminium and zinc found during the dry season. The presence of aluminium in water is detrimental to life [16]. The presence of zinc ranged between 2.55 and 3.15 mg/L; zinc is associated with the type of roof sheeting. Acuña [75] found that rainwater collected on galvanised steel roofs is distinguished by a higher zinc content (69 to 102 mg/L).
Economic Feasibility.
The initial investment for the installed systems, built of concrete, is S/2,600 at full cost, and their maintenance is S/70 per year. An alternative rainwater harvesting system is also proposed at a lower cost (S/2,000), with a base constructed of a wood that is abundant in the area, known as Huacapu (Minquartia guianensis Aubl.). Huacapu is a suitable wood, as it is strong, durable, and widely used in construction [76]. The details of the costs in each case (reinforced concrete support and local alternative) are described in Table 8. The economic evaluation, at a discount rate of 10%, shows an NPSV of S/1,911. The SIRR was above the discount rate, which indicates that future investment in the systems is profitable.
The annual benefits to the families are S/1,260, valued from the time spent bringing water to their homes and the cost of consuming clean water (Table 9). In the native community of Juum in the Amazon region, Jiménez [77] technically and economically evaluated a rainwater harvesting system for domestic use and determined that the design of the system is viable and sustainable. The cost of harvested rainwater can be up to nine times lower than that of desalinated or treated water, and policies are needed to promote the construction and installation of rainwater harvesting systems [78]. Rainwater harvesting is a viable alternative for domestic use and even for irrigation [79]. To reduce costs in treatment systems, it is advisable to place co-layers (grids) that serve as a trap for large particles and leaves from trees that fall on the roof and clog the system [80]. Thus, treated rainwater costs 60% less than drinking water provided by the supplier [79]. The B/C ratio is 1.73, which is cost-effective, but this depends on the project area, as it does not agree with the study by Domínguez et al. [79], who found a benefit-cost ratio of $1.34.
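As a plausibility check, the reported NPSV can be reproduced from the stated inputs (initial investment S/2,600, annual maintenance S/70, annual benefit S/1,260, 10% discount rate), assuming a 5-year horizon with benefits and maintenance occurring from year 1 onwards:

```python
# Back-of-the-envelope check of the reported NPSV (~S/1,911) from the stated figures.
investment, maintenance, benefit = 2600.0, 70.0, 1260.0   # soles
r, years = 0.10, 5                                        # 10% discount rate, assumed 5-year horizon

net_annual = benefit - maintenance                        # S/1,190 net benefit per year
npsv = -investment + sum(net_annual / (1 + r) ** t for t in range(1, years + 1))
print(f"NPSV = S/{npsv:,.0f}")                            # ~ S/1,911
```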
Cost-Effectiveness Analysis.
The valuation of the cost of water was based on how much a family would save by no longer carrying water. In this sense, we took into account that the average time spent carrying water is 30 minutes one way and 40 minutes for the return; the difference is due to the weight of the water carried. The average time per family is 2.34 hours/day to carry water just for food preparation and washing dishes (Table 10). The annual cost of the water supply for food preparation was 4,203 soles, without taking into account the time spent in the evenings going to the streams for personal hygiene. Compared with the cost of carrying water for a year, the proposed water harvesting and reuse system (S/2,600) can be installed for less than half that amount. As such, access to water has become a management problem for improving the quality of life in rural areas, due to high costs [19]. The lack of water has caused great famines and has led to the mobilisation of entire villages in search of solutions [81]. Native communities are no exception to these social conflicts (access to basic services such as water) [5].
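For orientation, the reported annual figure is roughly consistent with applying equations (6) and (7) to these values, if one assumes an 8-hour working day at the community day-labour rate of S/40 and about 360 hauling days per year (both assumptions, not stated explicitly in the text):

\[
C_{ad} \approx \frac{2.34}{8} \times 40 \approx 11.7\ \text{soles/day}, \qquad
C_{an} \approx 11.7 \times 360 \approx 4{,}212\ \text{soles/year},
\]

which is of the same order as the reported S/4,203.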
Horizon Assessment for Both without Project and with Project.
The 5-year evaluation was carried out on the basis of the total haulage cost per family per year (Table 11). The cost of hauling during the evaluation horizon (2021-2026) is shown to be S/2,181,357, which corresponds to the sum of the annual hauling costs in years 0, 1, 2, 3, 4, and 5 (estimated situation without project). It is important to mention that, currently, these costs are covered by the population in the time spent carrying the water; consequently, they are unable to perform their normal activities (social, family, economic, educational, etc.). Water access, like money, is a fundamental need of any population and is an essential condition for many people to have a better life [82]. Table 12 shows the investment costs over the 5-year evaluation horizon. For the implementation of the rainwater harvesting and treatment system, 89 families were estimated, with an annual investment cost of S/2,600.00 per family. The investment in the fifth year would be S/231,400. This evaluation horizon will allow competent bodies to determine the amount of investment in rainwater needed to satisfy the human right to water, without necessarily achieving an economic benefit [82]. In this sense, a cost-effectiveness analysis was useful to value costs that could not be presented in terms of monetary values [83]. Table 13 shows the cost flow over the evaluation horizon, both with and without the project. With 10% of the total haulage costs incurred by the inhabitants of the localities (Tunants and Yahuahua), the water supply problem could be solved. This would allow them to cover their needs for human consumption and domestic use water. Socio-economic factors of the population would have positive readjustments, such as
human health improvements [83]. Another benefit of rainwater harvesting systems is the reduction of vulnerability to floods and river overflows, which are strategies for the implementation of disaster risk management [39]. Therefore, it is shown that the cost of implementing rainwater treatment systems projected over 5 years is 262,550 soles; these costs are lower than the costs of carrying water from the stream (S/2,181,357). Future research in the native communities of the Amazon region related to the use of well water is important, as it has shown high potential in other places, since it is cheap and quickly accessible in times of drought [84]. However, it should be taken into consideration that a limiting factor is the microbial contamination of groundwater, which has become a global problem and remains a management challenge for integrated groundwater modelling [85,86].
Conclusions
Rainwater harvesting for domestic use and human consumption in native communities in Amazonas (NW Peru) is feasible according to the technical and economic validation. Rainwater harvesting can supply six family members with a daily consumption of 32.5 L per person. Regarding water quality, no significant differences in physicochemical parameters are shown; however, for heavy metals, aluminium showed the most significant difference. A mechanical oxygenation system should be implemented to precipitate heavy metals, as it is economical and easy to use. The implementation of rainwater harvesting systems can be an alternative water supply in native communities, as it is cheap and accessible. However, water management systems must be implemented for its use, after treatment.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest. | 2021-10-22T15:07:40.859Z | 2021-10-19T00:00:00.000 | {
"year": 2021,
"sha1": "68323a9bf840b3b1e7ed3f8b282e6248ab1cd903",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2021/4136379",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "990ff09b3525920ce81e1e90c176aa380cb96a9f",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265073207 | pes2o/s2orc | v3-fos-license | Customers' satisfaction towards Addis Ababa City's minibus taxi service
In Addis Ababa, shared minibus taxis contribute significantly more than any other form of public transit to meeting the city's transportation needs. However, there has been limited research on taxis in general and on customer satisfaction with minibus taxis in particular. Therefore, this study aims to assess the satisfaction of minibus taxi customers through a survey questionnaire distributed and collected at taxi stations. Descriptive analysis was used to measure the satisfaction levels/rates of respondents towards each service quality attribute of the minibus taxis. Then, we compared the mean values of the satisfaction responses, followed by factor/principal component analysis. Once the most important satisfaction variables were identified through the factor analysis, an ordered logit model was used to relate the selected satisfaction variables to the socio-demographic characteristics of taxi riders. The results of the study showed that minibus taxi overload and safety and security at stations are attributes with which the respondents showed greater dissatisfaction. The result of the ordered logit model revealed that the respondents who showed greater dissatisfaction with the behavior of taxi drivers and their assistants are those who had been robbed at least once on a minibus taxi. Also, riders place more weight on the functionality of the service than on their comfort and security. Thus, the service providers, the Addis Ababa Road Authority, security personnel, and any relevant body should work together on maximizing customers' satisfaction with minibus taxis.
Introduction
The majority of public transit in most African nations is provided by minibus taxis. They are responsible for more than 70% of all urban travel and control the majority of the social and economic facets of urban mobility [1]. When local governments in developing countries plan for public transportation, taxis are the least of their considerations, mainly because taxis are run by private entities and all that is expected of local government is to regulate the system. However, shared and paratransit taxis are the main mode of transportation for many urban residents in developing countries. For example, in Addis Ababa, taxi transportation (shared minibus taxis) covers 79% of the public transportation modal share [2]. In Addis Ababa, residents are served by four major public transport modes, namely Light Rail Transit (LRT), minibus taxis, medium buses (Higers), and regular public buses (Anbessa, Sheger, and Public Service buses), of which the minibus taxis cover the largest modal share.
In 2017, Addis Ababa's population was estimated by the Ethiopian Central Statistical Agency (CSA) to be 3,435,028. With the population expected to grow to 4,281,394 in 2027 and 5,131,892 in 2037 [3], the demand for transportation is also expected to increase. In response, the city is investing in public buses and light rail systems to meet the existing gap and forecasted travel demand. Yet, shared minibus taxis play a greater role in meeting passenger demand than any other public transport mode in the city. Especially during peak hours, the supply of the shared taxis does not match the demand, which creates crowded stations and long waiting lines. In general, public transportation in Addis Ababa is characterized by chaotic, unreliable, unsafe, unaffordable, and inefficient service for a fast-expanding city [4].
The minibus taxi in Addis Ababa is a semi-bus service operated by a network of independent taxi operators serving a complex set of routes all over the city. Because of the shared nature of the minibus taxis, they have positioned themselves as the 'public transport' mode. Although the locals call them taxis, they serve as public transport, having a set origin and destination and, to some extent, a fixed route but no fixed stops in between. Passengers can be picked up and dropped off anywhere along the route between the origin and the destination. Minibus taxis are owned by individuals and are often driven by a hired driver. The driver has an assistant (also called a woyala) who collects fares and helps passengers get on and off the vehicle. In this sense, in addition to providing mobility, minibus taxis create job opportunities for drivers and their assistants.
Taxi owners are members of taxi associations and abide by the rules and regulations of Addis Ababa's Transportation Authority. The minibus taxi system is operated by drivers who know the city well and can drive without route maps or timetables, yet with a sense of coordination and order. In this respect, minibus taxis form a complex-adaptive system run by self-regulating drivers. The city's efforts to create a zoning system and to cap the number of passengers are usually met with resistance and nonconformity from drivers. The carrying capacity of a minibus taxi is 12 passengers, but during peak hours it is common to see a taxi carrying up to 18 people, which is highly overcrowded. Despite the importance of taxis as a public transport mode, the service is widely criticized in terms of passenger safety, the age of the minibus taxis, which leads to environmental pollution and accidents, and its inability to meet commuter demand in peak periods [2].
No substantial research has been done on customer satisfaction with minibus taxis in Addis Ababa. However, customer satisfaction with other modes of public transportation, such as the Addis Ababa Light Rail Transit and the city bus, has been studied by Refs. [5,6], respectively. Different authors have used varied service attributes to measure customer satisfaction in taxi services. For instance, to assess consumer satisfaction with traditional taxi services, researchers have employed comfort, internal environment, and safety [7-11]. Furthermore, the study in Ref. [12] identified indicators of customer satisfaction with general public transportation such as service availability, fares, safety and security, waiting and access time, comfort, reliability, overcrowding, cleanliness, and information systems. The present study covers a wide range of customer satisfaction measures comprising fifteen (15) service quality variables for the minibus taxi, which distinguishes it from previous related studies.
The purpose of this study is to assess the level of customer satisfaction with the Addis Ababa minibus taxi services through survey responses and to identify the service areas that need major improvement. The result of the study is thus anticipated to offer a substantial contribution in aiding decision-makers and other pertinent entities engaged in enhancing the Addis Ababa city transportation system.
Location of Addis Ababa
Addis Ababa is a city in Ethiopia's central highlands, with a total area of around 527 km2 and an average elevation of 2600 m above mean sea level (asl). The elevation ranges from the highest peak at Mount Entoto, at 3041 m, to 2051 m asl in the lower part of the Akaki plain. Addis Ababa shares a boundary with the surrounding Oromia Special Zone towns: Burayu to the West, Sebeta to the South West, Gelan to the East, Laga Tafo to the North, and Sululta to the North East (Fig. 1).
Due to the city's horizontal growth and limited access to transportation infrastructures, accessing business activities, education, employment and recreational opportunities is challenging in Addis Ababa.
As a result, there is a significant gap between the supply of and demand for public transportation. The existing public transportation serves more than 8 million people, including the residents of the Oromia Special Zone towns surrounding Addis Ababa. In 2017, the city had 3.4 million residents; by 2037, that number is projected to increase to 5,124,480, as forecasted from Ref. [3] using a growth rate of 3.8 % per year. The city is divided into 11 sub-cities known as kifle-ketemas and 120 woredas, which are the lowest administrative entities (Fig. 1).
Sampling technique
Data collection process
In Addis Ababa, there are various major and minor hub taxi stations. The City Road Authority has already identified four nodal transport stations from which taxi transportation is available to connect various parts of the city as well as the surrounding Oromia Special Zone towns. Those nodal points are the Piassa, Mercato, Torhailoch and Stadium stations. In addition, various taxi stations are progressing towards becoming nodal points. This study took place at eight major hub taxi stations, purposively selected as the major hub taxi service providers of the city. Accordingly, the four nodal taxi stations (Piassa, Mercato, Torhailoch and Stadium) and four progressing stations (Megenagna, Jemmo, Bole Bridge and Ayertena) were purposively selected as sample locations [2]. The location map of the selected minibus taxi stations, where the survey questionnaires were distributed, is shown in Fig. 2.
Questionnaire
400 minibus taxi customers were randomly selected during survey data collection, because a population of 100,000 or more is adequately represented by a sample of 385 people or more [13-15]. The simple random sampling technique was used to obtain sufficient information about customers' level of satisfaction with the minibus taxi service. For illiterate customers, a data collector helped with the completion of the questionnaire. Out of 400 distributed questionnaires, 351 were completed and returned (87.75 % response rate).
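As a quick cross-check of the sample-size figure quoted above, 385 respondents for a population of 100,000 or more is what Cochran's formula gives at a 95 % confidence level, a 5 % margin of error and an assumed proportion of 0.5. The short Python sketch below is an illustration only, not part of the authors' workflow; the function name and default parameters are ours.

```python
# Reproduces the quoted minimum sample size with Cochran's formula (assumed
# parameters: p = 0.5, 5 % margin of error, z = 1.96 for 95 % confidence).
import math

def cochran_sample_size(p=0.5, margin=0.05, z=1.96, population=None):
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:                      # finite-population correction
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

print(cochran_sample_size())                        # 385 for a very large population
print(cochran_sample_size(population=100_000))      # 383 with the correction
print(f"response rate: {351 / 400:.2%}")            # 87.75 %
```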
The service quality attributes for Addis Ababa minibus taxi customers were designed on the basis of questions frequently raised by minibus taxi customers, which are also widely supported by the literature. Accordingly, fifteen different service quality attributes/components were used for this study: waiting time, minibus taxi preferability, home-to-station walking distance, number of transfers, frequency of being pick-pocketed at the stations, fare of the minibus taxis, behavior of the taxi drivers and of their assistants, security in the minibus taxi and at stations, comfort inside the minibus taxi and at stations, availability of minibus taxis (frequency), age of the minibus taxi, and carrying capacity of the taxi. Because of the nature of the variables, the questionnaire contained two types of questions, multiple choice and Likert scale. Five of the 15 items (waiting time, minibus taxi preferability, home-to-station walking distance, number of transfers, and frequency of being pick-pocketed at the stations) were collected through multiple choice questions, while the remaining 10 used Likert-scale responses. The 10 Likert-scale variables were fare of the minibus taxis, behavior of the taxi drivers, behavior of their assistants, security in the minibus taxi, security at stations, comfort inside the minibus taxi, comfort at stations, availability of minibus taxis (frequency), age of the minibus taxi, and taxi (over)load. Customers of the minibus taxi were asked to rate their degree of satisfaction with the level of service provided both inside the vehicle and at the stops. The question "How satisfied are you with the following minibus taxi performance and quality indicators?" was followed by a five-point Likert-scale response.
Data analysis
For data analysis, a three-step process was administered along with descriptive analysis. The three steps focus on the Likert-scale responses collected for 10 variables. The first step involves comparing the means, medians, and modes of customers' levels of satisfaction with each service quality element. Then, factor/principal component analysis with the Varimax orthogonal rotation method was used to find which satisfaction factors are the most important. Factors were extracted using the following criteria: an eigenvalue greater than 1 and factor loadings greater than 0.5. A reliability analysis (Cronbach's alpha, α) was used to assess the correlation between the variables of each identified factor. All factors with an α reliability above 0.50 were accepted for this study.
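The factor-extraction and reliability steps described above can be illustrated with the following Python sketch. The authors used SPSS, so this is only an assumed re-implementation: the factor_analyzer package, the DataFrame name `likert` and the helper functions are ours, while the eigenvalue, loading and alpha cutoffs are taken from the text.

```python
# Sketch of the factor-extraction and reliability steps (illustrative only).
# `likert` is assumed to be a pandas DataFrame with one column per Likert item.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

def extract_factors(likert: pd.DataFrame, loading_cutoff: float = 0.5):
    # First pass: count eigenvalues > 1 (Kaiser criterion) to choose the number of factors
    probe = FactorAnalyzer(rotation=None)
    probe.fit(likert)
    eigenvalues, _ = probe.get_eigenvalues()
    n_factors = int(np.sum(eigenvalues > 1))

    # Second pass: extract that many factors with Varimax orthogonal rotation
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
    fa.fit(likert)
    loadings = pd.DataFrame(fa.loadings_, index=likert.columns,
                            columns=[f"Factor{i + 1}" for i in range(n_factors)])
    # Keep only loadings above the cutoff used in the paper
    return loadings.where(loadings.abs() > loading_cutoff)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the items grouped under one extracted factor."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)
```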
Finally, an ordered logit model was run to identify the most important socio-demographic variables, and variables related to customers' experience, that influence satisfaction with the important service parameters of the minibus taxi identified by the factor analysis. The satisfaction response is inherently ordered (1-5 Likert scale). Although the outcome is discrete, its ordinal character means that linear multiple regression, multinomial logit, or probit models cannot take it into account properly [16]. Responses on an ordinal scale can be rated or ranked, but the gaps between them are not quantifiable. On a Likert scale, the distances between "very satisfied," "satisfied," and "neutral" are therefore not necessarily equal; one cannot assume that the differences between responses are equidistant even though the numbers assigned to those responses follow a sequential order. The ordered logit model can accommodate the ordered character of the satisfaction responses because it handles variables with a ranking order. Accordingly, this study uses the ordinal logit model to examine customer satisfaction with minibus taxis and to determine the variables that influence it.
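A minimal sketch of fitting one such ordered logit model in Python is given below. The column names are hypothetical placeholders (the authors' actual field names and software are not stated), and statsmodels' OrderedModel is used here only to illustrate the structure of the model: a latent-variable logit with estimated thresholds between the five ordered response categories.

```python
# Minimal sketch of one ordered logit model of satisfaction (1-5 Likert score)
# on rider characteristics. Column names are hypothetical placeholders.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def fit_ordered_logit(df: pd.DataFrame, outcome: str = "satisfaction_comfort"):
    y = pd.Categorical(df[outcome], categories=[1, 2, 3, 4, 5], ordered=True)
    X = df[["age", "num_transfers", "peak_wait_over_30min",
            "prefers_minibus_taxi", "unemployed"]]
    model = OrderedModel(y, X, distr="logit")
    return model.fit(method="bfgs", disp=False)

# Example usage with an already-loaded survey DataFrame `survey_df`:
# res = fit_ordered_logit(survey_df)
# print(res.summary())      # slope coefficients and threshold (cut-point) estimates
# print(res.llf, res.aic)   # log-likelihood and AIC, useful for model comparison
```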
Factors affecting satisfaction in public transportation
In today's society, transportation plays a vital role in socioeconomic progress. The level of service provided by a transportation mode therefore affects passengers either directly or indirectly, and people prefer one mode of transportation over another based on the quality of the service provided. To measure customer satisfaction with public transportation, numerous authors from around the world have suggested various service quality indicators. According to a study conducted in Lagos, Nigeria, fare, travel time, waiting time, safety and reliability, and fuel consumption are used as factors of customer satisfaction in public transportation [17]. Other researchers mention five service qualities, namely reliability, tangibility, assurance, responsiveness, and empathy, as influencing customer satisfaction in public transportation [18,19].
Other authors relied on reliability, frequency, affordability and safety to measure service quality in public transport [20,21]. Ref. [22] defined the quality of service in public transportation as the all-encompassing metrics and perceived performance from the perspective of the passengers. Ref. [23] studied customer satisfaction with public transport in Porto and pointed out the dissatisfying factors as overload (overcrowding), traffic congestion, lack of control, lack of comfort, unreliability, long waiting times, lack of flexibility, time uncertainty, transfer problems, and long walking times. As can be seen from these different points of view, the service quality measurements of customer satisfaction in public transportation are diverse. Thus, satisfaction is a relative concept and not a measure of absolute success (or failure) in public transport [24].
Taxi-specific factors that affect satisfaction
The literature on passengers' views and perspectives towards taxis is not as extensive as that on other forms of public transportation. Some studies consider minibus taxis as part of the public transportation system [25], while other studies categorize minibus taxis as an on-demand pickup service [26].
There is no standardized set of service attributes that affect customer satisfaction in public transport in general, or in minibus taxis in particular. Hence, various authors have developed different service quality indicators to measure customer satisfaction in (minibus) taxis. In Cape Coast, Ghana, comfort, continuous service, reliability, and affordability influenced customer satisfaction with minicab taxis [27]. Ref. [28] revealed that timely arrival at destinations, affordability, punctuality, and reliability were the major service quality concerns of minibus passengers in Johannesburg. Furthermore, Ref. [29] investigated the factors affecting customer satisfaction with taxi services in India and found that drivers' behaviors, such as professionalism, and convenience had a significant impact on overall satisfaction. In addition, another study found that driver behavior was the most important factor in passengers' overall perception of taxi service quality [30]. According to a study of metered taxi service quality in Bangkok, Thailand [31], the responsiveness of the taxi drivers had an impact on customer satisfaction.
Moreover, a study carried out on three taxi companies in Jakarta identified six service qualities that affect customer satisfaction: perceived value, perceived quality, customer expectations, customer trust, company image, and customer complaints [32]. In Malaysia, a study on customer satisfaction with taxi-sharing services found that comfort is the most influential factor, among others, in customer satisfaction with ride-sharing services [33]. Drawing a conclusion from these different perspectives on customer satisfaction with (minibus) taxis, we can say that subjective satisfaction measures do not necessarily conform to the objective service provision.
Demographic characteristics of respondents
The survey includes demographic information (age, gender, marital status), socio-economic variables (educational status and occupation) and customers' experience with the minibus taxi (number of transfers and waiting time). 68 % of respondents are male and 32 % are female. This does not reflect the gender composition of the city; it indicates, however, that more males responded to the survey questions and use minibus taxis than females. Regarding the age classes of respondents, the majority are in the 25-34 age range, followed by 15-24 years (Table 1). This shows that most of the respondents belong to the young and middle-aged working class and the school-age population of the city. The occupation of the respondents indicates that 39 %, the majority, are full-time employees, followed by students (28 %). Education level shows that most of the respondents hold a higher-institution degree, diploma or vocational training, respectively. Only 6 % are illiterate, with no ability to write or read. The marital status of respondents shows that 53 % are married, followed by singles at 44 % and divorced at 1 %.
Factors determining level of satisfaction
Various variables determine the satisfaction level of transport customers. Overall, 15 variables were considered to measure the satisfaction level of minibus taxi customers. Five of them were analyzed using descriptive statistics, as shown in Table 1, while the other ten were analyzed using Likert-scale responses and inferential statistics. The five factors analyzed descriptively are number of transfers, peak-hour waiting time, off-peak-hour waiting time, having been pick-pocketed while using the minibus taxi, and distance between home and station.
As far as the distance from home to a taxi station is concerned, 34 % of the respondents reported that they live within a 500 m radius of where they catch the taxi. However, a significant percentage of respondents (27 %) replied that they travel more than a kilometer to access the nearest taxi station. Waiting time is one of the many challenges that taxi customers face. During peak hours, the majority of the respondents (41 %) reported that they wait for a taxi for more than 30 min, and about 19 % reported waiting for more than 30 min during off-peak hours. One of the issues with using the minibus taxi, especially during peak hours, is pick-pocketing. Respondents were asked how many times they had been pick-pocketed in a taxi or at a station. Accordingly, 60 % of the respondents answered 'none', whereas 31 % said they experienced pick-pocketing once or twice. The remaining 9 % reported being pick-pocketed more than twice. As for the number of transfers needed to reach their destinations, 71 % of the respondents replied that they made two or more transfers. This shows that it is hard to get a direct taxi line from an origin to a destination.
Likert scale results
In addition to the descriptively analyzed factors, 10 factors that determine the satisfaction level of minibus taxi customers were analyzed based on the Likert-scale responses. Most of the customers (80 %) reported that they are not satisfied with minibus taxi overload (Fig. 3). This can be seen in the day-to-day operation of the minibus taxi in Addis Ababa: the loading capacity of a minibus taxi is supposed to be 12 people, but most of the time, especially during peak hours, the taxi carries 18 to 20 people. Moreover, the age of the minibus taxi, availability, and comfort and security inside the taxi and at the stations are other variables that received high ratings in the 'dissatisfied' and 'very dissatisfied' categories. This shows that minibus taxi customers are not satisfied with their overall experience while boarding and alighting and inside the vehicle. On top of this, 29.6 % and 45.3 % of the respondents reported that they are 'dissatisfied' and 'very dissatisfied', respectively, with the behavior of the minibus taxi assistants. Every minibus taxi in Addis Ababa has an assistant to the driver who collects fares and helps passengers in and out of the taxi. Often these assistants are very young, less educated and poorly behaved, which leaves passengers dissatisfied with their conduct. By comparison, the dissatisfaction with the taxi drivers is relatively lower than that with their assistants. Overall, the results show that dissatisfaction with the minibus taxi service in the study area is very high. Regarding the tariff of the minibus taxis, the respondents have little complaint: only 32.5 % reported that they are unsatisfied (13.4 % very dissatisfied and 19.1 % dissatisfied) with the cost of using the taxi. This shows that minibus taxis are one of the cheaper public transportation alternatives in the city. The minimum fare for the shortest distance is 1.50 Ethiopian birr (0.05 US dollars) and the maximum for a longer one-way trip within the city may go up to 6 Ethiopian birr (0.2 US dollars). There are cases of paying more, especially for taxis reaching the peripheral neighborhoods of the city. The dissatisfaction with the availability of taxis indicates that minibus taxis are either scarce or not as frequent as customers expect; the long lines observed at taxi stations in the morning and afternoon are living proof of this.
Satisfaction level by age
As shown in Fig. 4, the 25-34 age group was the dominant age group around the minibus taxi stations during the data collection period. This group both appreciated and complained about the quality of service provided by minibus taxis more than any other age group: 28.6 % of this age group are dissatisfied (10.5 % very dissatisfied and 18.1 % dissatisfied), while 6.6 % are satisfied (1.4 % very satisfied and 5.2 % satisfied). The second most dissatisfied age group is the 15-24 year group, at 20.6 % (9.7 % very dissatisfied and 11.9 % dissatisfied).
Satisfaction level by gender
The highest response rate falls in the 'dissatisfied' category, covering 29.1 % for males and 13.1 % for females. The next highest is 'very dissatisfied', at 18.9 % for males and 9.4 % for females. The 'satisfied' level accounts for 9.2 % of responses for males and 3.8 % for females, while the lowest response rate is the 'very satisfied' level, at 2.4 % for males and 0.7 % for females (Fig. 5). The satisfaction/dissatisfaction response rates for females are low, possibly because there were fewer females than males around the minibus taxi stations during the data collection period.
Satisfaction level by education level
Different education levels show different satisfaction rates with the service quality provided by the Addis Ababa minibus taxis, as can be seen from Fig. 6. Of all the education levels, degree holders have the highest dissatisfaction response rate, with 14.7 % dissatisfied and 8.2 % very dissatisfied. Degree holders also give the highest response rate in the satisfied category, at 3.4 % of total responses. This may suggest that the more educated people are, the more they expect good service quality from the service providers.
Comparing means
The 10 different variables of the minibus taxi service provide information on how customers view different components separately. Table 2 compares those variables by mean, median and mode (in descending order of means). Minibus taxi customers are not particularly satisfied with any single variable: no variable has a mean above three (out of five), except the fare of the minibus taxi, which scores above 3.0. The variable with the lowest score is minibus taxi (over)load, which draws a negative reaction from respondents. Other variables, such as availability of minibus taxis, security at the minibus taxi stations, and the age and comfort of the minibus taxis, are also among those with low mean scores.
Factor/principal component analysis
In order to decide which satisfaction variables are important, factor analysis was conducted using SPSS software, resulting in two factor categories explaining 53.12 % of the total variance (Table 3). Individual satisfaction variables were grouped according to the factors they belong to and given a group/factor label. Factor 1, labeled human and functional variables, has a good reliability coefficient (α) of 0.816. The factor includes seven variables with factor loadings greater than 0.5: behavior of the taxi drivers, behavior of the minibus taxi assistants, minibus taxi (over)load, age of the minibus taxi, comfort of the minibus taxis, availability of minibus taxis, and fare of the minibus taxis. Together they explain 42.07 % of the variance, meaning these seven variables have a higher relative importance in affecting the satisfaction of minibus taxi customers. Customers give priority to the human and functional aspects of the minibus taxi service over the comfort and security factors. The second factor (α = 0.720) includes three variables and is labeled comfort and security factors: security at the minibus taxi stations, security in the minibus taxis, and comfort of the minibus taxi stations. These three variables explain only 11.04 % of the total variance, making this the least important factor in determining customers' satisfaction.
The seven satisfaction variables under Factor 1 were chosen from the list of variables because they account for a higher share of the variance, and they were included in the ordered logit model analysis. In this situation, the factor analysis successfully helped to reduce the number of variables used for further analysis.
Results of ordered logit model
The results of the ordered logit model are presented in Table 4. The dependent variable is an ordered response of the respondent's satisfaction with each of the 7 important service parameters of the minibus taxi identified by the factor analysis. A p-value of less than 0.05 is used to establish relationships between the satisfaction variables and the explanatory variables; however, p-values between 0.05 and 0.1 are also considered, to see which explanatory variables are marginally related to the satisfaction variables. As is important for ordinal regression models, an analysis of the parallel-lines assumption was conducted in order to verify that the effects of the independent variables are constant across the categories of the dependent variable. To test the parallel-lines assumption, the log-likelihood difference is compared against a chi-square distribution. The result shows that all the dependent variables have log-likelihood differences above the chi-square cutoff values for the given degrees of freedom and significance level (Table 4).
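The exact parallel-lines test procedure is not detailed in the text. A common way to approximate it is a likelihood-ratio comparison between the ordered logit (which constrains the slope coefficients to be equal across thresholds) and an unconstrained multinomial logit; the sketch below illustrates that idea under those assumptions and is not necessarily the test the authors ran.

```python
# Approximate test of the parallel-lines (proportional odds) assumption via a
# likelihood-ratio comparison of the ordered logit against a multinomial logit.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

def parallel_lines_test(y, X):
    """y: 1-5 satisfaction scores; X: DataFrame of explanatory variables."""
    y = np.asarray(y)
    ordered = OrderedModel(pd.Categorical(y, ordered=True), X,
                           distr="logit").fit(method="bfgs", disp=False)
    mnlogit = sm.MNLogit(y, sm.add_constant(X)).fit(disp=False)
    lr_stat = 2 * (mnlogit.llf - ordered.llf)
    # The multinomial model frees (J - 2) extra slopes per predictor
    df = (len(np.unique(y)) - 2) * X.shape[1]
    p_value = stats.chi2.sf(lr_stat, df)
    return lr_stat, df, p_value
```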
According to Table 4, peak-hour waiting time and minibus taxi preferability are the two variables statistically significant at p < 0.05 for satisfaction with comfort inside the minibus taxi. The number of transfers and unemployment are statistically significant at p < 0.1 (90 % confidence level). The positive beta value for minibus taxi preferability shows that those who prefer taxis as their main mode of travel have a higher likelihood of being satisfied with taxi comfort, whereas those who make several transfers to reach their destination, are unemployed, or have longer waiting times are less likely to be satisfied with taxi comfort. This is important because long-distance travelers experience waiting at origin stations as well as when transferring from one route to another. For satisfaction with station comfort, age, number of transfers, minibus taxi preferability, being divorced and being a student are statistically significant variables with p-values below 0.05. The negative sign of the beta coefficients for age, number of transfers, and being a student shows that elderly taxi customers, students, and those making several transfers have a high likelihood of being dissatisfied with the comfort inside the taxi. The low level of satisfaction stems from the fact that taxis are crowded (overloaded) and inconvenient. However, in this study, those who prefer the minibus taxi as their main mode of travel seem not to care about comfort. Minibus taxi preferability is also significant for satisfaction with the fare of travel. Along with preferability, marital status (single and divorced) is also statistically significant at the 90 % confidence level. The positive beta coefficients show that these variables are positively related to satisfaction with the fare of a minibus taxi. This coincides with the descriptive results in the previous section, where fare is the only variable that respondents are satisfied with. Experience of having been pick-pocketed while catching a taxi and taxi preferability are the two statistically significant variables common to satisfaction with the behavior of drivers and of assistants. Those with past experience of being pick-pocketed are less likely to be satisfied with the drivers' and their assistants' behavior, whereas for those who prefer minibus taxis this matters the least. Educational status is statistically significant for satisfaction with drivers' behavior: the negative beta value shows that the more educated the respondents are, the less likely they are to be satisfied with drivers' behavior.
Regarding satisfaction with the availability of taxis (frequency), the number of transfers, minibus taxi preferability, and marital status are statistically significant variables. In particular, the number of transfers and satisfaction with the availability of minibus taxis are negatively related in this study. This makes sense because the scarcity of minibus taxis is evident throughout the city, especially during the morning and afternoon peak hours.
When a person has to transfer from taxi to taxi to reach the destination (which is common), the waiting times add up and contribute to customers' dissatisfaction with the taxi service. Gender is an important variable that is statistically significant for satisfaction with the age of the minibus taxi (most of the minibus taxis are very old). The negative sign attached to the gender variable shows that male respondents are not satisfied with the age of the taxis. The opposite is not necessarily true: this does not imply that female respondents are satisfied with the age of the taxi vehicles, because for statistical reasons gender-female is kept as the reference variable. Full-time and part-time workers are statistically significant with positive beta values, indicating that they have a higher likelihood of being satisfied with the age of the minibus taxi vehicle. One important finding stands out from the ordered logit model: those who said the minibus taxi is their preferred mode of travel show a higher probability of being satisfied with the overall performance and quality of the minibus taxi service. This indicates that for daily or captive customers, service quality matters less than reaching their destination. For a big city like Addis Ababa, where the transportation demand far exceeds the supply, accessing one's destination, such as work or school, is not a luxury but a necessity; riders therefore use the service despite its poor quality. Interestingly, those who make chained trips with several transfers also show high levels of dissatisfaction with several service factors.
Discussion
This study used a variety of analysis techniques, including descriptive analysis, mean comparison, factor/principal component analysis, and an ordered logit model. The most unsatisfactory aspect of the Addis Ababa minibus taxi service was taxi overload. From the descriptive analysis, 85.2 % (299 out of 351) of the customers felt inconvenienced by minibus taxi overload, with 40.2 % very dissatisfied and 45 % dissatisfied. According to the studies in Refs. [1,34], taxi overload reduces customers' satisfaction.
Concerning minibus taxi availability, 76 % of the customers are unhappy (29.3 % very dissatisfied and 46.7 % dissatisfied). As can be seen from the result above, taxi overload is likely to be affected by the availability of minibuses: since customers do not get minibus taxis when they need them, they choose to crowd onto the available minibus taxis, especially in the morning and in the afternoon/after work. When the waiting time for minibus taxis is assessed, 41 % of the customers during peak hours and 19 % during off-peak hours wait for a taxi for more than 30 min (see Table 1). On the contrary, the fare of the minibus taxi is the service quality attribute that the customers complain about least, at 32.4 % (13.4 % very dissatisfied and 19.1 % dissatisfied). The mean scores and the factor analysis results in this study also show that the minibus taxi fare is the variable that riders complain about least. Although shared minibus taxis in Addis Ababa are not the cheapest alternative (compared to regular buses), the satisfaction results show that riders are willing to pay for the service, and their satisfaction with the taxi fare is not affected as much as with other variables. In a city where the elasticity of demand is low, price does not turn out to be a deal-breaker for riders' satisfaction. Supporting this result, Ref. [35] argues that when public transport is provided throughout a city, consideration should be given to those in need, the urban poor. Likewise, the study in Ref. [36] reveals that the fare of public transportation is said to meet customer satisfaction when reasonable fare charges meet the majority of passengers' demand.
From the factor analysis and ordered logit model results, the human factors, such as drivers' and assistants' behaviors, are variables with which riders show greater dissatisfaction. Similarly, another study found that driver behavior was the most important factor in passengers' overall perception of taxi service quality [30]. According to the factor analysis, the comfort and security factors are not as important as the human and functional variables, showing that riders weigh the functionality of the service more heavily than their comfort and security. This does not mean that comfort and security are unimportant; rather, in a city where the supply of transportation is low, people are willing to compromise their comfort to get to their destination on time. Contrarily, a study conducted in Kenya in 2010 found that comfort was a key service factor that contributed to more enticing public transportation [37]. Additionally, another study [34] found that comfort is one of the elements that influence people to select one mode over another. A low level of crowding, good standards of cleanliness, and comfortable seats are some of the practical variables that contribute to a high degree of passenger comfort, according to Ref. [38].
The ordered logit model results show that those who are dissatisfied with the taxi drivers' and their assistants' behavior are those who had been pick-pocketed at least once during a previous trip on a minibus taxi. Forty (40) percent of the customers were pick-pocketed at least once at the minibus taxi stations (see Table 1). In addition, those who make long trips with several transfers were either dissatisfied or very dissatisfied with many of the variables of the minibus taxi service. The customers who transferred at least once during their trips make up 88 % of the total respondents (17 % one transfer, 33 % two transfers, and 38 % more than two transfers; see Table 1). By contrast, according to the previous study in Ref. [38], passengers were satisfied when they completed their journey without having to transfer. As can be seen from the ordered logit model analysis, the respondents who prefer minibus taxis as their means of transport are those who tolerate unsatisfactory service attributes in a minibus taxi. The mixed data-analysis technique used in this study, which allowed a broad range of service quality indicators to be analyzed, can be seen as its strength. However, there is no standard type or number of indicators for measuring customer satisfaction in public transport in general and minibus taxis in particular, and previous authors have used different satisfaction indicators to assess customer satisfaction in public transportation and minibus taxis. This study therefore tried to address a wide range of satisfaction issues in the minibus taxis of Addis Ababa by using 15 different variables, assessed with 3 methods. As a result, conducting the study required considerable time and energy, and future researchers should work towards a standard number and type of variables for assessing customer satisfaction in public transportation and minibus taxis.
Conclusion
Shared taxi services cover most trips made in Addis Ababa, as is the case in many developing countries. Given their growing importance in meeting the ever-increasing transportation demand, promoting good-quality service is important. To promote the use of public transportation, planning for shared taxi services (as part of public transportation) needs to include the views and perspectives of those who use the service. Since no research had been done on customer views and perspectives on minibus taxis in Addis Ababa, there was a need to study customers' satisfaction with minibus taxi service quality. The collected data were analyzed through descriptive statistics and a three-step process, starting with comparing the means of the satisfaction responses, followed by factor/principal component analysis and then an ordered logit model. The descriptive analysis, mean scores and factor analysis show that minibus taxi overload is the service aspect that customers complain about the most.
This indicates that the number of minibus taxis in the city should be increased and that minibus taxis should be deployed according to demand at the stations. Alternative modes of public transportation should also be provided in Addis Ababa. The factor/principal component analysis shows that the human factors, such as drivers' and assistants' behaviors, are variables with which riders show greater dissatisfaction; this is one possible intervention area for service providers. Taxi drivers and their assistants need customer service training, and the license issuing process should include training on soft skills, such as interpersonal communication, for drivers and assistants of minibus taxis. The ordered logit model results show that those who are dissatisfied with the taxi drivers' and their assistants' behavior are those who had been pick-pocketed at least once during a previous trip on a minibus taxi. In addition, those who make long trips with several transfers were dissatisfied with many of the variables of the minibus taxi service. Thus, the minibus taxi owners (service providers), the Addis Ababa Road Authority (to give customer service training to drivers and assistants and to regulate licensing) and security personnel (to keep customers safe and secure) should work together on improving the aspects of the minibus taxi service that many riders are dissatisfied with.
Ethical approval
This work was approved by the Wollega University Research Ethics Review Committee in September 2020 with ethics approval number 0348/2020. As completion of the questionnaire implies consent to participate, respondents' consent was obtained for the information gathered, and the confidentiality of the information was respected.
Fig. 1 .
Fig. 1.Location map of the study area.
Fig. 2 .
Fig. 2. Minibus taxi stations selected for the data collection.
Table 1
Respondents' characteristics and descriptively analyzed levels of satisfaction.
Table 2
Respondents' satisfaction with minibus taxi service aspects.
Table 3
Factor analysis of satisfaction components of minibus taxi.
Table 4
Ordered logit model results. | 2023-11-10T16:20:24.642Z | 2023-11-01T00:00:00.000 | {
"year": 2023,
"sha1": "bf7d97c7d39e50e8faff1c39a713d6a8ce1c5f46",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.heliyon.2023.e22102",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d6059ac700513cc76b2a5888472f1f3dd376a9bf",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
232086328 | pes2o/s2orc | v3-fos-license | Estimation of Additive and Dominance Genetic Effects on Body Weight, Carcass and Ham Quality Traits in Heavy Pigs
Simple Summary The response to genetic selection in animal populations depends on both additive and nonadditive (e.g., dominance) effects. Neglecting nonadditive effects in genetic evaluations, when they are relevant, may lead to an overestimation of the genetic progress achievable. Our study evidenced that dominance effects influence the prediction of the total genetic progress achievable in heavy pigs, for growth, carcass, fresh ham and dry-cured ham seasoning traits, and indicated that neglecting nonadditive effects leads to an overestimation of the additive genetic variance. However, goodness of fit and ranking of breeding candidates obtained by models including litter and dominance effects simultaneously were not different from those obtained by models including only litter effects. Consequently, accounting for litter effects in the models for genetic evaluations, even when neglecting dominance effects, would be sufficient to prevent possible consequences arising from the overestimation of the genetic variance, with no repercussions on the ranking of animals and on accuracy of breeding values, ensuring at the same time computational efficiency. Abstract Neglecting dominance effects in genetic evaluations may overestimate the predicted genetic response achievable by a breeding program. Additive and dominance genetic effects were estimated by pedigree-based models for growth, carcass, fresh ham and dry-cured ham seasoning traits in 13,295 crossbred heavy pigs. Variance components estimated by models including litter effects, dominance effects, or both, were compared. Across traits, dominance variance contributed up to 26% of the phenotypic variance and was, on average, 22% of the additive genetic variance. The inclusion of litter, dominance, or both these effects in models reduced the estimated heritability by 9% on average. Confounding was observed among litter, additive genetic and dominance effects. Model fitting improved for models including either the litter or dominance effects, but it did not benefit from the inclusion of both. For 15 traits, model fitting slightly improved when dominance effects were included in place of litter effects, but no effects on animal ranking and accuracy of breeding values were detected. Accounting for litter effects in the models for genetic evaluations would be sufficient to prevent the overestimation of the genetic variance while ensuring computational efficiency.
Introduction
Current genetic evaluations in pigs make use of additive genetic effects. However, the availability of estimates of nonadditive effects may increase the accuracy of prediction of breeding values [1], improve mate allocation procedures between candidates for selection [1,2], and facilitate the design of appropriate crossbreeding or purebred breeding schemes [2]. As pigs are a litter-bearing species with a large expression of dominance relationships, the use of dominance models may significantly improve the accuracy of genetic evaluations, particularly when prediction of the genetic merit of purebred breeding candidates is based on phenotypic information from full-sib families of crossbred individuals. The lack of informative pedigrees, such as large full-sib families, the complexity of the calculations, and the difficulty of using dominance values in practice for mate allocation make the estimation of nonadditive effects difficult [3,4]. A shortcoming of pedigree-based estimates of nonadditive effects derives from the confounding between common environmental and additive genetic effects, as dominance effects are estimated based on the combination of sire and dam and may largely coincide with litter effects [3,4]. The use of genomic information can disentangle these components because, while pedigree-based models for dominance are based on "expected" dominance relationships, genomic models are based on "observed" heterozygotes [4]. However, a major obstacle is the need for extensive data sets with genotypes and phenotypes, which are not always available [4].
Several pedigree-based studies have indicated that nonadditive genetic components (e.g., dominance effects) can account for a variable proportion of the phenotypic variation of quantitative traits in a number of species [5-8]. In those studies, fitting dominance effects in the statistical models generally resulted in a decrease in the estimated additive genetic variance and, consequently, in the heritability (h²), whereas the residual variance remained either unchanged or increased slightly. As a consequence, the predicted genetic response achievable by a breeding program may be overestimated when dominance genetic effects are neglected.
In Italy, the pig industry relies mostly on heavy pig farming, where animals are fed under restricted conditions and slaughtered at 160 kg body weight (BW) and at no less than 9 months of age in order to comply with the specifications for Protected Designation of Origin (PDO) dry-cured ham production [9]. Estimates of nonadditive genetic effects have been reported for a few traits and pig populations, showing that such estimates are population-dependent [10]. No estimates are available for heavy pigs or ham quality traits. While the genomic reference population is still under development, an extensive dataset of phenotypic records measured on crossbred pigs within the sib-testing program of the C21 Goland sire line (Gorzagri, Fonzaso, Italy) is available.
The present study attempted to investigate the pedigree-based contribution of the additive and dominance variances to the phenotypic variation of 50 traits in crossbred heavy pigs. Traits included average daily gain, BW, carcass traits, composition of raw ham subcutaneous fat, raw ham quality traits, and ham weight losses during curing. This study provides for the first time estimates of dominance genetic effects in heavy pigs and on ham quality traits.
Animals
Observations used in this study were from 13,295 crossbred finishing pigs produced in the sib-testing program of the C21 Goland sire line (Gorzagri, Fonzaso, Italy). Besides growth and feed efficiency, the breeding goal of the sire line is focused on the quality of dry cured ham evaluated at the crossbred level [11,12]. Selection of C21 breeding candidates is performed using estimates of genetic merit obtained from their own phenotypes for growth performance and from phenotypic data on carcass and ham quality provided by a group of crossbred half-sibs raised in the testing farm under commercial conditions. In the testing farm, semen of C21 nucleus boars is used to inseminate a group of crossbred sows in order to produce, for each boar, families of approximately 35 crossbred piglets which are paternal half-sibs of C21 purebred breeding candidates. Crossbred sows originate from a cross involving boars of a synthetic line, derived from Large White and Pietrain breeds, and sows of a Large White line selected for maternal ability and prolificacy. In the testing farm, crossbred piglets are raised and fattened under consistent conditions and feeding strategies [13], which are comparable to those used in heavy pig farming [14]. Crossbred pigs are all slaughtered, in groups of 70 animals each, at the same abattoir (Montorsi, Correggio, Italy). Age at slaughter is constrained to a minimum of 9 months by guidelines of Parma ham production [9]. After slaughter, hams are removed from both carcass halves and dry-cured for 12 months following the Parma ham PDO specification [9]. The crossbred sib-testing program ensures the availability, for the genetic evaluation program of the sire line, of phenotypic information that are (i) specific of traits measurable only after slaughter, (ii) measured on crossbred animals owning the same genetic background of pigs originated by C21 boars in farrow-to-feeder or farrow-to-finish commercial farms, and (iii) affected by nongenetic influences that are comparable to those arising in commercial farms.
Carcass Traits
Final BW was adjusted to 270 d (BW270; kg) using individual linear regressions of BW on age estimated from six BW measures (at 60, 90, 135, 180, 245 d of age and the day before slaughter). Fat O-Meater (Carometec, Soeborg, Denmark) measures of carcass backfat and loin depth were used to estimate carcass lean meat content, as detailed in a previous study [13].
Measures of killing out percentage, average weight of the raw trimmed hams and weight of raw hams as a percentage of carcass weight were also available. All left thighs of crossbred pigs were further examined for raw and dry-cured ham quality traits, iodine number (IOD) and fatty acid (FA) composition of subcutaneous fat.
Traits Assessed on Trimmed Raw Hams
Ham subcutaneous fat depth was measured in the proximity of semimembranosus and quadriceps femoris muscles [11]. Hams were scored by a trained expert, using a linear grading system, for round shape (0: low roundness to 4: high roundness), subcutaneous fat depth (−4: low depth to 4: high depth), marbling of visible muscles of the thigh (0: low to 4: high), muscle color (−4: pale to 4: dark), and veining (visible blood vessels; 0: low to 4: high) [11]. A sample of subcutaneous fat was collected from each raw ham to assess IOD and FA composition.
Assessment of Iodine Number and Fatty Acid Composition of Subcutaneous Fat of Raw Hams
In agreement with the official analytical procedures used by the Parma ham consortium, IOD was assessed analytically on 1455 samples. Homogenized fat (30 g) was melted at 100 °C for 40 min, filtered with a paper filter and poured over anhydrous sodium sulphate to remove residual moisture. Samples were then heated at 100 °C for 30 min. An aliquot of 0.4 g was used for the determination of IOD using the Wijs method [15].
An aliquot of 5 mg of melted fat was diluted in 2 mL of n-heptane. Trans-methylation was carried out using 100 µL of Na-methoxide and 150 µL of oxalic acid. Gas chromatography was performed on an automated apparatus (GC Shimadzu 17A, Kyoto, Japan) equipped with a flame ionization detector and a Supelco Omegawax 250 capillary column (30 m × 0.25 mm ID; Supelco, Bellafonte, PA, USA). The operating conditions were as follows: injector temperature 260 °C, detector temperature 260 °C, helium flow 0.8 mL/min (linear velocity: 22 cm/s), and a thermostatic chamber program starting at 140 °C (initial isotherm) with an increase of 4 °C/min up to a final isotherm of 220 °C. Fatty acids were identified by comparing their retention times with those of a mixture of FA methyl ester standards (Mix C4-24, 18919-1AMP, Supelco, Bellafonte, PA, USA). Results were expressed as the percentage of individual FA or groups of FA in fat. Only data for the major groups of FA, individual FA representing at least 1% of fat, and ω3 FA were considered in this study.
Infrared Predictions of Fatty Acid Composition of Ham Subcutaneous Fat
Predictions of the percentages of C18:2n6, C18:0, ω6 FA and PUFA, of the MUFA to PUFA ratio and of IOD were obtained for all samples of raw ham subcutaneous fat by near-infrared spectroscopy. Reflectance spectra were collected on a homogenized sample of the trimmed subcutaneous fat. Acquisition of the infrared spectra was performed using a Foss NIRSystem 5000 (Foss NIRSystem, Silver Spring, MD, USA) with a wavelength range of 1100-2500 nm. Prediction equations were developed through the years [16]. Such equations are very accurate, with R² values in cross-validation greater than 85%, and have been used to provide phenotypes for genetic evaluations of C21 boars since 2006 [12].
Dry-Cured Ham Traits
Dry-cured hams were manufactured through a process that took 368 ± 4 d to complete. The three major steps (salting, resting, and curing) occurring during processing have been detailed earlier [11]. The salting phase lasted 23 d. After removing salt residues, hams were stored in resting rooms for approximately 70 d. After resting, hams were transferred to the curing phase, where they remained until the end of the dry-curing process (12 months). Left hams were weighted at the beginning and at the end of each processing stage. Measures of weight loss (%) at 23 d (end of salting), 90 d (end of resting), 12 months (end of dry-curing) and weight loss from 23 to 90 d, from 90 d to 12 months, and from 23 d to 12 months were calculated.
Pedigree Information
Pedigree information was available for all crossbred pigs and for all purebred C21 Goland boars, whereas only the parents and grandparents were known for the dams of the crossbred finishing pigs. Additive relationships were computed on the basis of a minimum of six generations of known ancestors. Sire and dam of crossbred pigs were unrelated.
Statistical Analysis
Sex and slaughter group effects were tested in preliminary analyses and were significant for all traits (p < 0.05), hence they were included in the models for estimation of (co)variance components. (Co)variance components were estimated using the AIRemlF90 software [17] with univariate linear mixed models of the form

y = Xb + Wa + Zg + Uc + Vf + e,

where y is a vector of observed phenotypes for one trait; b is a vector of nongenetic fixed effects, which included sex (female and castrated male) and slaughter group effects; a is a random vector of additive genetic effects; g is a random vector of social group (animals grouped together in the same pen) effects; c is a random vector of litter effects; f is a random vector of dominance effects; e is a vector of random residuals; and X, W, Z, U, and V are incidence matrices relating b, a, g, c, and f to y, respectively. Four models were compared: M-0, which included neither litter nor dominance effects; M-L, which added litter effects; M-D, which added dominance effects; and M-LD, which included both. Unlike other dominance studies [13], inbreeding effects were not accounted for in the models because the sires and dams of the crossbred pigs were unrelated.
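For readers who want to reproduce the pedigree-based relationship structures behind a and f, the sketch below builds the numerator (additive) relationship matrix A with the tabular method and the dominance relationship matrix D using the standard expression for non-inbred populations, which matches the unrelated-parent situation described above. This is an illustrative re-implementation, not the routine used by AIRemlF90; the animal ordering and the handling of unknown parents are our assumptions.

```python
# Illustrative construction of the pedigree-based relationship matrices used to
# structure the additive (a) and dominance (f) effects. Not the AIRemlF90 code;
# animal ordering (parents before offspring) and 0-based indices are assumptions.
import numpy as np

def additive_relationship(sire, dam):
    """Numerator relationship matrix A by the tabular method.
    sire[i]/dam[i] are indices of the parents of animal i, or -1 if unknown."""
    n = len(sire)
    A = np.zeros((n, n))
    for i in range(n):
        s, d = sire[i], dam[i]
        # Diagonal: 1 + half the relationship between the parents (inbreeding term)
        A[i, i] = 1.0 + (0.5 * A[s, d] if s >= 0 and d >= 0 else 0.0)
        for j in range(i):
            a_js = A[j, s] if s >= 0 else 0.0
            a_jd = A[j, d] if d >= 0 else 0.0
            A[i, j] = A[j, i] = 0.5 * (a_js + a_jd)
    return A

def dominance_relationship(A, sire, dam):
    """Dominance relationship matrix D for a non-inbred population:
    D_ij = 0.25 * (A[si,sj] * A[di,dj] + A[si,dj] * A[di,sj]), with D_ii = 1."""
    n = len(sire)
    D = np.eye(n)
    for i in range(n):
        for j in range(i):
            if min(sire[i], dam[i], sire[j], dam[j]) < 0:
                continue  # treat the dominance relationship as 0 when a parent is unknown
            D[i, j] = D[j, i] = 0.25 * (
                A[sire[i], sire[j]] * A[dam[i], dam[j]]
                + A[sire[i], dam[j]] * A[dam[i], sire[j]])
    return D
```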
The number of records, social groups, and families varied across traits because phenotyping procedures did not begin simultaneously for all traits. In addition, the number of samples measured for IOD and FA composition and for dry-curing traits was considerably lower than that for the other traits (Table A1), because analytical measures of fat quality, unlike their infrared predictions, were part of a specific research project and were not assessed routinely. The structure of the data used in this study is described in Table 1. The assumed probability distributions of social group, litter, and residual effects were g ~ N(0, Iσ²g), c ~ N(0, Iσ²c), and e ~ N(0, Iσ²e), where N( ) indicates a normal distribution, I is an identity matrix of appropriate order, and σ²g, σ²c, and σ²e are the variance components for social group, litter, and residual effects, respectively. In all models, additive genetic effects were assumed to be generated from the distribution a ~ N(0, Aσ²a), where A is the numerator relationship matrix and σ²a is the variance of additive genetic effects. In models including nonadditive genetic effects, such effects were assumed to be generated from the distribution f ~ N(0, Dσ²d), where D is the dominance relationship matrix and σ²d is the dominance variance. Contributions of σ²a, σ²d, σ²c, σ²g, and σ²e to the total phenotypic variance (σ²P) were also calculated, with σ²P = σ²a + σ²d + σ²c + σ²g + σ²e (omitting the terms not fitted in a given model). To evaluate the relative importance of litter and dominance effects, the proportion of σ²c to σ²P (c²) was obtained for M-L and M-LD, and the proportion of σ²d to σ²P (d²) was obtained for M-D and M-LD. The percentage difference in c² (∆c²%) and d² (∆d²%) obtained by M-LD compared with M-L and M-D, respectively, was also calculated. The magnitude of dominance effects was evaluated by the ratio of σ²d to the total genetic variance (D%), calculated as σ²d/(σ²a + σ²d). The Akaike information criterion (AIC) was used for pairwise model comparison. When comparing models differing in the number of parameters, the parsimonious model was considered significantly better if its AIC was more than 2 units lower than the AIC of the complex model. Models M-L and M-D were compared by their relative likelihood: the relative likelihood of Model M-L with respect to Model M-D was calculated as exp((AIC_M-D − AIC_M-L)/2) and can be interpreted as the probability that Model M-L is as good as Model M-D in minimizing the information loss.
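The variance-ratio and model-comparison quantities defined above are simple functions of the estimated variance components and AIC values. The following sketch (illustrative only, with our own function names) computes h², c², d², D% and the relative likelihood of one model with respect to another.

```python
# Illustrative computation of the ratios defined above from estimated variance
# components (function names are ours, not the authors').
import math

def variance_ratios(var_a, var_d, var_c, var_g, var_e):
    """Return h2, c2, d2 and D% given additive, dominance, litter, social-group
    and residual variance components; pass 0 for components not fitted."""
    var_p = var_a + var_d + var_c + var_g + var_e   # total phenotypic variance
    return {
        "h2": var_a / var_p,
        "c2": var_c / var_p,
        "d2": var_d / var_p,
        "D%": 100 * var_d / (var_a + var_d) if (var_a + var_d) > 0 else 0.0,
    }

def relative_likelihood(aic_model, aic_reference):
    """exp((AIC_reference - AIC_model) / 2): probability that `model` is as good
    as `reference` in minimizing information loss (used here for M-L vs M-D)."""
    return math.exp((aic_reference - aic_model) / 2)
```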
Results and Discussion
Number of records and descriptive statistics for the investigated traits are reported in Table A1. On average, BW270 was 167 ± 15 kg, in compliance with the specification for PDO dry-cured ham production [9], which requires a minimum body weight and age at slaughter (160 kg and 270 d, respectively) to ensure optimal body tissue composition for dry-curing. Additional requirements include a minimum thickness (15 mm) and maximum IOD and linoleic acid content on total FA (70 and 15%, respectively) of ham subcutaneous fat [9].
Estimates of Variance Components
All models converged, except for M-D and M-LD for the ratio of the sum of SFA and MUFA to PUFA. Table 2 shows the average contribution of variance components to σ 2 P obtained with the four models across traits. Estimates of σ 2 a , σ 2 g , and σ 2 e obtained with M-0 represented on average 44%, 4%, and 52% of σ 2 P , respectively. When litter effects were included in the model (M-L, i.e., the model currently used in genetic evaluations), no substantial change in σ 2 g occurred, but σ 2 e slightly increased compared to the estimates obtained with M-0, whereas σ 2 a decreased on average by 10%. This suggested that there is confounding between litter and additive genetic effect, as observed also previously [13]. In agreement with our results, the direct additive genetic variance for daily gain in Large White gilts, when ignoring litter effects, was of magnitude similar to the sum of litter plus additive variance when both these sources of variation were taken into account in the analysis [18]. As a consequence, ignoring contributions of litter effects to the overall variance inflated the estimated additive genetic variance, resulting in biased estimates of genetic parameters. Group variance estimated with M-0 ranged from 0% (for the ratio of the sum of SFA and MUFA to PUFA) to 8% of σ 2 P and it remained constant across models, suggesting that there was no confounding between group and other effects. Pigs were assigned to pens randomly, as to minimize the probability of forming groups constituted by individuals from the same litter. This enabled separation of group and litter variance in the estimation process. Residual variance estimated by M-0 represented on average 52% of σ 2 P and its proportion to σ 2 P ranged from 19% to 73%. Across traits, it slightly increased (by 2%) in M-L, and decreased (by 9%) in M-D and M-LD, compared to M-0. For five traits (SFA, Unsaturated FA/SFA, C14:0, C16:0 and ω3), σ 2 c and σ 2 d were not different from 0.
Heritability Estimates
Estimates of σ 2 a and h 2 obtained with M-0 are reported in Table 3. The SE of σ 2 a ranged from 2% to 7% of the point estimate in the carcass traits, ham evaluation traits, and infrared-predicted fat composition. It was 7-13% of σ 2 a for dry-curing traits and 7-17% of σ 2 a in fat composition traits, with the only exception of C18:0 and C16:1, for which the SE was more than 30% of σ 2 a . Standard errors obtained by M-L, M-D and M-LD were of the same magnitude of those obtained by M-0 (results not reported in tables). Heritability ranged from 0.24 (for ham weight loss from resting to the end of curing, %) to more than 0.70 (for C18:0 and C16:1 contents). The SE of h 2 estimates averaged 0.05 and ranged from 0.02 to 0.14. Values of h 2 for BW270, backfat depth, carcass lean meat content, IOD, linoleic acid, ham subcutaneous fat depth measured in the proximity of semimembranosus and quadriceps femoris muscles, round shape, subcutaneous fat, and marbling scores were in agreement with findings of a previous study carried out using M-0 on the same traits and genetic line [13].
Weight losses during the different phases of seasoning exhibited h 2 values ranging from 24% to 32% for percentage losses, and from 40% to 55% for losses expressed as kg. Heritability for quality traits collected during dry-curing of hams has been scarcely investigated, except for the % weight loss at first salting (7 d). Estimates of its h 2 ranged from 0.30 to 0.61 [19][20][21], and the trait is currently used in selection plans toward the improvement of meat quality for seasoning aptitude in Italian purebred pigs. In the current study, h 2 estimated for the % weight loss at the end of salting (23 d) was 0.29, close to the lowest estimate reported in the literature for weight loss at first salting. Pigs enrolled in this study were raised on the same farm under standardized conditions, and slaughtered at the same abattoir following standardized practices. These factors likely contributed to the generally medium-to-high h 2 estimates (Table 3). Across traits, h 2 estimated with M-L was on average 9%, and up to 26%, lower than that obtained with M-0. The decrease was lower than 5% for 17 traits, but it was greater than 25% for the ratio of the sum of SFA and MUFA to PUFA, and for ham % weight loss from resting to the end of curing. The lower h 2 estimates obtained by M-L compared to M-0 were due to a decrease in additive genetic variance (Table 2). This was consistent across all the traits for which σ 2 c was >0.
These results are in agreement with those obtained in another study [13] performed on the same crossbred pig population reporting that, when litter effects were neglected, h 2 were larger than those obtained with models accounting for litter effects, as a consequence of inflated estimates of additive genetic variance. A further slight decrease in h 2 (by 1.5 percentage points on average) was observed comparing M-L with M-D. This indicates that the inclusion of litter, dominance, or both in models for genetic evaluations is expected to have a considerable effect on h 2 estimates, and, consequently, on the estimated genetic progress.
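As a purely illustrative aid, the short Python sketch below shows how the h 2 estimates discussed above relate to the underlying variance components; the numbers are hypothetical and are not taken from Table 3, they only reproduce the pattern in which part of the additive variance of M-0 is reassigned to the litter term under M-L.

def heritability(v_a, v_g=0.0, v_c=0.0, v_d=0.0, v_e=0.0):
    # narrow-sense heritability: additive variance over total phenotypic variance
    v_p = v_a + v_g + v_c + v_d + v_e
    return v_a / v_p

# M-0: litter effect ignored, its contribution absorbed into the additive term
h2_m0 = heritability(v_a=44.0, v_g=4.0, v_e=52.0)            # 0.44
# M-L: part of the former additive variance moves to the litter component
h2_ml = heritability(v_a=39.6, v_g=4.0, v_c=2.5, v_e=53.9)   # about 0.40, roughly 10% lower
print(h2_m0, h2_ml)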
Results from model M-LD were very similar to those of M-L and M-D, with the exception of IOD and some of the dry-curing traits, for which h 2 dropped further in model M-LD when compared to M-D. In agreement with our results, pedigree-based studies reported in the literature have consistently shown that fitting nonadditive effects, particularly dominance genetic effects, resulted in a remarkable decrease in the h 2 estimates, while the residual variance either remained the same or increased slightly. Across traits, the decrease in h 2 ranged from 3% to 53%. Similar tendencies were observed for pig longevity traits [22], for the number of kits born alive and dead, respectively [7], and for daily gain in pigs [8].
Confounding between Litter and Dominance Variance
Estimates of the proportion of σ 2 c to σ 2 P (c 2 ), and of σ 2 d to σ 2 P (d 2 ) obtained by M-L and M-D, respectively, are reported in Table 4. Across traits, c 2 was on average 0.025 (see also Table 2) and ranged from 0 to 0.07. On average, d 2 was 0.11 and ranged from 0 to 0.26. Standard errors were generally around 30% of c 2 and d 2 estimates for carcass traits, ham evaluation traits, and infrared predicted fat composition, whereas higher SE were obtained for FA and ham weight losses, as these traits were available for a limited number of animals (≈1450 and ≈1700 for FA and weight losses, respectively). For some of the FA and most measures of ham weight loss, the SE was as big as the estimate or greater.
The estimated σ 2 d may partly contain a full-sib common environmental variance [4] and this effect should be fitted along the dominance effect. For the traits investigated in this study, for which phenotypes are measured far from the time when the litter mates share a common environment, the common environmental variance is expected to be low and the fitting of litter effects in models may be a simple way to account for nonadditive genetic effects shared by full-sib family members. Estimates of c 2 and d 2 varied when litter and dominance effects were fitted simultaneously in the model (M-LD). The percentage differences in c 2 (∆c 2 %) and d 2 (∆d 2 %) obtained by M-LD as compared to M-L and M-D, respectively, are reported in Table 4. Across all traits in which σ 2 c was significantly different from zero, the proportion of σ 2 c to σ 2 P in M-LD was on average only 30% (ranging from 0% to 100%) of the one estimated by M-L. This indicates that a large part of σ 2 c was removed when accounting for dominance effects due to confounding between litter and dominance effects. Analogously, across the traits with reliable and non-null estimates of dominance effects, the proportion of σ 2 d to σ 2 P in M-LD was approximately 70% (ranging from 0% to 100%) of the one estimated by M-D. This is a further indication of the confounding between litter and dominance effects. As a consequence, a model including litter effects will account for dominance effects as well, and vice versa.
Other pedigree-based studies found similar results: common litter variance components were twice as high using models that did not contain dominance effects compared to a model containing dominance and litter effects in pigs [22]; likewise, full-sib effect of laying hens removed almost all the dominance variance when the dominance effect was not included in the model, while dominance effects explained almost all the full-sib variance when full-sib effect was not included in the statistical model [23].
Magnitude of Dominance Variance
Because litter and dominance effects are confounded, the estimates of σ 2 d obtained with M-LD might be affected by the presence of the litter effect in the model, and vice versa. Therefore, the magnitude of dominance variance was evaluated using estimates of σ 2 d obtained from M-D. On the other hand, these estimates may be inflated, as they represent both litter and dominance components. The proportions of σ 2 d to total genetic variance (D%) and of σ 2 d to σ 2 a (Da%) obtained with M-D are reported in Table 4. Across carcass and raw ham evaluation traits, D% was on average 27.6% and ranged from 10% (for veining score) to 41% (for body weight at 270 d).
For FA, D% ranged from 0% to 44%. Dominance variance was null or negligible for SFA and individual saturated FA, but it represented approximately 40% of σ 2 a for the content of PUFA, C18:1n9ct, C18:2n6, and ω6. The proportion of σ 2 d to total genetic variance (D%) was on average 21% across infrared-predicted traits, ranging from 9% to 28%. Values of D% were 40% and 45% for the initial and final ham weight and ranged from 0% to 60% in ham curing weight loss traits. However, estimates for ham weight loss traits exhibited very high SE, with the exception of the weight loss from resting to the end of curing, for which D% was above 40%.
Across traits, σ 2 d contributed up to 154% of σ 2 a (for percentage ham weight loss from resting to the end of curing). In particular, it accounted for at least 26% of σ 2 a for all carcass traits. For traits measured on raw hams, Da% ranged from 11% to 38% for the raw ham quality traits evaluated with the linear scoring system, whereas it was 19% and 35% for the measures of subcutaneous fat depth. Values of Da% reached 78% in FA composition (for ω6) and 38% for infrared predicted fat composition (for the ratio between MUFA and PUFA).
In pigs, most of the estimates of nonadditive genetic effects have been obtained for maternal traits, daily gain, and backfat thickness [24], and the large majority of the studies were performed on purebred pigs, in which dominance effects are expected to be small, as compared to crossbreds [25]. Estimates varied across studies, supporting the hypothesis that dominance effects are trait-and population-specific. Detection of dominance variance needs the locus to be segregating at intermediate gene frequency, hence population-specific dominance effects can result from differences in allele frequencies in each population [25].
In purebreds, the ratio of σ 2 d to the total phenotypic variance ranged from 0.04 to 0.11 for growth traits, and between 0.02 and 0.05 for backfat thickness [24][25][26][27], hence dominance effects contributed only slightly to the phenotypic expression of the traits investigated, and their contributions were lower than the contributions of additive genetic effects. These estimates, as expected, are lower than those obtained in our study.
For growth traits, the high absolute value of σ 2 d , as well as the large σ 2 d compared with σ 2 a found in our study, agreed with previous results for growth traits in crossbred pigs [10]. Estimates of σ 2 d of body weight at different ages were reported to contribute to 27-54% of σ 2 a , while the ratio of σ 2 d to σ 2 a was 1.17 for slaughter weight, 0.57 for carcass weight, 0.94 for loin eye area and it ranged from 0.57 to 1.56 for different measures of backfat thickness [10].
Despite the uncertainty of the estimates due to the limited amount of records, a recent study [28] reported ratios of dominance deviation variance to the total phenotypic variance in 22 traits related to growth rate, feed efficiency, carcass composition, meat quality, behavior, boar taint, and puberty. For many traits, the dominance deviation variance was higher in crossbreds than in purebreds, but a clear common pattern of dominance expression between groups of analyzed traits and between populations was not encountered. In that study, the ratio of dominance deviation variance to phenotypic variance in crossbreds was 0.08 for average daily gain, 0.12 for backfat thickness, 0.09 for lean meat content, and 0.14 for ham cut (kg/kg). These values are slightly lower than our estimates. To our knowledge, dominance effects have never been estimated for fat composition and dry-cured ham quality traits, but results of the current study, although associated to relatively high SE, seem to indicate that also these traits may be affected by nonadditive effects.
Usefulness of Including Dominance in Models for Genetic Evaluations
To determine whether including litter or dominance effects in models for genetic evaluations improved model fitting, AIC of models M-L and M-D were compared to those yielded by M-0 (Table 5). For all carcass and raw ham evaluation traits except for veining score, as well as for all the infrared predicted traits and initial and final ham weight, both M-L and M-D had a significantly better fitting than M-0. Marginal improvements were also obtained with M-L for (SFA+MUFA)/PUFA and with M-D for ham weight loss (expressed as % and kg) from resting to the end of curing. However, model fitting of M-LD was not significantly different from that of M-L and of M-D, indicating that fitting either the litter effect or the dominance effect is sufficient to account for the nonadditive components. To evaluate whether dominance effects should be included in models for genetic evaluations in place of litter effects, the AIC of model M-D was also compared to the one obtained by M-L (Table 5). For 24 traits for which the AIC of M-D was significantly lower than that of M-0, the AIC of model M-D was also slightly lower than the one of M-L. These models were also compared using their relative likelihood. In 15 out of those 24 traits, M-D had a better fitting (relative likelihood < 0.8) than M-L. These traits include carcass, raw ham evaluation traits, infrared predicted FA, and ham weight loss (expressed as % and kg) from resting to the end of curing.
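To make the model-comparison criterion concrete, a minimal sketch of the AIC-based comparison is given below; the AIC values are hypothetical, and only the form of the relative-likelihood calculation is intended to reflect the comparison described in the text.

import math

def relative_likelihood(aic_model, aic_best):
    # exp((AIC_best - AIC_model) / 2): plausibility of a model relative to the best-fitting one
    return math.exp(0.5 * (aic_best - aic_model))

aic_ml, aic_md = 10251.0, 10248.5        # hypothetical values for one trait
rl_ml_vs_md = relative_likelihood(aic_model=aic_ml, aic_best=aic_md)
md_fits_better = rl_ml_vs_md < 0.8       # threshold used in the text
print(round(rl_ml_vs_md, 3), md_fits_better)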
Despite the small difference in AIC of M-D models compared to M-L, the Spearman's rank correlation between the breeding values (EBV) estimated by the different models for C21 breeding candidates and for C21 nucleus boars were >0.999 in all traits (data not reported in tables). These results indicate that, although dominance variance represents a significant proportion of the total genetic variance, the ranking of breeding candidates provided by models neglecting dominance effects is very similar to the one obtainable by models in which such effects are accounted for. Very high correlations (>0.999) between EBV predicted by pedigree-based models including or not dominance effects were reported also for stature in cattle [29], harvest body weight in Coho salmon [1], and number of kits born alive, number of kits born dead, and total number of kits in rabbit [7]. In addition, the accuracy of the EBV did not improve when models accounted for dominance effects, as reported in the majority of studies [4]. Substantial dominance variation was found to affect carcass and ham quality traits. Litter and dominance effects affect the estimates of h 2 and, if their contribution to the total genetic variance is ignored, the heritable variation and the response to selection may be incorrectly estimated. Nonadditive genetic components such as dominance effects are usually not accounted for in pedigree-based models because they tend to be confounded with the common maternal environment and they are thought to have little practical application in selection [3,4]. In addition, their estimation is computationally demanding. Currently, genetic evaluation of breeding candidates of the C21 sire line is performed for all the investigated traits with models neglecting nonadditive genetic effects, but including litter effects. Our results indicate that, for some traits, the common litter effect removes part or all of the nonadditive genetic effects when the two effects are accounted for in the model jointly. Accurate prediction of nonadditive effects may be important in selection of mates based on their specific combining abilities [3], where these nonadditive genetic effects may be exploited directly through specific mate allocation. However, specific mate allocation is not performed in commercial farms rearing crossbred finishing pigs. In such case, accounting for litter effects, even though neglecting dominance effects, in the models for genetic evaluations would be sufficient to prevent the effects arising from the overestimation of the genetic variance in a computationally efficient way.
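The rank-correlation check reported above can be reproduced with a few lines of code; the EBV vectors below are synthetic stand-ins, generated only to illustrate the computation of Spearman's correlation between the rankings produced by two models.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
ebv_with_dominance = rng.normal(size=500)                      # hypothetical EBV from M-D or M-LD
ebv_additive_only = ebv_with_dominance + rng.normal(scale=0.01, size=500)
rho, _ = spearmanr(ebv_additive_only, ebv_with_dominance)
print(round(rho, 4))   # close to 1, mirroring the >0.999 correlations reported for breeding candidates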
Nonadditive effects result from the interaction between alleles at a locus (dominance), and among alleles at different loci (epistasis). For the past 30 years, the goal of molecular quantitative genetics has been to define the genetic architecture of quantitative traits, to identify whether allelic effects are additive within and across loci, one allele is dominant over another, or the effect of a quantitative trait locus (QTL) is dependent on the genotype at another locus [30]. In quantitative genetics, partitioning genetic variance for a trait into statistical components due to additivity, dominance, and epistasis is useful for prediction and selection, even if it does not reflect the biological (or functional) effect of the underlying genes [30]. In pedigree-based estimates, while epistasis refers to the interaction among additive and dominance genetic effects (e.g., additive by additive, additive by dominance, additive by additive by dominance), dominance relationship between two given animals represents the probability that they share common pairs of alleles [31]. If two animals have the same set of parents or grandparents, it is possible that they share common pairs of alleles [31]. As a consequence, in studies performed on full-sib families, the dominance relationship matrix tends to be very similar to the incidence matrix of the common litter effect and genetic factors can be confounded with nongenetic factors such as shared environmental effects. Methods exploiting genomic information, as compared to traditional pedigree-based quantitative genetics methods, provide more accurate estimates of dominance effects [3] because the computation of the genomic dominance relationship matrix only requires knowledge about whether marker genotypes are heterozygous or not, and the estimate does not rely on probabilities of identical genotypes. As a consequence, dominance effects can be successfully disentangled from common environment effects [32]. However, accurate estimates of dominance variance need large genomic datasets (>2000 records) [25], as well as a large number of genotyped individuals per litter, in order to enable the detection of identical genotypes among individuals and dominance relationships [27]. For mating programs, genomic data can also be used to calculate genotype probabilities of hypothetical progeny resulting from possible matings between candidates [1]. These probabilities together with the estimated additive and dominance effects of marker genotypes can be used to define a set of mates that maximize performance in the future generation, if genotypes of males and females are available in the population. Compared to random mating, mate allocation can generate a further increase in the genetic response ranging between 6% and 22% [1].
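As an illustration of the genomic route mentioned above, the sketch below builds a genomic dominance relationship matrix from 0/1/2 genotype codes using one common heterozygosity-based coding; this particular parameterisation is an assumption made here for the example and is not necessarily the one that would be adopted in practice for the C21 line.

import numpy as np

def dominance_grm(genotypes):
    # genotypes: individuals x markers matrix of 0/1/2 allele counts
    geno = np.asarray(genotypes, dtype=float)
    p = geno.mean(axis=0) / 2.0                       # allele frequencies
    het = (geno == 1).astype(float)                   # heterozygote indicator per marker
    w = het - 2.0 * p * (1.0 - p)                     # centred dominance covariate
    scale = np.sum(2.0 * p * (1.0 - p) * (1.0 - 2.0 * p * (1.0 - p)))
    return w @ w.T / scale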
Conclusions
Substantial dominance variation was found to affect carcass and ham quality traits, however, litter and dominance effects could not be disentangled. For some traits, the common litter effects removed part or all of the variance due to nonadditive genetic effects when both such effects were accounted for by the statistical model. Neglecting litter and dominance effects affected the estimates of h 2 and, when their contribution to the phenotypic variance is ignored by models, the heritable variation and the expected response to selection may be incorrectly estimated. Accurate prediction of nonadditive effects may be important in selection of mates based on their specific combining abilities. However, specific mate allocation is not performed in commercial farms rearing crossbred finishing pigs. In such case, accounting for litter effects in place of dominance effect in the models for breeding values prediction would be sufficient to prevent possible effects arising from the overestimation of the genetic variance, with no effect on the ranking of animals and accuracy of EBV, and to ensure computational efficiency. The availability of genomic information enables the dissection of the total genetic variance into additive and nonadditive components. In the near future, the dominance contribution to the total variance in the traits investigated in this study might be re-evaluated making use of genomic information.
Author Contributions: Conceptualization, methodology, formal analysis, P.C.; writing-original draft preparation, R.R.; writing-review and editing, V.B. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Ethical review and approval were waived for this study, because animals providing data for the study were subjected to standard production and slaughter conditions and no additional measurements were taken. Observations used in this study were from pigs produced in the sib-testing program of the C21 Goland sire line (Gorzagri, Fonzaso, Italy) and were registered at the farm where the program has been carried out since 1998. The farm operates in compliance with regulations of the Italian law on protection of animals.
Informed Consent Statement: Not applicable.
Data Availability Statement: Restrictions apply to the availability of these data. Data was obtained from Gorzagri (Fonzaso, Italy) and are available from the authors with the permission of Gorzagri.
Conflicts of Interest:
The authors declare no conflict of interest. | 2021-03-03T05:19:56.772Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "63316e23e5e7f9827cf7d36e1d19ae8e2e52da19",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/11/2/481/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "63316e23e5e7f9827cf7d36e1d19ae8e2e52da19",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
10179707 | pes2o/s2orc | v3-fos-license | Increased pulmonary secretion of tumor necrosis factor-α in calves experimentally infected with bovine respiratory syncytial virus
Bovine respiratory syncytial virus (BRSV) is an important cause of respiratory disease among calves in the Danish cattle industry. An experimental BRSV infection model was used to study the pathogenesis of the disease in calves. Broncho alveolar lung lavage (BAL) was performed on 28 Jersey calves, of which 23 were experimentally infected with BRSV and five were given a mock inoculum. The presence of the cytokine tumor necrosis factor α (TNF-α) in the BAL fluids was detected and quantified by a capture ELISA. TNF-α was detected in 21 of the infected animals. The amount of TNF-α in the BAL fluid of calves killed post inoculation day (PID) 2 and 4 was at the same very low level as in the uninfected control animals. Large amounts of TNF-α were detected on PID 6, maximum levels of TNF-α were reached on PID 7, and smaller amounts of TNF-α were seen on PID 8. The high levels of TNF-α appeared on the days where severe lung lesions and clinical signs were obvious and the amounts of BRSV-antigen were at their greatest. Although Pasteurellaceae were isolated from some of the BRSV-infected calves, calves treated with antibiotics before and through the whole period of the infection, as well as BRSV-infected calves free of bacteria reached the same level of TNF-α as animals from which bacteria were isolated from the lungs. It is concluded that significant quantities of TNF-α are produced in the lungs of the calves on PID 6–7 of BRSV infection. The involvement of TNF-α in the pathogenesis of, as well as the anti-viral immune response against, BRSV infection is discussed.
Introduction
Enzootic pneumonia is a widespread disease in cattle populations throughout the world (Bryson, 1985). In Denmark, bovine respiratory syncytial virus (BRSV) plays a crucial role in the respiratory disease complex, leading to substantial losses within the calf rearing industry. BRSV is a single-stranded RNA virus, which belongs to the genus Pneumovirus, classified in the subfamily Pneumovirinae in the family Paramyxoviridae (Murphy et al., 1995). The virus is closely related to the human respiratory syncytial virus (HRSV) (Collins et al., 1996).
The gross lesions appear as large areas of consolidated tissue in the cranioventral parts of the lungs in which bronchi and bronchioli often are ®lled with mucopurulent exudate. Interstitial edema may be seen in all lobes, whereas emphysema typically is located in the diaphragmatic lobes (Kimman et al., 1989;Bryson, 1993). Microscopic ®ndings consist of epithelial lesions in the bronchi, bronchioles and alveoli, which are ®lled or in®ltrated with neutrophils and macrophages. Frequently, multinuclear syncytia as well as hyaline membranes can be found. Also, evidence of bronchiolar repair including epithelial hyperplasia and organisation of exudate are observed fairly early in the course of the infection (Kimman et al., 1989;Bryson, 1993). Immunostainings performed on lung tissue indicate that BRSV are located in the cranioventral lobes in most cases, and rarely in the caudal parts of the diaphragmatic lobes (Kimman et al., 1989;Ellis et al., 1996;Viuff et al., 1996). The similarity of the clinical signs and pathology appearing in BRSVinfected calves and HRSV-infected infants is noticeable (Baker, 1991). Hence, studies of BRSV-infections in calves also contribute valuable information to the disease in infants. Today, the available treatments and prophylactic precautions against the diseases are far from optimal. In fact, a vaccine against HRSV is still not available for children (Collins et al., 1996). Therefore, a great effort has been made to study the immune response and pathogenesis of the viral infection in ruminants (Kimman, 1993;Belknap et al., 1995).
Since lesions are present in areas of the lung where little or no virus is detected, one or several immunopathological mechanisms are assumed to be involved in the disease (Baker, 1991;Kimman, 1993). Moreover, the importance of different subpopulations of leukocytes in the lung during infection has been investigated (Kimman, 1993;Taylor et al., 1995). In particular, alveolar macrophages (AM), which phagocytise pathogens and secrete in¯ammatory and immunoregulatory cytokines, have attracted attention. Human or murine AM infected with HRSV in vitro produce large amounts of the cytokine, tumor necrosis factor-a (TNF-a) (Panuska et al., 1990;Becker et al., 1991). Today, accumulating data from in vivo investigations indicate that TNF-a is implicated in the pathogenesis of the disease. AM isolated from infants suffering from an acute RSV infection have been found to express TNF-a protein in a cell-associated manner (Midulla et al., 1993). Moreover, Matsuda et al. (1995) demonstrated the presence of TNF-a in the nasal discharge from HRSV-infected infants. In the murine HRSV-model, high levels of bioactive TNF-a were found in the lungs and sera of HRSV-infected BALBc mice, post inoculation day (PID) 2 (Hayes et al., 1994). Additionally, TNF-a has been shown to possess an antiviral effect against HRSV in vitro (Cirino et al., 1993;Merolla et al., 1995) as well as in mice in vivo (Neuzil et al., 1996). Therefore, the cytokine may also be involved in the immune defense directed against the RSV infection.
Hence, TNF-α seems to be an important cytokine to look for in the lungs of BRSV-infected calves.
The aims of this study were to investigate whether BRSV induces secretion of TNF-α in the lungs of infected calves, and if so, to match these findings with the clinical signs, the pathological changes and the appearance of the BRSV-antigen in the lungs of the animals.
To our knowledge, this is the first evidence of TNF-α being present in the lungs of BRSV-infected calves.
Viral strain/isolate and the preparation of inoculum
The third cell culture passage of a BRSV isolate designated 2022 was used as an inoculum for the experimental infections of calves. Origin and propagation of the virus are described in Larsen et al. (1998). Mock inoculum consisted of fetal calf lung cells (FBL cells) which were grown and frozen in the same way as the FBL cells in which the virus was propagated. All inocula were tested free of bovine virus diarrhea virus (BVDV), bovine PI-3 virus (BPI-3V), bovine adeno virus (BAV), bovine corona virus (BCV), bovine entero virus (BEV), infectious bovine rhinotracheitis virus (IBRV), bovine reo virus, bacteria and bovine mycoplasmas.
The experimental BRSV model
An experimental model for BRSV infections in calves (Tjørnehøj, in preparation) was used to study the pathogenesis of the disease. Briefly, 7–14-day-old male Jersey calves were purchased from closed herds and reared in isolation units following normal management procedures for calves. Eight- to twenty-one-week-old calves were inoculated once by a 10 min aerosol exposure, followed by an intratracheal injection of viral inoculum, delivering 10^4.6–5.2 tissue culture infectious doses 50 (TCID50) of BRSV by each route. Mock inocula were diluted and administered in the same way as the viral inoculum. All calves in the experiments were tested free from infection with BVDV before inoculation.
After inoculation with virus, the general health status of the calves was monitored. The onset and degree of clinical signs were followed daily.
A total of 28 calves from five different experiments were included in the study. Twenty-three animals were given the BRSV inoculum and the rest of the calves received a mock inoculum. The calves were killed on PID 2 (2), 4 (2), 6 (9), 7 (8) and 8 (2). Four of the calves killed on PID 6 were treated systemically with a broad-spectrum antibiotic, enrofloxacin (Baytril) 2.5 mg/kg (Bayer), starting from 2 days before the inoculation with BRSV and throughout the experiment.
Necropsy
The lungs were immediately removed from the animals after exsanguination. Photographs were taken of the ventral and dorsal sides of the lungs. The extent of consolidated lung tissue was scored from 0 to 5, where the score 0 was given to lungs completely free of lung lesions; 1 to lungs with a few spots (1–5%) of consolidated lung tissue, 2 to lungs with 5–15%, 3 to lungs with 15–30% and 4 to lungs with 30–50% of consolidated tissue. The score 5 was given to lungs where most of the tissue in the cranial, medial and accessory lobes and at least a third of the diaphragmatic lobes consisted of consolidated tissue (>50%).
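For clarity, this scoring scheme can be written as a small helper function; the handling of borderline values below is an assumption, since the text gives the ranges but not the exact treatment of the cut-offs, and the additional lobe-specific requirement for score 5 is not encoded.

def lung_lesion_score(pct_consolidated):
    # translate the percentage of consolidated lung tissue into the 0-5 necropsy score
    if pct_consolidated <= 0:
        return 0
    if pct_consolidated <= 5:
        return 1
    if pct_consolidated <= 15:
        return 2
    if pct_consolidated <= 30:
        return 3
    if pct_consolidated <= 50:
        return 4
    return 5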
Bronchoalveolar lavage (BAL)
BAL was performed on the left lung by flushing the bronchi of the lung with 100 ml of Eagle's Minimum Essential Medium (MEM). After collection, the BAL fluids were supplemented with equal amounts of medium containing 1000 IU penicillin G/ml, 1 mg of streptomycin per ml and 0.005% amphotericin. BAL fluids for cytokine and viral examinations were immediately snap-frozen in liquid nitrogen, whereas the BAL fluids for cytospin preparations were kept at 4°C and processed within 8–10 h. Cells (3 × 10^4 to 5 × 10^4) were centrifuged (Shandon cytospin centrifuge, 3 min at 1200 rpm) onto a coated slide (SuperFrost), air dried for 5 min and fixed in 99% ethanol for 45 min at −20°C.
Demonstration of other infectious agents
To rule out the possibility of other agents causing disease or lesions in the lung, tissue from the right lung was tested for the presence of BPI-3V, BAV, and BCV by antigen enzyme-linked immunosorbent assay (ELISA) . In addition, lung tissues were examined for viable BEV, IBRV, bovine reo virus, BPI-3V, BAV and BVDV. Brie¯y, supernatants of minced lung tissue were transferred to a monolayer of bovine kidney cells, and cytopathic effect (CPE) combined with immuno¯uorescence technique were used to con®rm their presence. Also, tissue samples were taken from the lung, spleen and liver and cultured for bacteria. Finally, bronchial swabs were analyzed for the presence of Mycoplasma spp. The Danish Veterinary Laboratory and the Danish Veterinary Institute for Virus Research carried out these analyses.
Demonstration of BRSV in the lungs
Two grams of lung tissue were collected from each of nine different predetermined areas of the left lung representing the dorsal, medial and ventral parts of all the lobes. The lung tissues were stored at −40°C until the examination for the presence of BRSV antigen by an indirect antigen ELISA (Tjørnehøj et al., in preparation) was performed. The sum of the titers of the nine samples was used as a measurement of the level of BRSV-antigen in the lung tissue.
Lung lavage fluids and supernatants from lung tissue soaked in MEM were investigated for infectious BRSV (Tjørnehøj et al., in preparation). Samples were transferred to FBL cells and grown until CPE occurred. Two cell passages were made, followed by an indirect immunofluorescence test using hyperimmune guinea pig serum against BRSV and a FITC-conjugated rabbit anti-guinea pig antibody (DAKO) to visualize BRSV in the cell culture.
Immunocytochemistry performed on BAL cells
Detection of BRSV-antigen in BAL cells was done by immunocytochemistry. Cytospin preparations were incubated for 10 min in a Tris-buffered saline (TBS: 0.05 M Tris, 0.15 M NaCl, pH 7.6) followed by a 15 min blocking with TBS, 5% swine serum. Slides were then incubated for 1 h with a bovine biotinylated hyperimmune serum against BRSV diluted 1:8000 in TBS containing 5% swine serum . This was followed by a 1 h incubation with a streptavidin-alkaline phosphatase complex (DAKO). Three washes with TBS were performed between each incubation step. Finally, virus positive cells were visualized with Fast Red substrate (KemEnTec), and Harris Haematoxylen as a counterstain. Cytospin preparations of FBL cells infected with BRSV were used as positive control, while non-infected FBL cells and BAL cells from animals tested negative for BRSV served as negative controls. The immunostainings were evaluated by light microscopy. Each slide was given one of the following scores: 0 (no virus positive cells), 1(1±20% virus positive cells), 2 (20±40% virus positive cells), 3 (40± 60% virus positive cells), 4 (60±80% virus positive cells) or 5 (80±100% virus positive cells).
Immunocytochemistry was also used to study the presence of TNF-a in the BAL cells. Rehydration and blocking were performed as described above. Cytospin preparations of BAL cells were incubated with a monoclonal antibody against TNF-a (Ellis et al., 1993), diluted 1:25 in TBS containing 5% swine serum for 1 h followed by 30 min incubation with EnVision (DAKO) and visualized with Fast Red substrate (KemEnTec). An isotype matched antibody (IgG 1 ) (DAKO) diluted 1:25 was applied as a negative control for the immunocytochemistry procedure whereas cytospin preparations of freshly isolated BAL cells from uninfected calves were run as negative control cells. Cytospin preparations of BAM stimulated for 5 h with endotoxin 5 mg/ml (E. coli, O111: B4 (Sigma)) were used as positive controls.
ELISA to quantify TNF-α
The presence of TNF-α in the lung lavage fluid was examined by a capture antigen ELISA. The monoclonal and polyclonal antibodies used to detect TNF-α were described by Ellis et al. (1993). C96 maxisorb immunoplates (Nunc) were coated for 16–24 h with the monoclonal antibody 1D11-13 against TNF-α, diluted in coating buffer 1:1000 (4.53 mM NaHCO3, 1.82 mM Na2CO3, pH 9.6). The plates were washed six times with TBS containing 0.05% Tween (TBS-T) before triplicates of lung lavage diluted in TBS-T, 0.5% gelatin (TBS-T-g) were transferred to the wells and left overnight at 4°C. The following day the plates were washed with TBS-T, and incubated for 1 h with a polyclonal rabbit anti-TNF-α (pool 88) diluted 1:1500 in TBS-T-g. This was followed by a washing step, a 1 h incubation period with biotinylated goat anti-rabbit, H+L chain (Zymed) diluted 1:10,000, another washing step, and a 1 h incubation period with streptavidin-alkaline phosphatase (GibcoBRL) diluted 1:2000. Finally, p-nitrophenyl phosphate substrate (GibcoBRL) was added to the wells and incubated for 20 min. The plates were read at 405 nm using 495 nm as a reference. A dilution of recombinant bovine TNF-α (Ciba-Geigy) was used as the standard. Plain medium was used as negative control. OD-values from the standard were plotted against ng/ml TNF-α. The dilution of the lavage fluid occurring in the lungs during the washing procedure was not taken into account when calculating the amounts of TNF-α.
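The final read-out step can be illustrated with a short interpolation sketch; the standard-curve values below are hypothetical, and simple linear interpolation between standards is assumed, since the text only states that OD values of the standard were plotted against the TNF-α concentration.

import numpy as np

# Hypothetical standard curve: corrected OD (405/495 nm) for known rbTNF-alpha concentrations (ng/ml)
std_conc = np.array([0.0, 0.5, 1.0, 2.5, 5.0, 10.0, 25.0])
std_od = np.array([0.05, 0.11, 0.18, 0.40, 0.74, 1.35, 2.60])

def od_to_concentration(sample_od):
    # interpolate sample OD readings on the standard curve
    return np.interp(sample_od, std_od, std_conc)

triplicate_od = np.array([0.62, 0.66, 0.64])
print(round(float(od_to_concentration(triplicate_od).mean()), 2))  # estimated ng/ml in the tested dilution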
Statistical analysis of the data
A two-tailed Student t-test was used to compare (1) the amount of TNF-α present in BRSV-infected calves and calves given the mock inoculum; (2) the amount of TNF-α present in BRSV-infected calves treated or not treated with antibiotics on PID 6; and (3) the amount of TNF-α in BRSV-infected calves with or without bacteria in the lungs, PID 6–8. The data were log-transformed to achieve a normal distribution. P < 0.05 was considered significant.
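A minimal sketch of this comparison, assuming hypothetical concentration values, is shown below; the log-transformation and the two-tailed Student t-test follow the description above, with a small offset substituted for undetectable samples so they can be log-transformed (how such samples were actually handled is not stated).

import numpy as np
from scipy.stats import ttest_ind

infected = np.array([11.2, 38.6, 94.6, 20.8, 82.7, 256.9])   # hypothetical TNF-alpha values, ng/ml
mock = np.array([0.05, 0.05, 1.0, 0.05, 0.05])               # undetectable samples set to a small offset

t_stat, p_value = ttest_ind(np.log(infected), np.log(mock))  # two-tailed Student t-test on log data
print(p_value < 0.05)                                        # significance threshold used in the study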
Clinical symptoms and gross pathology
Based on the temperatures and respiration rates recorded on the day the animals were killed, the first clinical signs appeared in calves killed on days 2–4, some of which had mild coughing, but body temperature (39.3°C) and respiratory rate (40 min⁻¹) within the normal range (Table 1). Severe clinical signs were seen in calves killed on PID 6–8, which exhibited depression, mucopurulent nasal discharge, severe coughing and dyspnea. Furthermore, respiratory rates ranging from 50 to 105 min⁻¹ were monitored in 16 of the 19 calves killed on PID 6–8. Febrile reactions were present in the calves on PID 6–8. In total, seven out of the nine calves killed on PID 6 and five out of the eight calves killed on PID 7 reached temperatures between 39.3 and 40.9°C. One calf killed on PID 8 had a normal temperature through the whole period. The other calf killed on PID 8 dropped from 40.9°C on PID 7 to 39.7°C on PID 8 (results not shown).
A few spots (1–5%) of dark red consolidated lung tissue were seen on PID 2–4 in the lungs of the calves infected with BRSV. These lesions could hardly be distinguished from the subacute to chronic lesions seen in some of the negative control calves.
Small amounts of mucopurulent discharge were present in bronchi and trachea in both BRSV and mock-inoculated animals. Lungs from these calves were given a score from 0 to 1.
On PID 6–8, the infected calves had a moderate to severe exudative bronchopneumonia with 10–50% consolidated lung tissue (score 2–5). Average scores were 2.8 on PID 6, 3.3 on PID 7, and 3.5 on PID 8 (Fig. 1A). Trachea and bronchi contained large amounts of mucopurulent to purulent discharge. In addition, there was widespread interstitial edema in all lobes, and emphysema was found especially in the diaphragmatic lobe of the most affected calves.
Isolation of other infectious agents
No other bovine viruses were isolated from the lower airways of the calves during the experiment. Mycoplasma dispar was present in almost every calf, including the ones given the mock inoculum (Table 1). Moreover, it was possible to isolate Mycoplasma bovirhinis or Ureaplasma simultaneously in some individuals. Of the 23 BRSV-infected calves, six were found to be negative for mycoplasmas; three of these animals were treated with antibiotics (Table 1). No bacteria were isolated from BRSV-infected calves killed on PID 2 or PID 4, calves treated with antibiotics (PID 6), or any calf given the mock inoculum. Haemophilus somnus and Pasteurella multocida were isolated from two and five calves, respectively, of the 14 BRSV-infected calves killed on PID 6–8.
Detection of BRSV
The total BRSV titer of the left lung was calculated as the sum of the antigen titers measured in the nine different predetermined spots. BRSV antigens could not be detected on PID 2, while moderate values were seen on PID 4 (94–304) and high values appeared on PID 6 (400–1024) and PID 7 (96–768). One calf killed on PID 6 and one killed on PID 7 were negative for BRSV in the nine tested sites. The amount of detectable antigens decreased to very low levels (0–80) on PID 8 (Fig. 1B).
Infectious BRSV could be transferred from the lungs of the calves to fetal calf lung cells from PID 2 to 6. On PID 7, only two of eight calves contained infectious BRSV, and on PID 8 all calves were negative for infectious BRSV in the lung tissue; however, one calf had infectious virus in the BAL fluid (Table 1).
Immunocytochemistry was used to detect BRSV-antigen within the BAL cells. BRSV antigens were located in neutrophils, alveolar macrophages and epithelial cells. The amount of neutrophils in BAL increased dramatically from PID 4 to 8. The actual percentage of each cell population was difficult to establish, because many of the neutrophils and epithelial cells appeared in clumps. On PID 2 BRSV-antigen was detected in only a few cells (score 1). On PID 4, the amount of cells containing BRSV increased to 40–50% (score 3) of the cells in the cytospin, whereas at least 75% (score 4) of all cells were positive on PID 6. The score 5 was given to a single animal, on PID 6. The number of BRSV positive cells declined to around 50% on PID 8 (score 3). No BRSV positive signals appeared in mock-inoculated calves (Table 1). (Footnotes to Table 1: b Negative control calves (bold roman figures), which were given a mock inoculum. c The respiration rates were measured on PID 3. d The calves were treated with an antibiotic; Baytril was administered daily, from 2 days before inoculation with BRSV until the calves were killed on PID 6. e The temperature was taken on PID 5. f The temperature was taken on PID 7. g Infectious BRSV was found in the BAL fluid, but not the lung tissue. h ND: not done.)
Detection of TNF-α
Significant amounts of TNF-α were found in the lungs of BRSV-infected calves compared to the uninfected calves (P < 0.05). As demonstrated in Table 1 and Fig. 1C, detectable but very low amounts (0.5–0.8 ng/ml) of TNF-α were found on PID 2 and 4. The cytokine level changed dramatically on PID 6 to values ranging from 11.2 to 94.6 ng/ml (a mean of 38.6 ng/ml), and increased even more on PID 7, where BAL contained from 20.8 to 256.9 ng/ml (a mean of 82.7 ng/ml). High TNF-α values were found in BAL from three of the four calves treated with antibiotics. One calf (no. XIV-4) was not affected by the BRSV infection to the same degree as the rest of the calves in the group, and neither BRSV antigen nor TNF-α was detected in BAL from this animal. The TNF-α levels in BRSV-infected calves treated with antibiotic were not significantly different from the TNF-α levels in BRSV-infected calves not treated with antibiotic on PID 6 (P > 0.05). Also, the TNF-α levels in calves from which P. multocida or H. somnus were isolated were not different (P > 0.05) from those in calves where no bacteria were detected (PID 6–8). Lung lavages performed on two animals on PID 8 contained 3.7 and 10.5 ng/ml, respectively. In general, calves which had high levels of BRSV antigens in the lung tissue had a fairly large amount of TNF-α in the lung wash. Extremely high TNF-α values were measured in two calves on PID 7; P. multocida was isolated in one of these cases. In contrast, TNF-α was found only in a small amount (1.0 ng/ml) in one mock-inoculated animal on PID 4 and could not be detected in the remaining uninfected control calves.
Furthermore, BAL cells from four BRSV-infected calves killed on PID 6 were examined for the presence of cell-associated TNF-α by immunocytochemistry. Many BAL cells were positive for TNF-α, with some cells staining intensely or moderately for the cytokine and others only weakly. Therefore, no subpopulation of cells producing TNF-α could be identified by this method. BAL cells from calves which stained intensely for BRSV-antigen were strongly positive for TNF-α on parallel slides. The isotype matched control resulted in almost no background staining, and no or few positive signals could be detected in a minority of the BAL cells from non-infected calves.
Discussion
As several experiments have indicated that the proin¯ammatory and antiviral cytokine TNF-a is involved in the pathogenesis of RSV-infections in humans and mice (Panuska et al., 1990;Neuzil et al., 1996) we investigated if TNF-a is produced in the lungs of calves infected with BRSV. In agreement with what Hayes et al. (1994) found when they studied the experimental HRSV-infection in mice, we demonstrated that large amounts of TNF-a are produced in the lungs of calves experimentally infected with BRSV, but in contrast to their study, our RSV-studies were performed in the natural host.
Furthermore, the TNF-α measured in BAL of the BRSV-infected calves is likely to be bioactive, since results from the ELISA correlate well with the WEHI-164, clone 13, bioassay measurement of bioactive TNF-α (Ellis et al., 1993).
Being a proin¯ammatory cytokine, TNF-a is known to attract and activate neutrophil granulocytes and lymphocytes by itself or through the induction of other cytokines (Ohmann et al., 1990;Chiang et al., 1991;Persson et al., 1993). The cytokine has also been shown to be involved in the mechanisms leading to increased permeability of endothelium (Zeck-Kapp et al., 1990) and epithelium in lung in¯ammations (Li et al., 1995). Additionally, TNF-a has been shown to inhibit the alveolar type II epithelial cells production of surfactant phospholipid (Arias-Diaz et al., 1993) and surfactant proteins (Wispe et al., 1990;Pryhuber et al., 1996), a mechanism which has been suggested to contribute to the pathophysiology of adult respiratory distress syndrome (ARDS) (Arias-Diaz et al., 1993). Therefore, the biological effects induced by TNF-a could explain the presence of purulent exudate, edema and atelectasis in the BRSV-infected lung. Hence, we suggest that the high levels of TNF-a measured on PID 6±7 contribute to the severe lung lesions and clinical signs accompanying BRSV infection on PID 6±8.
Since both P. haemolytica and P. multocida are capable of inducing a harmful cytokine response in lungs (Bienhoff et al., 1992; Yoo et al., 1995), the TNF-α measured in the BRSV-infected calves coinfected with Pasteurellaceae spp. could have been caused by the bacteria. However, the data suggest that BRSV was capable of inducing a significant secretion of TNF-α without any other contributing agents. Indeed, there was no correlation between the presence of other infectious agents and the level of TNF-α.
In most cases, high levels of TNF-α were found in the calves which had large amounts of BRSV antigens in their lungs. Still, some calves which had very high levels of BRSV antigen had only a moderate amount of TNF-α in their lung fluids (individual data not shown). This might be explained by the kinetics of TNF-α, which is known to be produced and degraded rather quickly. Thus, the level of the cytokine can easily change within hours (Adams et al., 1990; Horadagoda et al., 1994).
Previously, the alveolar macrophages have been shown to be the main source of TNF-a in the lung environment (Warren et al., 1989;Van Nhieu et al., 1993). However, like Yoo et al. (1995), we were unable to identify the cells responsible for the production of TNF-a in the lungs, as many BAL cells including some cell debris stained positive for the cytokine by immunocytochemistry in the BRSV-infected animals. This might be explained by the fact that TNF-a also appears in a cell-associated manner, partly because the cytokine binds to cells via its receptors (Ohmann et al., 1990), and also because TNFa exists on the cell surface as a membrane-associated form (Nii et al., 1993). Compared to other bovine studies (Yoo et al., 1995), relatively high amounts of TNF-a were detected in the BAL¯uids in this study, possibly because cell-associated TNF-a was included, since the BAL-cells were not separated from the lung wash suspensions before they were frozen.
Another potential contributor to the TNF-α present in the BAL is the airway epithelium. Although alveolar and bronchiolar epithelial cell cultures have been shown to secrete IL-1α, IL-6, IL-8 and granulocyte macrophage stimulating factor (Arnold et al., 1994; Jiang et al., 1998; Patel et al., 1998) when infected in vitro with HRSV, controversy exists as to whether these cells also secrete TNF-α (Arnold et al., 1994; Patel et al., 1995).
In most viral infections the speci®c immune response consists of a humoral and a cell mediated immune response, which are responsible for the clearance of virus. In BRSV infection speci®c antibodies have been shown to appear in the BRSV-infected calves from PID 8±10, but the signi®cance of both actively and maternally acquired antibodies in the clearance of BRSV is still unclear (Kimman, 1993). However, investigations performed by Taylor et al. (1995) indicate that cytotoxic T-lymphocytes could participate in the clearance of BRSV infections. In our study, the calves had few or no maternal antibodies (IgG 1 ) against BRSV when they were inoculated with BRSV and IgM was not detected before PID 8, and only in one calf (data not shown). The presence of cytotoxic Tlymphocytes in the lungs of BRSV-infected calves is currently being investigated within the group.
In addition to specific immune responses, studies performed on HRSV-infected mice and cell cultures indicate that TNF-α may also play a role in the recovery from the infection (Merolla et al., 1995; Neuzil et al., 1996). In our study, no infectious BRSV could be isolated from the lungs of six out of eight BRSV-infected calves on the same day (PID 7) when TNF-α reached its maximum level in BAL. These findings imply that the cytokine could play an important role in the antiviral immune defense against BRSV infection in calves at a time when the specific immune response against BRSV has not been fully established.
The anti-viral effect of TNF-a is mainly established through the TNF-receptor p55, which induces cytotoxicity when it binds the cytokine (Wong et al., 1992;Tartaglia et al., 1993). This mechanism might play an important role in the RSV-infection since in vitro studies have shown that the TNF-receptor p55 is secreted in large amounts from lung epithelia cells infected with HRSV (Arnold et al., 1994). Interestingly, a homology was recently discovered between a conserved region in the G-protein of RSV and a domain within the TNF-receptor p55 (Langedijk et al., 1998). Therefore, one could speculate that RSV has evolved mechanisms by which it is able to interact with TNF-a, and thereby inhibit the antiviral effect of the cytokine. This interference may eventually result in the dramatic levels of TNF-a found in the lung.
Understanding the pathogenesis of RSV infections is essential for finding a treatment to reduce the lung damage. Clearly, future experiments should involve inhibition of the TNF-α response early and late in the course of the viral infection. Intrapulmonary administration of recombinant bovine TNF-α, or of monoclonal antibodies against TNF-α or its receptors, is one way to continue these studies in calves. Moreover, drugs like dexamethasone and pentoxifylline, which are known to inhibit the synthesis of TNF-α (Han et al., 1990; Balibrea et al., 1994), could be administered during the experimental BRSV infections. | 2018-04-03T00:17:41.677Z | 2000-10-19T00:00:00.000 | {
"year": 2000,
"sha1": "da154328172c07fd88c44a0203ffb4b6257c2a4b",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/s0165-2427(00)00214-2",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "0110eaeb5103ac1d44cc2e7aa0f6c0f621e6acc7",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
118509429 | pes2o/s2orc | v3-fos-license | Generalised Kramers model
We study a particular generalisation of the classical Kramers model describing Brownian particles in the external potential. The generalised model includes the stochastic force which is modelled as an additive random noise that depends upon the position of the particle, as well as time. The stationary solution of the Fokker-Planck equation is analysed in two limits: weak external forcing, where the solution is equivalent to the increase of the potential compared to the classical model, and strong external forcing, where the solution yields a non-zero probability flux for the motion in a periodic potential with a broken reflection symmetry.
I. INTRODUCTION
This paper addresses the problem of the overdamped motion of independent particles in the external potential subjected to a random forcing in one spatial dimension. The model is an extension of the so-called Kramers model [1] in which a particle at the position x(t) executes creeping motion according to the following equation of motion:

η dx/dt = −U′(x) + f(t). (1)

Here, η is the friction coefficient, U(x) is the potential, and f(t) is the stochastic force which is usually modelled as a rapidly fluctuating time-dependent random noise. We generalise this model by considering the random force f(x, t) which depends not only upon time, but also upon the position of the particle. This generalisation is proposed in the same way as the one discussed for a closely related model of inertial particles (the Ornstein-Uhlenbeck process [2]) studied earlier by the author of this paper in collaboration (see [3,4]). Such a generalisation of the Ornstein-Uhlenbeck process leads to a number of non-trivial results: non-Maxwellian stationary distribution of the velocity, anomalous diffusion of the velocity and position, and 'staggered ladder' spectra of the corresponding Fokker-Planck operator.
The model of Brownian particles in the external potential has a large number of important applications in physics and chemistry and below we briefly discuss two of them. First example is a model of chemical reaction processes, where the position of the particle represents the reaction coordinate which undergoes a noise-activated escape process driven by thermal fluctuations [5]. The reaction coordinate is a rather abstract notion in chemistry characterising the state of a chemical reaction. Typically, the coordinate wiggles around one of the minima of the potential energy profile, until a sequence of random 'kicks' induced by thermal fluctuations transports it over the potential barrier, so that its dynamics can be accurately described by the motion of the Brownian particle in the external potential.
The other interesting application of the Kramers model concerns a concept of the Brownian ratchet, which was originally introduced by Feynman [6] to illustrate laws of thermodynamics. In its simplest form, the device consists of a ratchet, which resembles a circular saw with asymmetric teeth, rotating freely in one particular (forward) direction. A pawl is attached to the ratchet, thus preventing it to rotate in the other (backward) direction.
The ratchet is connected to a paddle wheel by a massless frictionless rod and the whole mechanism is immersed in a thermal bath at a given temperature. It is assumed that the mechanism is so small that the paddle wheel can rotate in response to collisions with the molecules of the thermal bath, thus rotating back and forth. Because the pawl restricts the backward rotation, the ratchet slowly spins forward as the molecules hit the paddle-wheel.
If a weight is attached to the rod connecting the ratchet and the paddle wheel, it would be lifted by this forward rotation making the device 'perpetuum mobile' of the second kind.
The contradiction is resolved by noting that the device must be very small in order to react to individual collisions with the molecules. This means that the pawl itself must be influenced by the collisions, so that every now and then it would be lifted and fail to prevent the backward rotation. Since both the paddle wheel and the ratchet are immersed in the same thermal bath, the probability for the pawl to fail is the same as the probability for the ratchet to rotate forward, so that no net work can be extracted. The analogy with the model of the Brownian particle in the potential is evident. If the position of the particle represents the angle of rotation of the rod, then the dynamics is periodic and can be split up into two parts: random fluctuations induced by collisions of the paddle wheel with the molecules and motion in the potential representing the interaction between the pawl and teeth of the ratchet. The potential in this case is periodic and asymmetric (the so-called 'sawtooth' potential). The analysis of the classical model shows that there is no net transport (probability flux) of the Brownian particles moving in a periodic and asymmetric potential.
In many problems it suffices to know the probability density function (PDF) of the position of the particle in the steady state in order to understand all important properties of the Kramers model. The principal result of this paper is the PDF of the position of the particle in the generalised model in the limit of short correlation time of the random force.
We proceed as follows. We start by describing the generalised model and introducing properties of the stochastic force. In the limit of short correlation time of the stochastic force the PDF satisfies the Fokker-Planck equation, which we derive for the general case.
The stationary solution of the Fokker-Planck equation can be simplified in two asymptotic limits, corresponding to very large and very small values of the external potential force.
The generalised model in the weak external force limit was first considered in [7], where the PDF was found to be equivalent to a reduction of the potential compared with the classical Kramers model. Here, a more transparent analysis is used giving rise to many additional results. We find that in the weak forcing limit the generalisation leads to an effective increase of the potential, rather than a decrease derived in [7]. In the strong forcing limit we find the solution that corresponds to a non-zero probability flux in the case of the motion in a periodic potential with a broken reflection symmetry.
II. STOCHASTIC MODEL
Let us consider a very small particle moving in the potential U(x) and subject to the stochastic force f(x, t) in one spatial dimension. For a particle with a negligible mass the velocity is determined by the balance of the forces acting upon it, so that the equation of motion reads

η dx/dt = −U′(x) + f(x, t), (2)

where U′(x) ≡ dU(x)/dx is the external potential force. The random force f(x, t) in (2) is assumed to be a stationary and translationally invariant Gaussian process with zero mean and correlation function

⟨f(x, t) f(x′, t′)⟩ = C(x − x′, t − t′), (3)

where angular brackets denote the average over noise realisations throughout. The noise has a typical magnitude σ, correlation length ξ, and correlation time τ. We assume that the correlation function is smooth and sufficiently differentiable and decays rapidly for |x| > ξ and |t| > τ. In the absence of the external potential the particle is not bounded and diffuses, so that the mean square displacement grows as ⟨x²(t)⟩ ∼ 2 D_x t with the diffusion constant D_x ∼ σ²τ/η² for t ≫ τ. Relaxation towards a statistically stationary state is associated with the action of the potential. The corresponding relaxation time T depends upon particular properties of the potential, as well as properties of the random force, but in the general case it cannot be determined explicitly.
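A minimal numerical sketch of this model is given below; the potential, the parameter values and the Fourier-mode construction of the random force are all illustrative choices made here (they are not taken from the paper), but the force is Gaussian, statistically stationary and translationally invariant, as required, and the position is advanced with an explicit Euler step of the equation of motion (2).

import numpy as np

rng = np.random.default_rng(1)

eta, sigma, xi, tau = 1.0, 1.0, 1.0, 0.01     # friction, force magnitude, correlation length and time
dt, n_steps, n_modes = 1.0e-3, 100_000, 16

def U_prime(x):
    # example potential force: a periodic potential with broken reflection symmetry
    return np.cos(x) + 0.3 * np.cos(2.0 * x)

# f(x, t) = sum_j [a_j(t) cos(k_j x) + b_j(t) sin(k_j x)], with a_j, b_j independent
# Ornstein-Uhlenbeck processes of correlation time tau and variance sigma^2 / n_modes
k = rng.uniform(0.5, 2.0, n_modes) / xi
amp = sigma / np.sqrt(n_modes)
a = amp * rng.normal(size=n_modes)
b = amp * rng.normal(size=n_modes)
decay = np.exp(-dt / tau)
kick = amp * np.sqrt(1.0 - decay**2)

x = 0.0
positions = np.empty(n_steps)
for i in range(n_steps):
    f = np.sum(a * np.cos(k * x) + b * np.sin(k * x))
    x += dt * (-U_prime(x) + f) / eta          # overdamped equation of motion (2)
    a = decay * a + kick * rng.normal(size=n_modes)
    b = decay * b + kick * rng.normal(size=n_modes)
    positions[i] = x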
III. FOKKER-PLANCK EQUATION
If the correlation time of the random force is sufficiently short (that is, τ ≪ T), it is possible to define a time scale δt at which the stochastic force fluctuates appreciably, while the change of the dynamical variable x(t) is negligible on the length scale of the potential, L. Integrating the equation of motion (2) over the time period δt gives the increment δx = (1/η) ∫ from t₀ to t₀+δt of [−U′(x(t)) + f(x(t), t)] dt. Following the standard procedure (see, e.g. [8]), we write the Fokker-Planck equation for the probability density function P(x, t) for the stochastic model given by Eq. (2) in the limit of short correlation time of the random force, ∂P/∂t = −∂[v(x)P]/∂x + ∂²[D(x)P]/∂x². Here, v(x) is the drift velocity and D(x) is the diffusion coefficient, defined via the increment δx as v(x) = ⟨δx⟩/δt and D(x) = ⟨δx²⟩/(2δt). In the following sections we use the stationary and translationally invariant properties of the noise and set t₀ = 0 and x(t₀) = 0 in Eq. (4) for calculating statistical properties of δx.
We are interested in the stationary solution of Eq. (5) satisfying ∂ₜP(x, t) = 0. It is found by solving the first-order differential equation expressing a constant probability flux, v(x)P₀(x) − ∂ₓ[D(x)P₀(x)] = J₀, where the stationary probability flux J₀ is determined from the boundary conditions. The solution of Eq. (8) can be readily written in the form P₀(x) = Y(x)[N − J₀ × (an integral term)], where Y(x) is the integrating factor defined in Eq. (10) and N is the normalisation constant. We remark that in the case of a periodic potential P₀(x) is normalised in the periodicity interval. The rest of the paper is concerned with simplifying the solution (9) in the two asymptotic limits corresponding to very large and very small values of the external force U′(x).
It is not typical to have a non-zero flux J 0 in systems that are in thermal equilibrium.
The cases where transport can be introduced by different mechanisms are of great interest. Feynman considered the case where the ratchet and the paddle wheel are immersed in separate thermal baths at different temperatures; the transport is then induced by the temperature gradient. Transport in the Kramers model may also be induced by the addition of another driving force that can be constant [9] or a function of time [10]. We also remark that the Fokker-Planck equation with a state-dependent diffusion coefficient was studied before in [11,12], where the transport in a symmetric periodic potential is a consequence of the non-uniform intensity of the stochastic force, modelled as a multiplicative noise of the form g(x)h(t), where g(x) is periodic and h(t) is a rapidly fluctuating random noise. In this paper we show that it is possible to obtain a non-zero flux even for a model where the noise is additive and has translationally invariant statistics.
We close this section with the conditions of validity of the Fokker-Planck description, which involve the correlation time τ and the increment over the time scale δt. The first condition has already been mentioned earlier and reads τ/T ≪ 1, where T is the relaxation time. As for the small increment, the obvious condition would be στ ≪ L. Again, this condition is only approximate, since the effective correlation time is not known explicitly.
IV. WEAK EXTERNAL FORCE LIMIT
Let us consider the increment δx in the limit when the motion of the particle is dominated by the stochastic force. First, we introduce some additional notation. Using this we can write the increment from Eq. (4) accordingly. Expanding the stochastic force in a series and averaging this expression, we obtain the first moment. We now simplify the problem by considering the case when the spatial dependence of the random force is weak or, equivalently, when the correlation length is sufficiently large. Let us introduce a quantity which measures the distance travelled by the particle due to the random force in the correlation time relative to the correlation length. We term this parameter the Kubo number. It has been used before in a similar context of motion of inertial particles (see e.g. [13]). We remark that the classical Kramers model corresponds to Ku = 0. When the Kubo number is small, we can write the first moment of δx by expanding the stochastic force further. From the properties of the random force we have ⟨f(0, t)⟩ = 0 and ⟨∂ₓf(0, t)⟩ = 0, and using these relations the expression simplifies. Next, we expand f to higher order. We note that ⟨f(0, t₁)∂ₓf(0, t₂)⟩ = 0 for any t₁ and t₂, and we drop terms of higher order. For the two-point correlation function in this expression we use identities which hold for any stationary Gaussian noise. Using this we obtain an expression whose integrand in the last term depends only upon t − t′ and is therefore linear in δt. In the remaining part of the paper we shall deal with similar double and quadruple integrals, so now we discuss the last term in more detail. Let us consider a double integral of this type. We denote T₁ = t − t′ and transform the variables accordingly. In Fig. 1 we illustrate this transformation of variables. For δt ≫ τ the integrand is significant around T₁ = 0 and decreases rapidly as T₁ increases. Thus, if we integrate for T₁ from 0 to ∞, we would only make a small error of order τ². Using this we may rewrite the integral, and assuming that the last integral is convergent we obtain its value. We return to the calculation of ⟨δx⟩ and obtain the drift velocity, which involves a coefficient α. The sign of α can be deduced as follows. If we can write the correlation function in the factorised form C(x, t) = Cₓ(x)Cₜ(t) with Cₜ(t) > 0, then the sign of α is determined by the sign of Cₓ′′(0). If the random force de-correlates as x increases, then x = 0 is a local maximum of Cₓ(x).
We now calculate ⟨δx²⟩ and the diffusion coefficient. After squaring and averaging the increment, the second term on the right-hand side of the resulting expression is at least of order δt². This becomes obvious if we notice that the correlation function in the integrand depends on t₁ − t₂, but due to the factor t₂ the whole integrand cannot be expressed as a function of t₁ − t₂ only.
The diffusion coefficient is therefore determined by the first term alone. If we proceed to expand the stochastic force further, we would obtain terms which are at least of higher order and may be neglected. We therefore conclude that the diffusion coefficient in this case is constant and is the same as in the model of free diffusion, governed by the equation ηẋ = f(x, t). Let us now consider the case of small Kubo number, similarly to the calculation of the drift velocity. We shall consider this case as a separate problem and discuss it in the appendix.
We obtain that in the limit of small Ku (or small α) the diffusion constant is given by where D 0 is the diffusion constant for the model in the absence of the spatial correlation corresponding to Ku = 0. It is given by The factor γ > 0 is given by Thus, the diffusion constant is reduced by the factor 1 − α(2 + γ) compared to the case of Ku = 0. Using Eqs. (26) and (31) we obtain the solution of the Fokker-Planck equation in the weak forcing limit corresponding to small Ku: where We now concentrate on the form of the solution (34) for particular choices of the potential illustrated in Fig. 2. First example is a symmetric double-well potential used in modelling two-way chemical reactions, and the other is a periodic potential with period L. For the double-well potential illustrated in Fig. 2a the natural boundary conditions are applied [14]: Such a potential does not allow the particles to escape to infinity, so that we expect that the probability flux vanishes. We note that Y (x) goes to zero for very large x and the second term in the brackets multiplied by Y (x) approaches a non-zero constant. Thus, the boundary conditions are satisfied only when J 0 = 0.
For the periodic potential, if we require that P 0 (x) is bounded for the increasing x, it follows that P 0 (x) is periodic [14]. We use U(x + L) = U(x) to obtain Y (x + L) = Y (x) and therefore the condition of periodicity reads The integral in the last term is non-zero, therefore we again put J 0 = 0 to satisfy the boundary conditions. The important consequence of this result is that the flux vanishes regardless of the shape of the periodic potential. In the studies of Brownian ratchets it is often assumed that the periodic potential has an asymmetric form (such as the 'sawtooth' potential illustrated in Fig. 2c), so that the particles are expected to favour the slope with a smaller inclination to escape the potential minimum. The result shows, however, that the probability flux vanishes, which agrees with the discussion of the Brownian ratchet in the introduction.
We conclude that in both examples the solution in the weak external force limit is given by This is the Maxwellian density with the potential increased by the factor 1 + α(1 + γ) compared with the classical Kramers model, which corresponds to α = 0. The solution is consistent with the idea that in the presence of spatial correlations the noise experienced by the moving particle de-correlates more rapidly than for the case of an infinite correlation length in the classical Kramers model. This means that the particle experiences more uncorrelated kicks along its trajectory decreasing the probability to travel far against the systematic force −U ′ (x). Therefore, we expect to see the density function becoming sharper around the minima of the potential as the correlation length decreases. Our result differs from the one obtained in [7], where the effective decrease of the potential in the solution is attributed to the reduction of the drift velocity given by Eq. (26), but the corresponding reduction of the diffusion coefficient is not considered.
We remark that in the general case, when the potential force is weak, the drift reduction remains linear in U ′ (x) and the diffusion coefficient remains constant, even when Ku is not small. The actual values of α and γ in the case of arbitrary Kubo number are not known, but the density still remains Maxwellian around stagnation points of the potential, provided that the Fokker-Planck approach remains valid.
V. STRONG EXTERNAL FORCE LIMIT
In this section we analyse the limit when the motion of the particle is dominated by the external potential force. In this case we can expand the stochastic force in a series about x = s(x, t). The increment δx then consists of the term s(x, δt) plus integral terms involving the expanded force. We first calculate ⟨δx²⟩. The term [s(x, δt)]² is obviously of order δt², and so is the mixed product of s(x, δt) and the integral terms in the expression for the increment. The rest of the terms require careful consideration. For the first term, the integrand depends only on t₁ − t₂ and is therefore of order δt when δt ≫ τ. Similarly to the cases considered in the previous section (see Eq. (21)), we evaluate it by extending the integration limit to infinity. We now proceed a step further and calculate this term by expanding for large U′(x). Using the definition of s(x, t) in Eq. (11) we can write this by changing the integration variable from t to a rescaled variable; the modulus sign is used to ensure that the expression remains positive. Thus, in the limit of strong external forcing the first term in ⟨δx²⟩ is inversely proportional to |U′(x)|. Now we return to the starting point (Eq. (40)) and consider, for instance, the quadruple-integral term. The four-point correlation function for a Gaussian random process can be expressed as the sum of all possible non-repeating combinations of products of two-point correlation functions.
A typical combination in this case is a product of two two-point correlation functions. If we proceed in the same way as for the previous term, expanding for large U′(x), each of the factors would contribute at least [U′(x)]⁻¹, so that the overall contribution would be of order [U′(x)]⁻² and therefore may be neglected. Similarly, the remaining term in Eq. (40) may also be neglected. We conclude that the diffusion coefficient is determined by Eq. (43). We now consider ⟨δx⟩. Here, the second term vanishes because effectively the average is taken over a deterministic trajectory, since the potential is assumed to be varying slowly. For the remaining term we neglect terms of higher order in x⁽⁰⁾(t). If we proceed further and expand this expression for strong external force, we obtain a term which is inversely proportional to U′(x), similarly to the calculation of ⟨δx²⟩. Since s(x, δt) ∼ U′(x), we conclude that the first moment of δx in the limit of strong external force is dominated by s(x, δt), and the drift velocity follows. Substituting (47) and (52) into (9) we obtain the solution in the strong external force limit, written in terms of the function I(x). For the non-periodic potential, if P₀(±∞) = 0 we can again show that J₀ = 0. We note that I(x) diverges for large x, whereas the integral term in the brackets multiplied by exp[−I(x)] converges to a constant for large x. The solution corresponding to J₀ = 0 then follows. For the case of a periodic potential with period L we find J₀ by writing P(L) = P(0); using U′(0) = U′(L) we obtain an explicit expression for J₀.
VI. NUMERICAL SIMULATIONS AND DISCUSSION
We perform a number of numerical experiments in order to illustrate our analytical results.
Numerical simulations are done by integrating the original equation of motion (2) using a small time step (typically about τ/50). In the simulations we use a smooth correlation function of the random force with correlation length ξ and correlation time τ, and the diffusion constant is reduced according to Eq. (31); the corresponding results are shown in Fig. 3. Finally, we comment on the results for the strong external force limit, summarised in Fig. 5. We remark that the probability for the particle to propagate to the regions where U′(x) is large is typically very small. Thus, we expect our theoretical result to give a good approximation in this regime. The case of an asymmetric periodic potential is particularly interesting, since it exhibits a non-zero probability flux. As we have already discussed in section V, if the potential is asymmetric, the particles are expected to favour the slope with the smaller inclination to escape the minimum. The direction of the transport in the generalised model can be deduced from Eq. (54). We note that the sign of I(L) is determined by the sign of the steepest of the two slopes of the potential. In Fig. 5, it is the slope to the right of the minimum that corresponds to U′(x) > 0 and U(L) > 0. From Eq. (57) we obtain J₀ < 0, implying that it is easier for particles to escape from the minimum using the left slope, as expected.
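For readers who want to reproduce the qualitative behaviour, the sketch below integrates the overdamped equation of motion in an asymmetric ("sawtooth") periodic potential, with a Gaussian random force built from a finite sum of Fourier modes with Ornstein-Uhlenbeck time dependence, so that its statistics are stationary and translationally invariant with correlation time τ and correlation length of order ξ. All parameter values, the Gaussian spectral weights and the particular sawtooth shape are our own assumptions for illustration, not the settings used in the paper, and whether a net drift is resolved clearly depends on these choices and on the run length.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumed)
eta, sigma, tau, xi = 1.0, 1.0, 0.05, 0.3        # drag, noise magnitude, corr. time/length
L, U0, a_frac = 1.0, 0.6, 0.25                   # potential period, barrier height, asymmetry
dt, n_steps, n_paths = tau / 50.0, 50_000, 200

def minus_dU(x):
    """Force -U'(x) of an asymmetric sawtooth potential with period L."""
    s = np.mod(x, L)
    return np.where(s < a_frac * L, -U0 / (a_frac * L), U0 / ((1.0 - a_frac) * L))

# f(x,t) = sum_m [A_m(t) cos(k_m x) + B_m(t) sin(k_m x)] with independent OU amplitudes,
# giving C(x,t) = sum_m w_m cos(k_m x) exp(-|t|/tau) and C(0,0) = sigma^2.
L_box, n_modes = 20.0 * L, 64
k = 2.0 * np.pi * np.arange(1, n_modes + 1) / L_box
w = np.exp(-0.5 * (k * xi) ** 2)
w *= sigma ** 2 / w.sum()

alpha = np.exp(-dt / tau)
beta = np.sqrt((1.0 - np.exp(-2.0 * dt / tau)) * w)
A = np.sqrt(w) * rng.standard_normal((n_paths, n_modes))   # independent field per path
B = np.sqrt(w) * rng.standard_normal((n_paths, n_modes))

x = L * rng.random(n_paths)
x0 = x.copy()
for _ in range(n_steps):
    phase = x[:, None] * k[None, :]
    f = np.sum(np.cos(phase) * A + np.sin(phase) * B, axis=1)
    x += (minus_dU(x) + f) / eta * dt
    A = alpha * A + beta * rng.standard_normal((n_paths, n_modes))
    B = alpha * B + beta * rng.standard_normal((n_paths, n_modes))

v_mean = np.mean(x - x0) / (n_steps * dt)
print(f"mean drift velocity (a non-zero value indicates a net probability flux): {v_mean:.4f}")
```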
In the discussion below we shall drop the spatial argument of the correlation function implying that C(t) ≡ C(0, t). We have We note that the following relation holds for any a and the correlation function which can be written in the form C(x, t) = C x (x)C t (t): | 2019-04-12T18:33:36.950Z | 2008-12-03T00:00:00.000 | {
"year": 2008,
"sha1": "b897a7e48c200ac0c01677a0e309b2bb8934f244",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b897a7e48c200ac0c01677a0e309b2bb8934f244",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
40240042 | pes2o/s2orc | v3-fos-license | Sharp magnetization step across the ferromagnetic to antiferromagnetic transition in doped-CeFe$_2$ alloys
A very sharp magnetization step is observed across the field-induced antiferromagnetic to ferromagnetic transition in various doped-CeFe$_2$ alloys when the measurement is performed below 5K. In the higher temperature regime (T$>$5K) this transition is quite smooth in nature. Comparing with the similar behaviour recently observed in manganites showing colossal magnetoresistance and in the magnetocaloric material Gd$_5$Ge$_4$, we argue that such a magnetization step is a general feature of a disorder-influenced first order phase transition.
I. INTRODUCTION
Recent studies of magnetic field induced first order antiferromagnetic (AFM) to ferromagnetic (FM) transition in various manganite compounds showing colossal magnetoresistance (CMR) have revealed ultra sharp magnetization steps when the measurements are performed below 5K 1,2,3,4,5,6,7,8 . Such steps are observed in both single crystal and polycrystalline samples 7 . A catastrophic relief of strain built up during the field induced first order transition between AFM and FM phase, has been suggested as a possible cause of such striking behaviour 7 . Very similar magnetization step has also been reported for the magnetocaloric material Gd 5 Ge 4 across the field induced AFM-FM transition 9,10,11 . Although belonging to different classes of materials these two systems have the common features of phase-coexistence and magneto-elastic coupling associated with the AFM-FM transition. To highlight the generality of the observed phenomenon we report here the existence of very sharp magnetization step across the field induced AFM-FM transition in Ru and Re-doped CeFe 2 alloys belonging to an entirely different class of materials. We argue that such magnetization step is a characteristic feature of disorder influenced first order magneto-structural phase transition.
CeFe 2 is a ferromagnet with Curie temperature T C ≈ 230K 12 . A small substitution (3-6%) of selected elements such as Co, Al, Ru, Ir, Os and Re induces a low temperature AFM state in this otherwise FM compound 13,14 . The ferromagnetic FM-AFM transition in these alloys is accompanied by a structural distortion and a discontinuous change of the unit cell volume 15 . Inside the AFM state an application of external magnetic field (H) induces back the original FM state, while at the same time erases the structural distortion and recovers the original cubic structure. The first order nature of this AFM-FM transition has been emphasised with various kinds of measurements 16,17,18 .
II. EXPERIMENTAL
We use a 4%Ru and a 5%Re doped CeFe 2 sample for our present study. The samples were prepared by argon-arc melting starting from metals of at least nominal 99.99% purity. These polycrystalline samples were characterized with metallography, X-ray diffraction(XRD) and neutron scattering studies 13,14,15 . Due to the peritectic reaction during the solidification process, one expects to find in the as-cast structure cores of Ce 2 Fe 17 with perhaps some iron-solid solution at the centre, surrounded by shells of CeFe 2 and the eutectic material.
Normally, with adequate heat treatment the first formed solid should disappear. However, in practice there is almost always some trace of second phase in the annealed samples.
Indeed the traces of impurity phases were still found after annealing the present samples at 600 o C for seven days. With various heat treatments it was found that the sequence of annealing at 600 o C for two days, 700 o C for 5 days, 800 o C for 2 days and 850 o C for one day improved the quality of the samples a great deal 13,14 . Combination of metallography 14 , XRD 14 and neutron scattering study 15 indicates that the amount of second phase in these samples is less than 2%. CeFe 2 forms in cubic Laves phase structure. In this structure all of
III. RESULTS AND DISCUSSION
In Fig.1 we present the M vs T plots for 4%Ru and 5%Re doped CeFe 2 samples. The sharp rise and fall as a function of decreasing temperature indicates the onset of the paramagnetic (PM)-FM and FM-AFM transitions respectively. These results are already known 18,19 , but reproduced here to make the present work self contained.
Actually there is a finite non-linearity in the very low field regime, which is more visible on an expanded scale 25 (Fig. 4: magnetization with applied magnetic field at T = 10K; the field cycle and the field ramp rate are the same as in Figs. 2 and 3; the M-H loop in the negative field direction is not shown here for the sake of conciseness; note that the large magnetization steps seen in Figs. 2 and 3 are not visible here). Very recently the effect of strain-disorder coupling across such a disorder-influenced first order transition has been studied theoretically and phase-coexistence on the micrometer scale has actually been predicted 26,27 . While some evidence of magneto-elastic coupling already existed in the earlier studies on doped-CeFe 2 alloys 15,28 , recent magnetostriction measurements 29 have further established the role of magnetoelastic coupling on the first order AFM-FM transition. At this stage it becomes natural to put forward the argument that the magnetization steps are linked to the catastrophic relief of strain built up during the first order magneto-structural transition. However, the absence of the steps in the isothermal magnetization measurements in the higher T regime still needs to be explained.
We shall now attempt to understand the observed behaviour in all temperature regimes within a framework of a local distribution of transition temperature/field. In the absence of any disorder-mediated heterogeneous nucleation a system reaches a metastability limit 30 well beyond the thermodynamic transition point before a jump takes place from one phase to the other. In a rough landscape picture such jumps give rise to a series of steps in measurable quantities like magnetization and specific heat. However, in the presence of active nucleation centres these steps are replaced by a continuous change giving the impression of a broadened transition. The fluctuational development of nuclei in a size range around a critical size determined by the material parameters is an essential part of the kinetics of a first order phase transition 31 . The distribution function for nuclei of various sizes actually broadens with the increase in temperature 31 . This picture can explain the smooth behaviour across the metamagnetic transition observed in our present samples above 5K (see Fig.4). In the T regime below 5K the intrinsic thermal energy fluctuation arising from the k B T term becomes quite small, making many of the nucleation barriers insurmountable, hence effectively reducing the number of nucleation centres. This will increase the possibility of step-like features in various observables in this lower temperature regime. It is worth recalling here that the quenched disorder in our present system mainly consists of purely statistical compositional disorder 22 . Hence in the proposed landscape picture the distribution of transition temperature/field will have a peak at the target compositional value of the sample with tails on either side. This will lead to a big step in global measurements of magnetization with smaller steps on either side. In addition, in systems like doped-CeFe 2 alloys with appreciable coupling between electronic and elastic degrees of freedom, an applied magnetic field can lead to magneto-elastic coupling between different regions in the sample. This is also likely to encourage a single big step in the field dependence of M across a magneto-structural transition. While we do see this big step in magnetization below 5K, we are unable to resolve the smaller steps in our polycrystalline sample. We believe these smaller steps can be observed in a single crystal sample with a less rough landscape and by going further down in temperature. It is to be mentioned here that the measurement procedure to obtain the results in Fig.2 and 3 did not engage any heater in the sample chamber. A heater was operational in the present study for active temperature control in the temperature regime T≥5K. Thus, considering how the temperature of the exchange gas is controlled by a temperature controller (i.e., a heater and a feedback loop), and taking into account that the critical magnetic field for the onset of the AFM-FM transition in CeFe 2 alloys is strongly dependent on temperature, one cannot rule out the possibility that temperature fluctuations on the order of 0.1 K trigger a transformation of a large fraction of the material even when H is held constant. Hence this extrinsic source of temperature fluctuations is likely to add to the intrinsic thermal fluctuations (i.e. the k B T term) in smoothing out the M-H curve across the AFM-FM transition in the higher T regime.
There exist some theoretical studies of field driven first order transition based on both random bond 32 and random field Ising models 32,33 with quenched disorder. With varying amount of disorder the nature of the non-equilibrium transition changes from a discontinuous one with one or more large avalanches to a smooth one with only tiny avalanches. Projecting to our present experimental studies it can be argued that at higher T most of the available quenched disorder sites remain active. At lower T the k B T term is smaller than the local nucleation barriers at many of the quenched disorder site. This renders such disorder sites ineffective for nucleation. We would also like to mention here that the discrete steps in the magnetization observed across the AFM-FM transition in good quality polycrystalline samples of Gd 5 Ge 4 , disappears in more disordered samples giving rise to a smooth change 34 .
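The role of quenched disorder in turning a single catastrophic jump into many tiny avalanches can be illustrated with a toy zero-temperature, mean-field random-field Ising model, in the spirit of the theoretical studies cited above. This is only a schematic illustration of the disorder-controlled sharp-versus-smooth behaviour, not a model of doped-CeFe 2 alloys; the coupling, field range and disorder strengths below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def ramp_field_up(disorder, n_spins=20000, J=1.0, dH=0.01):
    """T = 0 hysteresis half-loop of a fully connected random-field Ising model.
    A spin flips up as soon as its local field J*m + h_i + H turns positive; every
    flip raises m and can destabilise further spins, producing avalanches."""
    h = rng.normal(0.0, disorder, n_spins)        # quenched random fields
    s = -np.ones(n_spins)
    H_vals, m_vals = [], []
    for H in np.arange(-3.0, 3.0 + dH, dH):
        while True:
            unstable = (s < 0) & (J * s.mean() + h + H > 0)
            if not unstable.any():
                break
            s[unstable] = 1.0                     # one avalanche sweep
        H_vals.append(H)
        m_vals.append(s.mean())
    return np.array(H_vals), np.array(m_vals)

for sigma_h in (0.5, 2.0):                        # weak vs strong quenched disorder
    H, m = ramp_field_up(sigma_h)
    print(f"disorder = {sigma_h}: largest jump of m within one field step = "
          f"{np.max(np.abs(np.diff(m))):.2f}")
```

With weak disorder the magnetization changes in essentially one macroscopic step, whereas strong disorder breaks it into many small avalanches and the curve becomes smooth, mirroring the qualitative argument above.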
IV. CONCLUSION
In conclusion we have observed a very sharp magnetization step across the field induced AFM-FM transition in Ru and Re doped-CeFe 2 alloys, when the measurement is performed in the temperature regime below 5K. We have tried to understand this interesting feature within the frame work of a disorder-influenced first order magneto-structural phase transition. The observed magnetization step is markedly similar to the step observed across the field-induced AFM-FM transition in various CMR-manganite systems and magnetocaloric material Gd 5 Ge 4 . It is now well known that a structural transition accompanies the first order AFM-FM transition in these classes of materials. We have earlier highlighted that the phase-coexistence and metastability are common features across the field/temperature induced AFM-FM transition in CMR-manganites, Gd 5 Ge 4 and doped-CeFe 2 alloys, and argued that those arise due to the influence of disorder on a first order magneto-structural phase transition 25,34 . Combining our present experimental results on doped-CeFe 2 alloys with the existing results on various CMR-manganite systems and magnetocaloric material | 2017-09-10T01:47:26.509Z | 2005-05-19T00:00:00.000 | {
"year": 2005,
"sha1": "80d5f94b8d262b6c5ad82c0b42610442c2b1b9ec",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0505609",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "80d5f94b8d262b6c5ad82c0b42610442c2b1b9ec",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
244682067 | pes2o/s2orc | v3-fos-license | Effect of Follicle Diameter and Culture Medium on The in Vitro Maturation of Sheep Oocytes Using Culture Media Supplemented With Different Concentrations of Sucrose in Local Sheep (Ovis Aries)
The current study was conducted in the postgraduate laboratory of the College of Agriculture/University of Al-Muthanna/Department of Animal Production, and aimed to determine the effect of follicle diameter and culture medium on the percentage of in vitro maturation of sheep oocytes (Ovis aries). The experiment included in vitro maturation of sheep oocytes aspirated from large, medium and small follicles in different culture media (named A, B and C). In vitro maturation of oocytes was carried out using three culture media that differed only in the concentration of sucrose, which was 0.0 M, 0.25 M and 0.5 M for the A, B and C culture media respectively. The results of the current study showed a significant superiority (P ≤ 0.05) of culture medium C over the two media A and B in the percentages of in vitro maturation, which were 38.0 ± 1.71 %, 27.37±1.47 % and 21.902±0.76 % for C, A and B respectively. The results also indicated a significant effect (P ≤ 0.05) of follicle diameter on the percentages of in vitro maturation of sheep oocytes, which were 38.0±1.71 %, 29.57±2.06 % and 18.5± 0.27 % for large, medium and small follicles respectively. It can be concluded from the present study that the follicle diameter and culture medium had a significant effect on the percentage of in vitro maturation of sheep oocytes.
Introduction
Sheep have a great economic importance because they are inexpensive animals to raise and have the ability to convert lowvalue materials into high-value materials such as meat, milk and wool, as well as having a relatively fast capital cycle [1]. [2,3], indicated that the decline of fertility rates of Iraqi sheep affects their reproductive efficiency, and the increase in fertility rates in local sheep is reflected in improving the efficiency of sheep production. There are many important and necessary modern techniques to improve the reproductive efficiency of sheep, such as in vitro fertilization, embryo transfer and genetic selection [2,4]. [5], indicated that in vitro matured oocytes are able to play a role in improving productivity of sheep, so that the process of oocytes maturation has a strong positive correlation with the diameter of the follicle [6], and the growth and development of oocytes are completed with an increase in the size of the follicle [7,8].
The aim of the current study was to investigate the effect of follicle diameter and culture medium on the percentage of in vitro maturation of oocytes, and to determine the effect of the interaction between culture medium and follicle diameter on the percentage of in vitro maturation of sheep oocytes.
Materials and Methods
The current study was conducted in the laboratory of postgraduate studies and included the maturation of sheep oocytes after collecting the follicular fluid from ovaries, which were collected immediately after slaughter from the Samawa slaughterhouse.
Ovaries collection
The ovaries were collected immediately after slaughter according to the method of [8] and transported to the laboratory inside a plastic container containing warm physiological solution (0.9% NaCl at 37°C) supplemented with antibiotics (streptomycin 100 IU/ml, penicillin 100 IU/ml). The samples were placed inside a thermos bottle and transported to the laboratory within less than an hour after slaughter, then washed at least three times with warm physiological solution at a temperature of 30-35°C to remove clotted blood, reduce contamination on the ovarian surfaces and get rid of impurities suspended on the ovaries.
Oocytes collection
Sheep oocytes were collected from the ovaries by the aspiration method, in which follicular fluid was aspirated with a 5 ml medical syringe from large, medium and small follicles, each group individually. Before aspiration, 0.5 ml of the culture medium with 20 IU/ml of anticoagulant (heparin) was added to the syringe to prevent the oocytes from sticking together. The oocytes were then placed in a Petri dish in SMART medium under the dissecting microscope and transferred with a Pasteur pipette through the culture medium three times to wash the oocytes and remove the remains of suspended cells, according to the method of [9].
Oocyte classification
After washing three times in the SMART culture medium, the oocytes were classified according to their external appearance into mature, immature and atretic oocytes. Atretic oocytes, with ooplasm shrunken away from or not filling the zona pellucida, and mature oocytes were removed from the experiment after conducting a viability examination with Trypan blue dye, in which oocytes that take up the dye are classified as dead and those that exclude the dye as live, according to the method of [10].
In vitro maturation
The recovered oocytes were subjected to the in vitro maturation program, each group individually, in three different culture media A, B and C. The three media consisted of SMART medium supplemented with hormones; medium A was left without sucrose as a control group, while media B and C contained sucrose at concentrations of 0.25 M and 0.5 M respectively. The oocytes were cultured in four-well Petri dishes containing 0.5 ml of the culture medium, covered with a layer of paraffin oil, and incubated in a 5% CO2 incubator for 24 hours at a temperature of 38.5°C and a relative humidity of 95%.
Statistical analysis
The data were statistically analyzed using a completely randomized design in a factorial arrangement. The analysis of variance (ANOVA) was used to test for significant differences, and significant differences between means were examined with the statistical program SPSS [11] at a significance level of (p ≤ 0.05).
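The same factorial analysis can be reproduced outside SPSS; the sketch below shows a two-way ANOVA (culture medium × follicle diameter) in Python with statsmodels. The replicate values in the data frame are placeholders invented for illustration, not the measurements of this study.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical replicate-level maturation percentages (placeholder values only)
data = pd.DataFrame({
    "medium":     ["A", "A", "A", "B", "B", "B", "C", "C", "C"] * 2,
    "follicle":   ["small", "medium", "large"] * 6,
    "maturation": [18, 27, 33, 20, 29, 36, 22, 31, 38,
                   17, 26, 34, 19, 28, 35, 21, 30, 37],
})

# Two-way ANOVA with interaction, assessed at a significance level of p <= 0.05
model = ols("maturation ~ C(medium) * C(follicle)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```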
Results and Discussion
The results of the current study showed a significant effect of follicle diameter on the percentage of in vitro maturation: the oocytes aspirated from large-diameter follicles were significantly superior to the oocytes aspirated from small- and medium-diameter follicles in the percentage of in vitro maturation. The percentages were 38.0±1.71, 29.57±2.06 and 18.5±0.27 % for oocytes aspirated from large, medium and small diameter follicles respectively (figure 1). The results were in agreement with [12,13], who found that the oocytes of large follicles contain more layers of cumulus cells, which are considered a link between the oocyte and its external surroundings and increase the contact between the oocyte and the components of the culture medium for the transfer of nutrients and factors for oocyte growth and development; therefore, the cumulus cells contribute to supporting nuclear and cytoplasmic maturation, and the percentage of in vitro maturation of oocytes from small follicles decreases due to the low rate of development in small follicles and the lack of protein synthesis needed for the development of oocytes. The results also agreed with [14,15,16] in sheep and cattle, where the superiority of oocytes from large follicles was attributed to the large follicles containing a high level of 17-β estradiol, as well as to the presence of influencing factors within the cytoplasm of oocytes aspirated from large follicles that play a role in oocyte developmental competence, since the developmental competence of oocytes depends on the size of the follicles and the oocytes must acquire complete development to reach in vitro maturation. The results of the study also showed a significant effect of the culture media on the percentage of in vitro maturation of sheep oocytes, with the superiority of media C and B over A; the percentages of in vitro maturation were 35.46±2.9, 27.37±1.47 and 21.902±0.76 % for matured oocytes in C, B and A medium respectively (figure 2). The results agreed with [17] in sheep, where the reason for the superiority was attributed to the role of sucrose in the C and B culture media. Sucrose contributes to the activation of the oocytes and increases the rate of division and development; sucrose also plays an important role in developing the competence of the oocytes to reach maturity [18]. This result did not agree with [19] in cows, where it was found that sucrose only maintains the normal morphology of the oocytes. [20] reported that culture media supplemented with hormonal additives increase the nuclear maturation of the oocytes, activate the cells and lead to the extension and expansion of the cumulus cells. (In the accompanying table, small letters within a column differ significantly at the level of probability (P ≤ 0.05), and capital letters within a row differ significantly at the level of probability (P ≤ 0.05).)
These results are in agreement with [21], who found that large-sized follicles complete their development inside the body, which results in highly developed oocytes in these follicles, and the oocytes from large-sized follicles produce highly developed embryos. The results were also in agreement with [22], which confirmed that sucrose plays a great role in in vitro maturation, as it helps to increase the percentage of maturation of the oocytes, including the oocytes obtained from large follicles. Also, this result was in agreement with [23,24] in sheep and cattle, where they found that the maturation rate of oocytes is higher in large-diameter follicles compared to oocytes from small-diameter follicles, since the large follicles contain oocytes surrounded by the largest number of cumulus cell layers and with a high ability to develop. Therefore, the large follicles produce high rates of oocyte maturation in vitro, and increasing follicle size gives a better indication of the maturation of the oocytes. It can be concluded from the present study that the follicle diameter and culture medium had a significant effect on the percentage of in vitro maturation of sheep oocytes. | 2021-11-27T20:07:18.080Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "ece0cc45b0e6f4ecea79aaa78ade0b7b4fec73f1",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/923/1/012038",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "ece0cc45b0e6f4ecea79aaa78ade0b7b4fec73f1",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Physics"
]
} |
248325399 | pes2o/s2orc | v3-fos-license | SSA-Net: Spatial self-attention network for COVID-19 pneumonia infection segmentation with semi-supervised few-shot learning
Graphical abstract
Introduction
Since the end of 2019, coronavirus disease 2019 (COVID-19), an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) 1 , has spread worldwide; it can cause acute respiratory illness and even lead to fatal acute respiratory distress syndrome (ARDS). So far (Central European Time of December 20, 2021), the number of confirmed cases of COVID-19 has been more than 273.9 million, with more than 5.3 million deaths, according to the COVID-19 situation dashboard on the World Health Organization (WHO) website 2 , and the number is continuing to increase. The health of human beings all over the world is threatened and everyone's life has been greatly affected due to the outbreak of the virus. Since it is highly contagious and we still lack appropriate treatment and vaccines, early detection of COVID-19 is essential to prevent spreading in time and to properly allocate limited medical resources. Among all virus detection methods, antigen testing is fast, but its sensitivity is poor ( Fang et al., 2020 ). Reverse transcription polymerase chain reaction (RT-PCR) has been considered as the gold standard for COVID-19 screening ( Ai et al., 2020 ), which detects viral nucleic acid using nasopharyngeal and throat swabs ( Bai et al., 2020 ). However, the results of RT-PCR testing are susceptible to low viral load or sampling errors, and result in high false negative rates. Meanwhile, the requirements for the testing laboratory environment are extremely strict and there is always a shortage of equipment under the epidemic ( Liang et al., 2020 ), which would greatly limit and delay the diagnosis of suspected subjects. Finding a fast and sufficiently accurate patient screening approach becomes an unprecedented challenge to prevent the spread of the infection. Since most patients infected by COVID-19 are diagnosed with pneumonia, radiological examinations have also been used to diagnose and assess disease evolution as important complements to RT-PCR tests ( Rubin et al., 2020 ). X-ray and computed tomography (CT) are two typical imaging methods for patients in the COVID-19 study. CT has a 3D view, whereas the ribs in X-ray images may affect lesion detection. The diagnostic accuracy of CT is much higher than that of X-ray in the early stage of the disease. Furthermore, chest CT screening of clinical patients has shown that its sensitivity outperforms that of RT-PCR ( Fang et al., 2020 ) and it can even confirm COVID-19 infection in negative or weakly-positive RT-PCR cases. Therefore, in view of the particularity of prevention and control during the COVID-19 epidemic, it is suggested that CT should be the first choice for screening COVID-19 under the condition of limited nucleic acid detection ( Chung et al., 2020; Lei et al., 2020 ). Although imaging features alone cannot make a definite diagnosis, combined with epidemiological history, clinical manifestations and imaging examinations, CT can greatly improve the accuracy of screening, especially for suspected patients and asymptomatic infections.
Table 1. A summary of the datasets in our experiments. Sum denotes the total number of COVID-19 slices. Class denotes the number of labeled infection categories. Lung, GGO, Con, G + C denote the percentage of pixels of lung area, GGO, consolidation, and the total infection of GGO and consolidation, respectively. The COVID-19 CT Segmentation dataset is available at https://medicalsegmentation.com/covid19/ .
This can help to effectively discover and isolate the source of infection as soon as possible and cut off the route of transmission, which has a positive effect on controlling the development of whole epidemic. In short, chest CT plays a key role in the diagnostic procedure for suspected patients and some recent reports have emphasized its performances . However, image reading in severe epidemic areas is a tedious and time-consuming task for radiologists, and the visual fatigue would increase the potential risk of missed diagnosis of some small lesions. In addition, radiologists' judgement is usually influenced by personal bias and clinical experience. Thus, Artificial Intelligence (AI) technology is playing an increasingly important role in the struggle against COVID-19 .
In recent years, with the gradual deepening study of artificial intelligence technology, image segmentation has developed rapidly, but it is still challenging to automatically segment COVID-19 pneumonia lesions from CT scans, especially for multi-class pixel-level segmentation. First, the typical signs of infected lesions observed from CT slices have various complex and changeable appearances, irregular shapes and fuzzy borders. For example, as shown in Fig. 1 , the boundaries of ground-glass opacity (GGO) have low contrast and a blurred appearance, and the blurring of boundaries also increases the difficulty of labeling. Second, the successful performance of popular deep convolutional neural networks (CNN), the core technology of the rising AI, is largely dependent on the availability of large-scale, well-annotated data sets in the real world. However, it is quite difficult to collect sufficient training data from patients systematically due to the urgent nature of the pandemic, and high-quality annotations of multi-category infections are especially limited. Third, for screening, most of the pneumonia symptoms of collected patients are usually at the early stage, and the proportion of infected lesions in available image samples is small and uneven, which leads to the problem of long-tailed data distribution. In Table 1 , we can see that the number of annotated lesion pixels is far fewer than the background pixels; in particular, the proportion of pulmonary consolidations in the data is quite small. In this paper, we deal with the above issues and propose a novel semi-supervised framework for COVID-19 lung infection segmentation from limited and incompletely annotated CT datasets.
Fig. 1. Two examples of COVID-19 positive CT scans from two different datasets and their corresponding segmentation results. The first row is a single-class lesion segmentation with lesions labeled in blue, and the second row is a multi-class lesion segmentation with ground-glass opacity (GGO) in red and consolidation in green. We can clearly see the fuzzy boundaries of the infected areas, highlighted with orange arrows. The red number and the green number marked in the last graph represent the proportions of GGO and consolidation respectively, which shows the issue of imbalanced class distribution. It can be seen that SSA-Net performs better in complicated lesion segmentation and the proposed semi-supervised few-shot learning framework outperforms other state-of-the-art algorithms in multi-class COVID-19 infection segmentation with limited training data, especially in regions labeled with orange boxes.
The main contributions in our work are threefold: (1) We present an encoder-decoder based deep neural network named spatial self-attention network (SSA-Net) for lesion segmentation. To take full advantage of the context information between the encoder layers, a self-attention distilling method is utilized, which can expand the receptive field and strengthen the selflearning without extra training time. For the sake of obtaining the low contrast and fuzzy boundary area effectively, spatial convolution is introduced for slicing the feature map and then convoluting slicer by slicer, so that the features can be effectively transferred in the direction of row and column.
(2) According to the long-tailed distribution of COVID-19 datasets and limited labeled data, we provide a semi-supervised few-shot iterative segmentation framework for multi-class infection segmentation, which leverages a large amount of unlabeled data to generate their corresponding pseudo labels, thereby augmenting the training dataset effectively. A re-weighting module is introduced to rebalance the category distribution based on the number of pixels for each category, and a trust module is added to select high confidence values and improve the credibility of pseudo labels.
(3) We conducted extensive experiments on two publicly available datasets. Ablation studies have demonstrated that both the spatial convolution and self-attention distilling are beneficial to improve the performance of infection segmentation. And comparative studies have revealed that SSA-Net with our semi-supervised few-shot learning strategy outperformed the state-of-the-art segmentation models and showed competitive performance compared with the state-of-the-art systems in COVID-19 challenge.
Related work
In this section, we mainly talk over three aspects of works closely related to our work, including context-enhanced deep learning for segmentation, few-shot learning and class balancing and COVID-19 pneumonia infection segmentation.
Context-enhanced deep learning for segmentation
In order to segment lesions in medical images, deep learning technology is widely used. U-Net is commonly used for lung region and lung lesion segmentation . U-Net is a full convolution network proposed by Ronneberger et al. (2015) , which has a U-shaped architecture and symmetric encoding and decoding paths. Skip connections connect the layers of the same level in the two paths. Therefore, the network can learn more semantic information with limited data, and it is widely used in medical image segmentation. Thereafter, many variants of networks based on U-Net have been proposed, such as no-new-U-Net (nnU-Net) ( Isensee et al., 2019 ), which is based on 2D and 3D vanilla U-Nets and can adapt the preprocessing strategy and network architecture automatically without manual tuning. Milletari et al., 2016 propose V-Net, which uses residual blocks as the basic convolution blocks for 3D medical images. He et al. (2016) put forward a new encoder-decoder network structure, ResNet, by introducing the residual blocks. Compared with the U-Net and other variants, ResNet can avoid the gradient vanishing and accelerate the network convergence, so we prefer to use ResNet as backbone. However, lesions in medical images are sometimes subtle and sparse, and the number of annotated lesion pixels is much fewer than the background pixels, which brings new challenges. Therefore, we need more contextual and spatial information to train deep models for the task.
Several schemes have been proposed to reinforce the representation ability of deep networks, e.g. some researches improve performance by deepening the network. The UNet++ ( Zhou et al., 2018 ) inserts a nested convolutional structure between the encoding and decoding paths. In order to detect the ambiguous boundaries in medical images, Lee et al. (2020) present a structure boundary preserving segmentation framework, which uses a key point selection algorithm to predict the structure boundary of target. Indeed, the deeper the network is, the more information we can get. However, deepening the network is inefficient, and as the network deepens, it is easy to cause gradient explosion and gradient disappearance, and the optimization effect degrades. Meanwhile, these methods can greatly improve the performance of segmenting large and clustered objects, but they are easy to fail when encountering small and scattered lesions.
Another way is to exploit attention mechanism to optimize the deep learning. For example, Wang et al. (2020b) combine two 3D-ResNets with a prior-attention residual learning block to screen COVID-19 and classify the type of pneumonia. Ren et al. (2020) present a strategy with hard and soft attention modules. The hard-attention module generates coarse segmentation map, while the soft-attention module with position attention can capture context information precisely. Zhong et al. (2020) propose a squeeze-and-attention network which imposes pixel-group attention to conventional convolution to consider the spatialchannel interdependencies. However, the above methods need additional computation cost. Gao et al., 2020 propose a dual-branch combination network for COVID-19 classification and total lesion region segmentation simultaneously. A lesion attention module is used to combine classification features with corresponding segmentation features. Hou et al. (2019) propose a self-attention distillation (SAD) approach, which makes use of the networks own attention maps and perform top-down and layer-wise attention distillation within the network itself. Through the feature maps between the encoder layers, the model can learn from itself without extra labels and consumptions. The intuition of SAD is that useful contextual information can be distilled from the attention maps of successive layers through those of previous layers. The timepoint of training to add SAD to an existing network may affect the convergence time, and it is recommended to use SAD in a model pretrained to some extent. In this paper, we introduce the selfattention learning mechanism into a strengthened U-shaped segmentation network without pre-training. Then stronger labels will be generated from the feature maps of lower layers to guide the deeper layers for further representation learning. And our method is helpful to strengthen some obscure and scattered objects.
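As a concrete sketch of the self-attention distillation idea, the snippet below (PyTorch) turns each encoder block's feature map into a spatial attention map and asks every shallower map to mimic the detached map of the next deeper block. The sum-of-squared-channels map, the bilinear resizing, the spatial softmax and the MSE objective follow one common SAD variant; the exact generator and loss used in SSA-Net may differ, so treat this as an assumption-laden illustration rather than the reference implementation.

```python
import torch
import torch.nn.functional as F

def attention_map(feat, out_size=(32, 32)):
    """Collapse a feature map (B, C, H, W) into a spatial attention map by summing the
    squared channel activations, resizing to a common grid, and normalising spatially."""
    att = feat.pow(2).sum(dim=1, keepdim=True)                       # (B, 1, H, W)
    att = F.interpolate(att, size=out_size, mode="bilinear", align_corners=False)
    return F.softmax(att.flatten(1), dim=1)                          # (B, H*W)

def self_attention_distillation_loss(encoder_feats, out_size=(32, 32)):
    """Layer-wise distillation: the attention map of each block imitates the (detached)
    attention map of the following, deeper block; no extra labels are required."""
    maps = [attention_map(f, out_size) for f in encoder_feats]
    loss = 0.0
    for shallow, deep in zip(maps[:-1], maps[1:]):
        loss = loss + F.mse_loss(shallow, deep.detach())
    return loss

# Typical usage: feats = [b1, b2, b3, b4] collected during the encoder forward pass,
# total_loss = segmentation_loss + lambda_sad * self_attention_distillation_loss(feats)
```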
Many studies have confirmed that more information can be obtained at the encoder and the bottleneck of network. CE-Net ( Gu et al., 2019 ) presents two modules at the bottleneck. One module uses multi-scale dilated convolution to extract rich features, while the other uses multi-scale pooling operation to further obtain context information. Besides, Shan et al. (2020) propose VB-Net, which is based on V-Net, to achieve more effective segmentation by adding bottleneck blocks by convolutions, but such models are computationally expensive. To utilize spatial information in neural networks, Pan et al. (2017) propose Spatial CNN (SCNN), in which slice-by-slice convolutions within feature maps are employed instead of traditional deep layer-by-layer convolutions, so that messages are transferred between pixels across rows and columns in the layer. In this paper, we attempt to introduce a spatial convolution block into the bottleneck of the encoderdecoder network by using a sequential message passing scheme similar to SCNN. This kind of message passing mechanism helps to propagate the information between neurons, avoid the influence of sparse and subtle supervision, and make better use of the contextual relationships of pixels. Therefore, the U-shaped neural network is strengthened and the training convergence of network can also be accelerated.
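To make the slice-by-slice message passing concrete, the PyTorch sketch below updates feature rows top-to-bottom and then feature columns left-to-right, each slice receiving a convolved message from its predecessor. The original SCNN uses four directional passes (down, up, right, left) and particular kernel sizes; here only two passes and an arbitrary kernel width are shown, so this is an illustrative simplification rather than the exact re-extractor used in SSA-Net.

```python
import torch
import torch.nn as nn

class SpatialMessagePassing(nn.Module):
    """SCNN-style sequential convolution: information propagates across rows and columns
    of the feature map instead of only through stacked layer-by-layer convolutions."""
    def __init__(self, channels, k=9):
        super().__init__()
        self.down = nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), bias=False)
        self.right = nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), bias=False)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        rows = list(torch.split(x, 1, dim=2))          # H slices of shape (B, C, 1, W)
        for i in range(1, len(rows)):                  # top-to-bottom pass
            rows[i] = rows[i] + self.relu(self.down(rows[i - 1]))
        x = torch.cat(rows, dim=2)

        cols = list(torch.split(x, 1, dim=3))          # W slices of shape (B, C, H, 1)
        for i in range(1, len(cols)):                  # left-to-right pass
            cols[i] = cols[i] + self.relu(self.right(cols[i - 1]))
        return torch.cat(cols, dim=3)

# Example: bottleneck = SpatialMessagePassing(512); y = bottleneck(torch.randn(1, 512, 16, 16))
```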
Few-shot learning and class balancing
Because manual labeling is time-consuming, laborious and expensive, many researchers have conducted studies in few-shot learning. Some researchers choose transfer learning ( Raghu et al., 2019;Minaee et al., 2020 ), which refers to applying the learned knowledge to other problems in different but related fields to solve new tasks. In addition, many studies augment the data through Generative Adversarial Networks (GAN) ( Goodfellow et al., 2014 ) or its extensions ( Mahapatra et al., 2018;, which create new images and corresponding masks, and then add the synthesized data to the training set to expand the training image. Mahapatra et al., 2018 propose a model to generate many synthetic disease images from real disease images by Conditional GAN. These algorithms are computationally intensive and may re-quire additional annotation data. Apart from that, most advanced methods use class activation map (CAM) ( Zhou et al., 2016 ) and gradient-weighted class activation map (Grad-CAM) ( Selvaraju et al., 2017 ) for object localization and image-level weakly supervised semantic segmentation, which get results from feature heatmaps of the network. Sometimes, these methods are used as a basic step for semantic segmentation of large and clustered objects. For instance, Wang et al. (2020d) propose a self-supervised equivariant attention mechanism, in which CAM is combined with pixel correlation module to narrow the gap between full and weak supervisions. So that, for the segmentation of COVID-19 lesions, which are small and scattered, these methods are not ideal. Furthermore, Lee (2013) propose a semi-supervised framework to learn from limited data, which utilize the segmentation results with pseudo labels generated from the model to retrain the model. Then by continuous iterations, this strategy can use few labeled data and pseudo data to improve the performance of network, which is also confirmed in Fan et al., 2020 . In this work, we build a similar iterative framework and add a trust module after each iteration to make the pseudo labels more reliable.
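A minimal sketch of such an iterative pseudo-labelling loop is given below: after every round of training on the labelled pool, the current model labels the unlabelled slices and only high-confidence pixels are kept, a simple stand-in for the trust filtering discussed later. The round count, the confidence threshold and the user-supplied training function are illustrative assumptions.

```python
import torch

def iterative_pseudo_labelling(model, labeled_loader, unlabeled_images, train_one_round,
                               n_rounds=3, conf_thresh=0.9):
    """Semi-supervised few-shot training by pseudo-labelling.
    train_one_round(model, labeled_loader, pseudo_pool) is assumed to run one round of
    supervised training on the labelled data plus the current pseudo-labelled pairs."""
    pseudo_pool = []
    for _ in range(n_rounds):
        train_one_round(model, labeled_loader, pseudo_pool)
        pseudo_pool = []
        model.eval()
        with torch.no_grad():
            for img in unlabeled_images:                              # img: (C, H, W) tensor
                prob = torch.softmax(model(img.unsqueeze(0)), dim=1)[0]   # (K, H, W)
                conf, label = prob.max(dim=0)
                mask = conf > conf_thresh                             # keep only trusted pixels
                pseudo_pool.append((img, label, mask))
        model.train()
    return model, pseudo_pool
```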
The issue of long-tailed training datasets has attracted a lot of attention in machine learning. Zhou et al. (2020b) propose a deep learning algorithm to solve the large-scene-small-object problem. In addition, Cui et al., 2019 present that as the sample number of a class increases, the penalty term of this class decreases significantly. Therefore, through theoretical derivation, they design a re-weighting scheme to re-balance the loss, so as to better achieve long-tailed classification. Kervadec et al. (2019) propose a boundary loss for highly unbalanced segmentation, which uses integrals over the interface between regions rather than using unbalanced integrals over regions. Wu et al. (2020) also present a new loss function called distribution-balanced loss for the multi-label recognition of long-tailed class distributions. This loss re-balances the weights considering the impact of label co-occurrence, and mitigates the over-suppression of negative labels. Different from these methods, we introduce a re-weighting module before the training of each iteration to balance the class distribution.
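The re-weighting step can be sketched as follows, using the effective-number-of-samples rule of Cui et al. applied to per-class pixel counts; the counts and the choice of β below are placeholders, and for pixel-level counts β has to be chosen extremely close to 1 for the weights to differ appreciably.

```python
import numpy as np

def class_weights_from_pixel_counts(pixel_counts, beta):
    """Weights proportional to 1 / E_c with E_c = (1 - beta**n_c) / (1 - beta),
    the 'effective number of samples' of class c, normalised to sum to the class count.
    Rare classes (e.g. consolidation pixels) receive the largest weights."""
    n = np.asarray(pixel_counts, dtype=np.float64)
    effective = (1.0 - np.power(beta, n)) / (1.0 - beta)
    w = 1.0 / effective
    return w * len(n) / w.sum()

# Hypothetical pixel counts for background, GGO and consolidation in one training split
print(class_weights_from_pixel_counts([5_000_000, 300_000, 40_000], beta=1.0 - 1e-7))
```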
COVID-19 Pneumonia infection segmentation
Due to the lack of high-quality pixel-level annotation, a large number of AI-based studies are aimed at solving the issue of COVID-19 diagnosis ( Kang et al., 2020 ) and lesion segmentation from the perspective of using limited training datasets. For example, Oh et al. (2020) provide a method of patch-based convolutional neural network, which has less trainable parameters for COVID-19 diagnosis. He et al. (2020) not only build a publicly-available dataset, but also propose a self-trans method to combine contrastive self-supervised learning with transfer learning to learn strong and unbiased feature representations. Wang et al. (2020c) propose a weakly-supervised deep learning framework for COVID-19 classification and lesion localization by using 3D CT volumes. The 3D deep neural network is used to predict the probability of infections, while the location of COVID-19 lesions is the overlap of the activation region in classification network and the unsupervised connected components. These works are much concerned about the detection of infectious locations and cannot obtain the shape and classification.
Certainly, many deep learning networks have been established to segment COVID-19 lesions. However, most of them are based on adequate data and supervised learning. Yan et al. (2020) introduce a deep CNN, which provides a feature variation block to adjust the global properties of features for the segmentation of COVID-19 lesions. Shan et al. (2020) use the human in the loop strategy for efficient annotation, which can help radiologists improve the automatic labeling of each case. In terms of public datasets, pixel-level annotations are often noisy. Wang et al. (2020a) present an adaptive mechanism to better deal with noisy labels. In their work, they propose a COVID-19 pneumonia lesion segmentation network and a noise-robust dice loss to better segment lesions of various scales and appearances as well. Although the adaptive mechanism can effectively obtain more high-quality annotations, it is very complicated to implement. Fan et al., 2020 propose Inf-Net to automatically segment infected area from CT images. A parallel partial decoder is used to aggregate high-level features and generate global features. Then, a reverse attention module and an edge attention module are used to enhance the representation of boundary. Meanwhile, a semi-supervised training strategy is also introduced.
Nevertheless, most research work ignores the imbalance of infection categories in datasets. In fact, whether the lesion is GGO or consolidation, better identification of the distribution of lesions at different stages helps doctors understand a patient's condition and plan treatment. Therefore, it is necessary to segment not only the total infected regions but also multi-class pneumonia infections with limited data.
Method
In this section, we first present the details of our proposed spatial self-attention network in terms of network architecture, self-attention learning, spatial convolution and loss function. We then present the semi-supervised few-shot learning framework for COVID-19 lesion segmentation based on the re-weighting module and the trust module.
Spatial self-attention network (SSA-Net)
For the sake of obtaining more contextual and spatial information in the learning network and extracting the complex and obscure COVID-19 lesion areas effectively, we propose an encoder-decoder based deep neural network named Spatial Self-Attention network (SSA-Net) for lesion segmentation. As shown in Fig. 2, the proposed SSA-Net consists of three major parts: a feature encoder with self-attention learning, a feature re-extractor with spatial convolution, and a feature decoder. Each CT slice is concatenated with its lung mask as the input of our proposed network to remove the background except the lungs. We use ResNet34 (He et al., 2016) as the backbone of the feature encoder module. A self-attention learning module is added after the four residual blocks to enhance representation learning by distilling layer-wise attention and useful contextual information from deeper layers. The feature map obtained from the fourth residual block is fed to the feature re-extractor, where spatial convolution transmits spatial information. Skip connections are used to concatenate the encoder and the decoder. To improve the decoding performance, we use upscaling and deconvolution (Apostolopoulos et al., 2017) operations. Finally, after the sigmoid activation function, the result generated from the feature decoder has the same size as the input.
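To make the overall wiring concrete, the following is a minimal PyTorch sketch of an SSA-Net-style encoder-decoder. It is not the authors' implementation: the self-attention learning module is only indicated by comments, the spatial-convolution re-extractor is an identity placeholder (a working sketch of it is given later in this section), and the decoder channel sizes are assumptions chosen to match torchvision's ResNet34 stages.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34


class DecoderBlock(nn.Module):
    """Decoder layer: 1x1 conv -> 3x3 transposed conv (2x upsampling) -> 1x1 conv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch // 4, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(in_ch // 4, in_ch // 4, 3, stride=2,
                               padding=1, output_padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch // 4, out_ch, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.block(x)


class SSANetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet34(weights=None)
        # The input is a CT slice concatenated with its lung mask -> 2 channels.
        self.stem = nn.Sequential(
            nn.Conv2d(2, 64, 7, stride=2, padding=3, bias=False),
            backbone.bn1, backbone.relu, backbone.maxpool)
        self.enc1, self.enc2 = backbone.layer1, backbone.layer2
        self.enc3, self.enc4 = backbone.layer3, backbone.layer4
        self.re_extractor = nn.Identity()   # placeholder for the spatial-conv module
        self.dec1 = DecoderBlock(512, 256)  # skip features are concatenated channel-wise
        self.dec2 = DecoderBlock(512, 128)
        self.dec3 = DecoderBlock(256, 64)
        self.head = nn.Sequential(nn.ConvTranspose2d(128, 32, 4, stride=4),
                                  nn.Conv2d(32, 1, 1))

    def forward(self, ct_slice, lung_mask):
        x = self.stem(torch.cat([ct_slice, lung_mask], dim=1))
        e1 = self.enc1(x)                   # attention maps for self-attention
        e2 = self.enc2(e1)                  # learning would be collected from
        e3 = self.enc3(e2)                  # e1..e4 during training
        e4 = self.enc4(e3)
        d = self.dec1(self.re_extractor(e4))
        d = self.dec2(torch.cat([d, e3], dim=1))
        d = self.dec3(torch.cat([d, e2], dim=1))
        return torch.sigmoid(self.head(torch.cat([d, e1], dim=1)))
```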
Feature encoder
In this work, the feature encoder consists of four residual blocks for down-sampling operations, the same as the encoder of ResNet34. To strengthen the representation, we introduce a self-attention learning module after each residual block, so that the attention maps of shallower layers can distil useful contextual information from those of deeper layers, and the better representation learned at lower layers in turn benefits the deeper layers. Through this kind of self-learning, the representation can be strengthened without extra training time or additional labels.

Fig. 2. The architecture of the Spatial Self-Attention network (SSA-Net): a feature encoder with self-attention learning, a feature re-extractor with spatial convolution, and a feature decoder, connected by skip connections and followed by a sigmoid activation.
Self-Attention Learning: Several works (Hou et al., 2019; Ren et al., 2020; Zhong et al., 2020) have shown that attention mechanisms can provide useful contextual information for segmentation. Thus, we introduce a self-attention learning mechanism that exploits attention maps derived from the network's own layers, without the need for additional labels or external supervision. The attention maps used in this paper are activation-based. Specifically, $A_m \in \mathbb{R}^{C_m \times H_m \times W_m}$ denotes the output of the $m$-th residual block ($m = 1, 2, 3, 4$), where $C_m$, $H_m$, $W_m$ denote the channel, height and width of the output, respectively. The attention map collapses the three-dimensional feature (channel, height, width) into a two-dimensional feature (height, width): the distribution of spatial features is determined by the activation values of each channel, and the importance of each element for the final output depends on its absolute value in the map. Therefore, the attention map is generated by a mapping function that aggregates the absolute values of elements across the channel dimension,

$$G^{z}_{\mathrm{sum}}(A_m) = \sum_{i=1}^{C_m} |A_{mi}|^{z},$$

where $A_{mi}$ denotes the $i$-th slice of $A_m$ in the channel dimension, and $z$ is a natural number greater than 1. The larger $z$ is, the more attention is paid to highly activated regions. In our experiments, $z$ is set to 2, because this has been verified to maximize the performance improvement (Hou et al., 2019).
We then perform a spatial softmax operation ($S$) on $G^{2}_{\mathrm{sum}}(A_m)$. Because the attention maps of two adjacent layers have different sizes, a bilinear upsampling operation ($B$) is used to bring them to the same size. Formally, the whole process is represented by the function

$$\Phi(A_m) = B\big(S\big(G^{2}_{\mathrm{sum}}(A_m)\big)\big).$$

Finally, we use the mean squared error loss ($L_{mse}$) to calculate the attention loss ($AT\_Loss$, shown in Fig. 2) between the attention maps of adjacent residual blocks,

$$AT\_Loss_m = L_{mse}\big(\Phi(A_m), \Phi(A_{m+1})\big),$$

so the total loss of self-attention learning is

$$Loss_{SA} = \frac{1}{N}\sum_{n=1}^{N}\sum_{m=1}^{M-1} AT\_Loss^{(n)}_m,$$

where $N$ is the number of samples, $M$ is the number of residual blocks, and $M$ is equal to 4 in this paper.
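As a sketch of how this self-attention loss could be computed, the snippet below builds the channel-collapsed attention map, applies a spatial softmax, bilinearly upsamples deeper maps to the shallower map's size, and accumulates the MSE between adjacent stages. The exact pairing and normalisation details are assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F


def attention_map(feat, z=2):
    """Collapse a (B, C, H, W) feature map to a (B, H, W) map: sum_i |A_i|^z."""
    return feat.abs().pow(z).sum(dim=1)


def spatial_softmax(att):
    b, h, w = att.shape
    return F.softmax(att.view(b, -1), dim=1).view(b, h, w)


def self_attention_loss(features, z=2):
    """MSE between attention maps of adjacent residual blocks.

    `features` is a list [A_1, ..., A_M] of per-stage feature maps; each deeper,
    smaller map is upsampled to the shallower map's size before comparison.
    """
    maps = [spatial_softmax(attention_map(f, z)) for f in features]
    loss = 0.0
    for shallow, deep in zip(maps[:-1], maps[1:]):
        deep_up = F.interpolate(deep.unsqueeze(1), size=shallow.shape[-2:],
                                mode="bilinear", align_corners=False).squeeze(1)
        loss = loss + F.mse_loss(shallow, deep_up)
    return loss
```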
Feature re-extractor
The feature re-extractor is a new spatial convolution module placed at the bottleneck of our encoder-decoder network. Using a sequential message passing scheme, this module aims to extract more spatial information between rows and columns in the feature map and to strengthen the training.
Spatial Convolution: Several works (Gu et al., 2019; Pan et al., 2017) have introduced innovations at the bottleneck of encoder-decoder structures and achieved effective results. To improve the network's ability to explore spatial information and to better interpret the low-contrast, fuzzy boundary areas common in COVID-19 CT images, we add a spatial convolution module that obtains feature maps through channel-wise convolutions with large kernels.
Specifically, the feature map obtained from the feature encoder is a 3D tensor $T$ of size $C \times H \times W$, where $C$, $H$ and $W$ are the number of channels, the height and the width, respectively. As shown in Fig. 3, taking the $H$ dimension as an example (i.e., passing messages from top to bottom), the feature map is cut into $H$ slices. Let $k$ denote the kernel width; a pixel in the next slice can then receive messages from $k \times C$ pixels in the current slice. The first slice is convolved by a $1 \times k \times C$ convolution layer, the output is added to the second slice, and the new output is fed to the next $1 \times k \times C$ convolution. This process is iterated $H$ times to obtain the final output. The same operation is carried out in four directions (downward, upward, leftward and rightward) to complete the spatial information transmission.
Further, let $T_{i,j,k}$ denote an element of the 3D tensor $T$, where $i$, $j$, $k$ are the indexes of channel, height and width, respectively. The spatial convolution is then

$$T'_{i,j,k} = \begin{cases} T_{i,j,k}, & j = 1,\\ T_{i,j,k} + L\Big(\sum_{m}\sum_{n} T'_{m,\,j-1,\,k+n}\, K_{m,i,n}\Big), & j > 1, \end{cases}$$

where $T'$ denotes the updated element, $L$ is the ReLU nonlinear activation function, and $K_{m,i,n}$ denotes the weight between an element in channel $m$ of the previous slice and an element in channel $i$ of the current slice, with an offset of $n$ columns between the two elements.
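A sketch of this slice-wise message passing for one direction (top to bottom) is shown below: each row receives a ReLU-activated, channel-mixing 1 × k convolution of the previously updated row, and the same pattern would be repeated downward, upward, leftward and rightward. The kernel width used here is an illustrative assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialConvDown(nn.Module):
    """Pass messages row by row (top -> bottom) through a channel-mixing 1 x k convolution."""
    def __init__(self, channels, k=9):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=(1, k),
                              padding=(0, k // 2), bias=False)

    def forward(self, x):                       # x: (B, C, H, W)
        rows = list(torch.split(x, 1, dim=2))   # H slices of shape (B, C, 1, W)
        for j in range(1, len(rows)):
            # add the message coming from the previously updated row
            rows[j] = rows[j] + F.relu(self.conv(rows[j - 1]))
        return torch.cat(rows, dim=2)
```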
Feature decoder
The feature decoder constructs the segmentation result from the outputs of the feature encoder and the feature re-extractor. Through skip connections, the feature decoder obtains more details from the encoder to make up for the information lost in pooling and convolution operations. Each decoder layer includes a 1 × 1 convolution, a 3 × 3 transposed convolution and a 1 × 1 convolution. Based on the skip connections and the concatenation of decoder layers, the output has the same size as the input. Finally, we adopt the sigmoid function as the activation to generate the segmentation result.
Loss function
The total loss comprises two terms: a segmentation loss and a self-attention loss. COVID-19 infected areas at an early stage, visible as GGO, are often scattered and occupy only a small region of the image. The Dice loss proposed in Milletari et al. (2016) has proved effective when the proportion of foreground is small, so we adopt the Dice loss as the segmentation loss in our task. All the networks used for comparison are trained with the same loss function (Dice loss), so all experiments are carried out under the same settings. The Dice loss is defined as

$$Loss_{seg} = 1 - \frac{2\,|G \cap S|}{|G| + |S|},$$

where $G$ denotes the ground truth and $S$ represents the segmentation. The self-attention loss is given in Eq. (4). Thus, as shown in Eq. (7), the sum of the segmentation loss and the self-attention loss is the total loss of the network.
$$Loss_{sum} = Loss_{seg} + \alpha \, Loss_{SA},$$

where $\alpha$ is the weight of the self-attention learning loss, used to balance the influence of the attention loss on the task; it is set to 0.1 in our experiments.
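A compact sketch of the segmentation loss and the combined objective is given below; the smoothing constant is a common implementation detail and not taken from the paper.

```python
import torch


def dice_loss(pred, target, eps=1e-6):
    """pred, target: (B, 1, H, W) tensors, pred in [0, 1] after the sigmoid."""
    pred, target = pred.flatten(1), target.flatten(1)
    inter = (pred * target).sum(dim=1)
    denom = pred.sum(dim=1) + target.sum(dim=1)
    return (1.0 - (2.0 * inter + eps) / (denom + eps)).mean()


def total_loss(pred, target, attention_loss, alpha=0.1):
    """Loss_sum = Loss_seg + alpha * Loss_SA."""
    return dice_loss(pred, target) + alpha * attention_loss
```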
Semi-supervised few-shot learning
Due to the class imbalance and the limited labeled data in COVID-19 datasets, we propose a semi-supervised few-shot learning framework, which consists of two major parts: lung region segmentation and multi-class infection segmentation, as shown in Fig. 4.
Lung region segmentation
Lung region segmentation is the initial step of our COVID-19 lesion segmentation. First, we use a trained U-Net model provided by Hofmanninger et al. (2020) to segment the lung region. Then, all unlabeled CT slices are segmented by this pre-trained U-Net to obtain the lung boundaries.
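In code, this initialization step might look like the sketch below, where `lung_unet` stands for any pre-trained 2D lung segmentation model (e.g. the one released by Hofmanninger et al.); the batching and the threshold value are assumptions.

```python
import torch


@torch.no_grad()
def lung_masks(ct_slices, lung_unet, threshold=0.5):
    """ct_slices: (N, 1, H, W) tensor; lung_unet: a pre-trained 2D segmentation model."""
    lung_unet.eval()
    masks = []
    for batch in ct_slices.split(8):              # small batches to limit memory
        prob = torch.sigmoid(lung_unet(batch))    # per-pixel lung probability
        masks.append((prob > threshold).float())  # binary lung mask
    return torch.cat(masks)
```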
Multi-class infection segmentation
Because manual labeling by professional doctors is both time-consuming and expensive, labeled public datasets are limited, and labels for multi-class infection areas are even scarcer. In this work, we present a semi-supervised few-shot learning strategy that leverages a large number of unlabeled CT images to effectively augment the training dataset. Moreover, we introduce a re-weighting module and a trust module to balance the distribution of different lesion classes and to obtain more reliable pseudo labels.
An overview of our semi-supervised few-shot learning framework is shown in Fig. 4. The framework is based on a random sampling strategy and uses unlabeled data to gradually expand the training dataset and generate pseudo labels. Each CT slice is concatenated with its lung mask, generated by the lung region segmentation, as the input of our proposed SSA model. During training, we exploit a re-weighting module, a class re-balancing strategy based on the number of pixels of each class, and more reliable pseudo labels are obtained from the trust module by selecting high-confidence predictions.
Specifically, the labeled dataset $D_{labeled}$ is divided into an original training set $D_{training}$, a validation set $D_{validation}$ and a test set $D_{test}$. We first pre-train an SSA model $M_1$ with the re-weighting module using the original labeled training set $D_{training}$. Meanwhile, we use the validation set with true labels to measure the performance of each newly trained SSA model, and images in the unlabeled dataset $D_{unlabeled}$ are tested by the pre-trained $M_1$ with the trust module to generate pseudo labels. Next, we randomly select $t$ pseudo-labeled samples and add them to the original training set to form a strengthened training dataset. We then use this dataset to train a new SSA model in the same way and repeat this process. Therefore, the strengthened training set consists of the original training set $D_{training}$ and the pseudo-label training set $D_{pseudo}$. Once a new SSA model $M_j$ is generated, all the pseudo labels of images in $D_{pseudo}$ are renewed. If the number of unlabeled images in $D_{unlabeled}$ drops below $t$, we add all the remaining images in $D_{unlabeled}$ with their pseudo labels to the strengthened training set. Once all the images of $D_{unlabeled}$ have been used, we only update the pseudo labels of $D_{pseudo}$ during the iteration. The iteration stops when the DSC on the validation set $D_{validation}$ no longer improves.

Fig. 4. The architecture of the semi-supervised few-shot learning framework, which consists of two major parts: lung region segmentation and iterative infection segmentation. A trained U-Net model is used to segment the lung region in each CT image as an initialization of multi-class infection segmentation. Each lung mask is then concatenated with its CT image as the input of the multi-class infection segmentation. In this part, we first train the model with SSA-Net, introducing a re-weighting module to rebalance the class distribution. The unlabeled data are tested by the pre-trained SSA model with a trust module to obtain more reliable pseudo labels. Second, we take the original data and the generated pseudo-labeled data as the new training dataset. Third, we train a new SSA model on this dataset in the same way. This procedure is repeated until all unlabeled images have been predicted and the latest model no longer improves.
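The iterative scheme can be summarised as the following Python-style pseudocode, where `train_ssa`, `evaluate_dsc` and `predict_pseudo_label` are placeholders for the training, validation and trust-module inference steps; they are not functions from the paper's code base.

```python
import random


def semi_supervised_iterations(d_train, d_val, d_unlabeled, t,
                               train_ssa, evaluate_dsc, predict_pseudo_label):
    """d_train: list of (image, label) pairs; d_unlabeled: list of images."""
    d_unlabeled = list(d_unlabeled)
    random.shuffle(d_unlabeled)
    model = train_ssa(d_train)                    # pre-train with the re-weighting module
    pseudo_imgs = []
    best_dsc = evaluate_dsc(model, d_val)
    while True:
        # move up to t randomly sampled unlabeled images into the pseudo pool
        pseudo_imgs += d_unlabeled[:t]
        d_unlabeled = d_unlabeled[t:]
        # (re-)generate pseudo labels with the current model + trust module
        pseudo_set = [(img, predict_pseudo_label(model, img)) for img in pseudo_imgs]
        model = train_ssa(d_train + pseudo_set)   # strengthened training set
        dsc = evaluate_dsc(model, d_val)
        if not d_unlabeled and dsc <= best_dsc:   # all images used, DSC no longer improves
            return model
        best_dsc = max(best_dsc, dsc)
```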
Re-weighting module: In this module, we use the cross-entropy loss, which is well suited to multi-class training tasks. However, not only is the number of consolidation samples in the dataset small, the proportion of consolidation pixels within each image is also very small. In view of this class imbalance, we calculate the pixel proportions of the two lesion categories (GGO and consolidation) over all training data and use the result to set their weights in the cross-entropy loss. The final loss function is

$$Loss = -\sum_{i}\sum_{c=1}^{C} \frac{1}{C\,P_c}\, y_{ic}\, \log(p_{ic}),$$

where $C$ denotes the number of categories and $P_c$ represents the pixel proportion of class $c$ in the training set, which consists of the original labeled training set $D_{training}$ and the current pseudo-label training set $D_{pseudo}$. The initial labeled training set is a multi-class dataset; to guide the model to identify different types of lesions, we ensure that the original labeled training set contains samples of all categories. $y_{ic}$ is 1 if sample $i$ belongs to class $c$ and 0 otherwise, and $p_{ic}$ is the predicted probability of class $c$ for sample $i$. In this way, the weight of a class with a small proportion is greater than 1 and the weight of a class with a large proportion is less than 1, so as to achieve a balance between categories.
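A sketch of the re-weighting computation is shown below: per-class pixel proportions are accumulated over the current training labels and converted into cross-entropy weights, so that rare classes weigh more than 1 and frequent classes less than 1. The weight form 1 / (C · P_c) reproduces the behaviour described above but is an assumption about the exact formula, and counting the background as one of the classes is also an assumption.

```python
import numpy as np
import torch
import torch.nn as nn


def class_weights(label_maps, num_classes):
    """label_maps: iterable of integer arrays with values in [0, num_classes)."""
    counts = np.zeros(num_classes, dtype=np.float64)
    for lab in label_maps:
        counts += np.bincount(np.asarray(lab).ravel(), minlength=num_classes)
    proportions = counts / counts.sum()
    # weight_c = 1 / (C * P_c): >1 for rare classes, <1 for frequent classes
    weights = 1.0 / (num_classes * np.clip(proportions, 1e-8, None))
    return torch.tensor(weights, dtype=torch.float32)


if __name__ == "__main__":
    # toy usage with dummy 3-class label maps (background, GGO, consolidation)
    dummy_labels = [np.random.randint(0, 3, size=(64, 64)) for _ in range(4)]
    criterion = nn.CrossEntropyLoss(weight=class_weights(dummy_labels, num_classes=3))
```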
Trust module: Usually, one simply picks the class label with the maximum predicted probability for each unlabeled sample. However, not all predicted values are correct, and false values would mislead the model during the iterative process. The work in Lee (2013) has shown that pseudo labels with high confidence are more effective. Hence, we add a trust module to re-evaluate the pseudo infection class labels obtained from the current SSA model, by setting a threshold $\eta$ to select high-confidence values. The predicted pseudo label with credibility is defined as

$$p = \begin{cases} c, & p_c \geq \eta,\\ 0, & p_c < \eta, \end{cases}$$

where $p$ denotes the final pseudo label after re-evaluation, $c$ represents the infection category predicted by the current SSA model, and $p_c$ denotes the maximum predicted probability of an unlabeled pixel. The pseudo label is set to 0 and the pixel is treated as uninfected lung region when the probability of the predicted category is below the threshold. The setting of $\eta$ is important: it is the threshold for re-evaluating the pseudo infection class labels and selecting high-confidence values, and the higher $\eta$ is, the more confident the retained pseudo labels are. Based on experience, we tried several values and set $\eta$ to 0.95 in our experiments.
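The trust-module rule itself is a per-pixel thresholding operation; the following sketch applies it to the logits of a single unlabeled slice.

```python
import torch


def trusted_pseudo_label(logits, eta=0.95):
    """logits: (C, H, W) raw network outputs for one unlabeled slice."""
    probs = torch.softmax(logits, dim=0)
    confidence, label = probs.max(dim=0)   # arg-max class and its probability
    label[confidence < eta] = 0            # low-confidence pixels -> uninfected lung
    return label
```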
COVID-19 pneumonia infection datasets
At present, many public datasets on COVID-19 are available for free. However, as mentioned above, due to the difficulty of manual labeling, most of the data only have image-wise labels for COVID-19 detection, and only a few datasets are labeled precisely for segmentation. Clinical CT scans collected from currently published COVID-19 CT datasets are used for our experiments.
One of the datasets is the COVID-19-CT-Seg dataset, which is publicly available 3 under a CC BY-NC-SA license and contains 20 public COVID-19 CT scans from the Coronacases Initiative and Radiopaedia. The corresponding annotations (Jun et al., 2020), including left lung, right lung and infection, can be freely downloaded 4. As described in Ma et al. (2020), the last 10 cases in this dataset, which come from Radiopaedia, have been adjusted to the lung window [-1250, 250] and then normalized to [0, 255]. The other dataset, the COVID-19 CT Segmentation dataset, and its annotations are available 5; it includes 100 axial CT images from more than 40 patients with COVID-19 collected by the Italian Society of Medical and Interventional Radiology and 9 axial volumetric CT scans from Radiopaedia 6. In this dataset, the lung masks are contributed by Hofmanninger et al. (2020), and the images and volumes were segmented using three labels: ground-glass opacity, consolidation and pleural effusion.
We use three datasets (Dataset 1, Dataset 2, Dataset 3) for our experiments, as shown in Table 2. First, the COVID-19-CT-Seg dataset consists of 1848 slices with lesions, which have been segmented by experienced radiologists. This dataset is used to demonstrate the effectiveness and stability of our proposed segmentation network, and we refer to these 1848 slices as Dataset 1. As in the experiment of Ma et al. (2020), we randomly split the twenty cases in Dataset 1 into five groups for 5-fold cross-validation. Second, Dataset 2 consists of 98 slices from the COVID-19 CT Segmentation dataset, which we divide into the same training and validation sets as in the experiment of Fan et al. (2020). Finally, from the COVID-19 CT Segmentation dataset we obtain 468 slices with multi-class infection labels in total as Dataset 3, which is used to confirm that our multi-class semi-supervised few-shot model is feasible and effective.
Experimental settings
Data preprocessing: For Dataset 1, following the instructions of the COVID-19-CT-Seg dataset 7, we preprocessed the image data by adjusting the gray values of the first ten volumes to the lung window [-1250, 250] and then normalizing them to [0, 255]. In addition, we cropped the last ten groups of images from 630 × 630 to 512 × 512, making them the same size as the first ten groups. The same operations were performed on Dataset 3. Cropping is performed by computing the centre of gravity from the lung label available in the corresponding dataset and then deriving the crop position from this centre of gravity.
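The windowing, normalisation and centre-of-gravity crop can be expressed as the following NumPy sketch; the clamping of the crop window to the image border is an assumption about edge handling.

```python
import numpy as np


def window_and_normalise(ct, low=-1250.0, high=250.0):
    """Clip to the lung window and rescale to [0, 255]."""
    ct = np.clip(ct.astype(np.float32), low, high)
    return (ct - low) / (high - low) * 255.0


def crop_around_lung(ct, lung_label, size=512):
    """Crop a size x size window centred on the lung centre of gravity."""
    ys, xs = np.nonzero(lung_label)
    cy, cx = int(ys.mean()), int(xs.mean())          # lung centre of gravity
    half = size // 2
    # keep the crop window inside the image
    y0 = min(max(cy - half, 0), ct.shape[0] - size)
    x0 = min(max(cx - half, 0), ct.shape[1] - size)
    return ct[y0:y0 + size, x0:x0 + size]
```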
Evaluation metrics: We used four metrics for the quantitative evaluation of segmentation results $S$ against the ground truth $G$: the Dice similarity coefficient (DSC), the 95th percentile of the Hausdorff Distance (HD95), the Mean Absolute Error (MAE) and the Normalized Surface Dice (NSD). The first three measures are widely used in the evaluation of medical image processing, and the last one better evaluates edge segmentation. For DSC and NSD (Nikolov et al., 2018), higher scores indicate better segmentation; conversely, for HD and MAE, lower scores indicate better segmentation.
1) Dice Similarity Coefficient (DSC): The DSC is a similarity measure that is widely used in medical image segmentation and was popularized as a loss for volumetric segmentation by Milletari et al. (2016). It measures the overlap between two samples and is formulated as

$$DSC(G, S) = \frac{2\,|G \cap S|}{|G| + |S|}.$$

2) Hausdorff Distance (HD): This is also a commonly used measure of the similarity between a segmentation result and the ground truth. DSC is sensitive to the inner filling of the mask, while HD is sensitive to the boundary. HD is defined as

$$HD(G, S) = \max\Big\{\sup_{g \in G}\inf_{s \in S} d(g, s),\; \sup_{s \in S}\inf_{g \in G} d(s, g)\Big\}.$$

The 95th percentile of the Hausdorff Distance (HD95) uses the 95th percentile of the distances instead of the maximum, in order to eliminate the effect of a very small subset of outliers.
3) Mean Absolute Error (MAE): This is the average of the absolute errors, which better reflects the prediction error. It is defined as

$$MAE(G, S) = \frac{1}{H \times W}\sum_{x} |G(x) - S(x)|.$$

4) Normalized Surface Dice (NSD): Unlike the DSC, this measure assesses the overlap of the segmentation and ground truth surfaces within a specified tolerance $\tau$, instead of the overlap of the two volumes. The surface here is the boundary of the mask: the ground truth and segmentation surfaces are written $G' = \partial G$ and $S' = \partial S$, and $B^{(\tau)}_{G'}$ and $B^{(\tau)}_{S'}$ denote the border regions of these two surfaces at tolerance $\tau$, giving

$$NSD(G, S) = \frac{|S' \cap B^{(\tau)}_{G'}| + |G' \cap B^{(\tau)}_{S'}|}{|S'| + |G'|},$$

where $\tau$ is set to 3 mm in our experiments, consistent with prior work.
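Reference-style implementations of DSC, MAE and a surface-distance-based HD95 on binary masks are sketched below (NSD additionally requires the tolerance-band bookkeeping described above and is omitted). Using scipy's Euclidean distance transform for surface distances is one common approach, not necessarily the evaluation code used in the paper.

```python
import numpy as np
from scipy import ndimage


def dsc(seg, gt):
    seg, gt = seg.astype(bool), gt.astype(bool)
    denom = seg.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(seg, gt).sum() / denom


def mae(seg, gt):
    return float(np.abs(seg.astype(np.float32) - gt.astype(np.float32)).mean())


def hd95(seg, gt, spacing=(1.0, 1.0)):
    """95th percentile of symmetric surface distances between two binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    seg_surface = seg & ~ndimage.binary_erosion(seg)
    gt_surface = gt & ~ndimage.binary_erosion(gt)
    if not seg_surface.any() or not gt_surface.any():
        return float("nan")
    # distance from every voxel to the nearest surface voxel of the other mask
    dt_gt = ndimage.distance_transform_edt(~gt_surface, sampling=spacing)
    dt_seg = ndimage.distance_transform_edt(~seg_surface, sampling=spacing)
    dists = np.concatenate([dt_gt[seg_surface], dt_seg[gt_surface]])
    return float(np.percentile(dists, 95))
```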
Ablation study
In this subsection, we evaluate different variants of the modules presented in Section 3 in order to demonstrate the effectiveness of the key components of our model, including the self-attention learning module and spatial convolution module in SSA-Net, and the re-weighting module and trust module in the semi-supervised few-shot model.
Ablation experiments of SSA-Net
In order to investigate the importance of each component in SSA-Net, we combine spatial convolution (SC) and self-attention learning (SA) with the backbone to obtain new models, and use Dataset 1 to train them. The models are devised as follows: backbone (M1), backbone+SA (M2), backbone+SC (M3) and backbone+SC+SA (M4).

Effectiveness of self-attention learning: We compare M3 and M4 in Table 3 to evaluate the contribution of the self-attention learning mechanism. The results clearly show that spatial convolution together with self-attention learning drives up performance. However, from model M1 to model M2, adding self-attention learning directly, we also notice a drop in accuracy. As mentioned in Hou et al. (2019), self-attention learning is assumed to be added to a half-trained model, and the point at which the SA module is added affects the convergence speed of the network. Here, we also train the backbone by adding the single SA module at different time points (from 10 to 50 episodes) and obtain new variants (E1-E6) of M2. Table 3 displays the segmentation results on Dataset 1; all the networks are trained for up to 150 episodes. The backbone with a single SA module achieves the best segmentation results when the SA module is introduced from episode 40. This confirms that valuable self-attention contextual information can only be extracted from a model trained to a reasonable level. The accuracy decline also reflects the effectiveness of the spatial convolution module, which strengthens the network and accelerates training convergence. Fig. 5 displays two segmentation examples from Dataset 1. From the visual comparison of M2 and M4, we can clearly observe that the segmentation results, highlighted with orange boxes, are better in the model with self-attention learning. This shows that the contextual information generated by self-attention learning is able to guide the network to better extract more complex regions.
Effectiveness of spatial convolution: From Table 3, all metrics show that the models with spatial convolution perform better than the models without this module. This clearly demonstrates that the use of spatial convolution makes the model segment the lesions more accurately. Furthermore, as shown in Fig. 5, the model reproduces more details of the ground truth after introducing spatial convolution, especially in the highlighted orange boxes. The comparison of M3 and M4 also demonstrates that the spatial convolution module not only helps transfer information between rows and columns in the backbone network, but also makes better use of contextual information to detect scattered and obscure lesions once self-attention learning is introduced.

Fig. 5. Ablation studies of different modules for the segmentation of COVID-19 pneumonia lesions. The results show more details similar to the ground truth after introducing spatial convolution, while the contextual information generated by self-attention learning guides the network to better extract more complex and scattered regions. The segmentation results highlighted with orange boxes show the best performance in the model trained with both self-attention learning and spatial convolution.
Ablation experiments of semi-supervised few-shot model
We further extend our SSA-Net to the segmentation of multi-class lesions (GGO and consolidation) with small samples. We use 98 slices from Dataset 3 to train the semi-supervised models, and the remaining data are used for validation. The baselines are devised as follows: SSA-Net with iteration (S1), SSA-Net based on the re-weighting module with iteration (S2), SSA-Net based on the trust module with iteration (S3), and SSA-Net based on both the re-weighting module and the trust module with iteration (S4).
Effectiveness of the re-weighting module: As shown in Table 4, some evaluation metrics of S2 decrease slightly compared with S1. The main reason is that pseudo labels generated by the iterative model may contain inaccurate results, so the re-weighting module is affected and cannot work effectively in the following iterations. Therefore, we derive S3 and S4, which are based on the trust module. The DSC of both GGO and consolidation increases after introducing the re-weighting module. Although the HD95 and NSD of GGO decline slightly, most of the average evaluation metrics improve: the DSC and NSD rise to 0.5608 and 0.5128 respectively, while the HD95 decreases to 0.0071.
Effectiveness of the trust module: From the results of S1 and S3 in Table 4, it is evident that the trust module boosts segmentation performance for both GGO and consolidation. Overall, it improves the average DSC by 3.28% and the average NSD by 1.07%, and reduces the average HD95 to 4.2751 and the average MAE to 0.0072. Furthermore, we observe from S2 and S4 that the trust module is a prerequisite for the re-weighting module: the re-weighting module is only effective when the trust module makes the pseudo labels more reliable.
Comparison of different deep learning networks
We compare our SSA-Net with two state-of-the-art deep learning networks, U-Net and nnU-Net, for semantic or medical image segmentation performance, and with Inf-Net, a COVID-19 infection segmentation network.
From the quantitative comparison shown in Table 5, we observe that nnU-Net, as an improved version of U-Net, performs better in segmentation tasks, mainly because nnU-Net has a more robust structure that adapts to a variety of datasets. Furthermore, the proposed SSA-Net is slightly better than nnU-Net in terms of DSC, HD95 and NSD on both Dataset 1 and Dataset 2. On Dataset 1, our SSA-Net improves the average DSC from 0.6447 to 0.6522 and the average NSD from 0.5347 to 0.5643, and reduces the average HD95 from 5.7383 mm to 5.5260 mm. On Dataset 2, our SSA-Net improves the average DSC from 0.7500 to 0.7540 and the average NSD from 0.5862 to 0.5876, and reduces the average HD95 from 7.1841 mm to 7.0464 mm. These improvements demonstrate that spatial convolution is able to obtain more information between rows and columns in images, and that, on this basis, the self-attention learning mechanism offers more reliable contextual information. Compared with Inf-Net, the advantage of SSA-Net on Dataset 1 is not obvious: in terms of DSC and NSD, our proposed SSA-Net outperforms it by 1.14% and 1%, respectively. On Dataset 2, all evaluation metrics of all networks increase significantly, but our proposed SSA-Net shows a clearer advantage. Most patients represented by the CT images in this small-sample Dataset 2 are in moderate or severe condition, and the lesions include not only fuzzy GGO at the early stage but also consolidation at the later stage. Although the segmentation task in Dataset 2 is more challenging than that in Dataset 1, our proposed SSA-Net can obtain more spatially complex information from a limited dataset and performs well even when the lesions have a complex structure. The DSC, HD95 and NSD are better than those of the other networks, reaching 0.7540, 7.0464 and 0.5876, respectively. Fig. 6 shows a visual comparison of the results obtained by the different networks on the two datasets. Most of the current methods achieve reasonable results but still perform poorly on fuzzy areas and irregularly shaped COVID-19 lesions, whereas our SSA-Net effectively alleviates this problem. Specifically, the segmentation results of SSA-Net are close to the ground truth with fewer incorrectly segmented regions, especially for misty and scattered regions, which is attributed to the strengthened representation of fuzzy boundaries and irregular shapes provided by spatial convolution. Meanwhile, even with the limited data in Dataset 2, SSA-Net performs well, owing to self-attention learning, which enables the model to learn from itself and thereby further enhances its ability to express context.
Results of semi-supervised few-shot learning
From Table 6, our proposed SSA-Net shows more competitive performance than the other baseline methods. Moreover, our proposed semi-supervised few-shot model (SSA-Net(I)) outperforms the other algorithms on all evaluation metrics. By introducing the re-weighting module for class balancing and the trust module for generating more credible pseudo labels, our SSA-Net-based semi-supervised learning framework makes the most of the limited data. Compared with SSA-Net, for GGO, SSA-Net(I) boosts the performance by 5.02% in average DSC and 3.01% in NSD, and decreases the HD95 and MAE to 5.9266 and 0.0116, respectively. For consolidation, SSA-Net(I) still shows the best performance. The reason is that SSA-Net obtains a stronger receptive field and more contextual information, which helps to detect scattered and complex lesions. In addition, the training of SSA-Net is a process of continuous reinforcement of spatial information, so SSA-Net improves the self-learning ability of the network when training samples are few. Fig. 7 shows the multi-class lesion segmentation results. Because of the small training dataset, wrong segmentations are more likely, and the baseline methods indeed generate more incorrect results. In contrast, the results of SSA-Net(I) are closer to the ground truth, because we set a threshold to keep high-confidence values and drop incorrect ones. In addition, as can be observed in Fig. 7, the proportional distribution of classes in the last column shows that the categories in the dataset are unbalanced: lesions containing GGO and consolidation account for only a small proportion of the image, and most of each image is uninfected lung region. Small consolidations are quite difficult to segment correctly and also easily affect the segmentation of GGO. However, our proposed small-sample semi-supervised learning model based on SSA-Net segments lesions more accurately, even when the lesions are small or their boundaries are blurred. We can also conclude that our model obtains more correct results, which is attributed to the effect of the re-weighting module.
Conclusion and future work
In this paper, we have proposed a novel COVID-19 pneumonia lesion segmentation network called the Spatial Self-Attention network (SSA-Net), which exploits self-attention learning and spatial convolution to obtain more contextual information and improves performance on the challenging task of segmenting COVID-19 infection areas. Furthermore, we have applied SSA-Net to multi-class lesion segmentation with small-sample datasets and presented a semi-supervised few-shot learning framework, in which a re-weighting module is used to re-balance the loss of different classes and address the long-tailed distribution of training data, and a trust module is used to select high-confidence pseudo labels. Extensive experiments on public datasets have demonstrated that our proposed SSA-Net outperforms state-of-the-art medical image segmentation networks. At the same time, our semi-supervised iterative segmentation model also achieves higher performance when trained on limited data.
The proposed deep learning network can identify scattered and blurred lesions in complicated backgrounds, a situation that commonly occurs in medical images, and in the future we will apply it to other related tasks. In addition, due to the urgent nature of the COVID-19 global pandemic, it is difficult to systematically collect large datasets and annotations, especially multi-class annotations, for deep neural network training. Our few-shot multi-class semi-supervised training model improves the model only through the process of obtaining more credible labels. In the near future, we plan to design a comprehensive system to detect, segment and analyze COVID-19 pneumonia lesions automatically. Besides, we can use the initial segmentation results together with class activation maps (Zhou et al., 2016; Selvaraju et al., 2017) generated from the feature maps of the network for data augmentation.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
"year": 2022,
"sha1": "e389a39e03108a6c3c4711e57205c26b55218a95",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9027296",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "85c9659462a3d20f30b3609ce9a7f7f4277220a8",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
Tumor suppressor p53 binding protein 1 (53BP1) is involved in DNA damage-signaling pathways.
The tumor suppressor p53 binding protein 1 (53BP1) binds to the DNA-binding domain of p53 and enhances p53-mediated transcriptional activation. 53BP1 contains two breast cancer susceptibility gene 1 COOH terminus (BRCT) motifs, which are present in several proteins involved in DNA repair and/or DNA damage-signaling pathways. Thus, we investigated the potential role of 53BP1 in DNA damage-signaling pathways. Here, we report that 53BP1 becomes hyperphosphorylated and forms discrete nuclear foci in response to DNA damage. These foci colocalize at all time points with phosphorylated H2AX (gamma-H2AX), which has been previously demonstrated to localize at sites of DNA strand breaks. 53BP1 foci formation is not restricted to gamma-radiation but is also detected in response to UV radiation as well as hydroxyurea, camptothecin, etoposide, and methylmethanesulfonate treatment. Several observations suggest that 53BP1 is regulated by ataxia telangiectasia mutated (ATM) after DNA damage. First, ATM-deficient cells show no 53BP1 hyperphosphorylation and reduced 53BP1 foci formation in response to gamma-radiation compared with cells expressing wild-type ATM. Second, wortmannin treatment strongly inhibits gamma-radiation-induced hyperphosphorylation and foci formation of 53BP1. Third, 53BP1 is readily phosphorylated by ATM in vitro. Taken together, these results suggest that 53BP1 is an ATM substrate that is involved early in the DNA damage-signaling pathways in mammalian cells.
Introduction
Cells have evolved various sophisticated pathways to sense and overcome DNA damage as a mechanism to preserve the integrity of the genome. Environmental attacks like radiation or toxins, as well as spontaneous DNA lesions, trigger checkpoint activation and consequent cell cycle arrest and/or apoptosis. One key protein that coordinates DNA repair with cell cycle progression and apoptosis is the tumor suppressor protein p53. P53 is activated and posttranslationally modified in response to DNA damage (Appella and Anderson, 2000). These modifications include phosphorylation by ataxia telangiectasia mutated (ATM), 1 a protein kinase implicated in DNA damage-signaling pathways (Canman et al., 1998;Khanna et al., 1998). By transcriptionally activating genes involved in cell cycle control, DNA repair, and apoptosis, p53 participates in the maintenance of the genomic integrity after DNA damage.
P53 interacts with p53 binding protein 1 (53BP1). 53BP1 has been identified in a yeast two hybrid screen as a protein that interacts with the central DNA-binding domain of p53 (Iwabuchi et al., 1994). Similar to breast cancer susceptibility gene 1 (BRCA1; Ouchi et al., 1998; Zhang et al., 1998a; Chai et al., 1999), 53BP1 enhances p53-dependent transcription (Iwabuchi et al., 1998). Interestingly, the COOH terminus of 53BP1 contains tandem BRCA1 COOH terminus (BRCT) motifs. This motif was first identified in the COOH-terminal region of BRCA1 and has since been found in a large number of proteins involved in various aspects of cell cycle control, recombination, and DNA repair in mammals and yeast (Koonin et al., 1996; Bork et al., 1997; Callebaut and Mornon, 1997). The function of the BRCT domain is not known. However, evidence suggests that BRCT domains may mediate protein-protein interactions (Zhang et al., 1998b).
The presence of BRCT domains in 53BP1 and the reported interaction with p53 prompted us to investigate whether 53BP1 is involved in DNA damage-response pathways. Here we report that 53BP1 becomes hyperphosphorylated and rapidly relocates to the sites of DNA strand breaks in response to ionizing radiation. 53BP1 foci formation is reduced in ATM-deficient cells and can be inhibited by wortmannin in ATM wild-type cells. Moreover, radiation-induced hyperphosphorylation of 53BP1 is absent in cells treated with wortmannin, as well as in ATM-deficient cells. Taken together, these results strongly suggest that 53BP1 participates in DNA damage-signaling pathways and is regulated by ATM after γ-radiation.
Cell Culture and Treatments with DNA-damaging Agents
Cells were grown in RPMI 1640 medium supplemented with 10% fetal bovine serum at 37°C with 5% CO2. FT169A and YZ5 cells were provided by Dr. Y. Shilon (Tel Aviv University, Ramat Aviv, Israel). Cells grown on coverslips were irradiated in a JL Shepherd 137Cs radiation source at a rate of 1 Gy/min for doses of 1-5 Gy or 10 Gy/min for a dose of 10 Gy. UV light was delivered in a single pulse (50 J/m2) using a Stratalinker UV source (Stratagene). Before UV irradiation, the culture medium was removed and the medium was replaced immediately after irradiation. All cells were returned to the incubator for recovery and harvested at the indicated times. Genotoxic agents and other drugs were used at the indicated concentrations. After a 1-h exposure, the cells were harvested for immunostaining.
ATM Kinase Assay
ATM was immunoprecipitated from K562 cells using anti-ATM antibody Ab3 (Oncogene Research Products). Aliquots of the ATM-protein A Sepharose immunocomplexes were resuspended in 25 μl kinase buffer (10 mM Hepes, pH 7.4, 50 mM NaCl, 10 mM MgCl2, 10 mM MnCl2, 1 mM DTT, 10 nM ATP) and incubated for 20 min at 30°C with 10 μCi of [γ-32P]ATP and 1 μg of various affinity-purified GST fusion proteins containing different fragments of 53BP1.
53BP1 Forms Nuclear Foci in Response to Various Types of DNA Damage
Several proteins, including BRCA1 and Mre11/Rad50/Nbs1, form DNA damage-regulated, subnuclear foci in the cell. To determine whether 53BP1 participates in DNA damage-signaling pathways, we examined 53BP1 localization after various types of DNA damage using several anti-53BP1 polyclonal and monoclonal antibodies generated for this study. All antibodies specifically recognize endogenous, as well as HA-tagged, full-length 53BP1 as examined by Western blotting, immunoprecipitation, and immunostaining (data not shown). As shown in Fig. 1, 53BP1 is diffusely localized in the nuclei of normal cells, but relocates to discrete subnuclear foci structures in response to ionizing radiation (e.g., 1 Gy). These 53BP1 foci can be detected as early as 5 min after irradiation (data not shown). Higher doses of radiation (e.g., 10 Gy) lead to more but smaller 53BP1 foci (Fig. 1). The number of foci reaches a peak at approximately 30 min after radiation. Thereafter, the foci number slowly decreases, whereas the foci size increases (data not shown).
Foci formation is also observed in response to other DNA-damaging events. UV radiation induced the formation of numerous small foci, similar to that induced by 4NQO (a UV-mimetic agent) and hydroxyurea (Fig. 1). Treatment with the DNA topoisomerase I poison camptothecin or the topoisomerase II poison etoposide (VP16), which cause DNA single strand and double strand breaks, respectively, also resulted in the formation of 53BP1 foci. Similar results were obtained with the alkylating agent methylmethanesulfonate. However, cisplatin, a DNA crosslinking agent, induced only a few 53BP1 foci during the first hour after drug application, whereas the protein kinase inhibitor UCN-01 and the antimitotic agent paclitaxel (Taxol; Bristol-Meyers Squibb Co.) did not induce 53BP1 foci formation. Thus, different types of DNA damage trigger the recruitment of 53BP1 into discrete nuclear foci.
53BP1 Colocalizes with γ-H2AX in Response to DNA Damage
The time course of 53BP1 foci formation and disappearance is very similar to that recently described for phosphorylated H2AX (Rogakou et al., 1999; Paull et al., 2000). H2AX is one of the histone H2A molecules in mammalian cells and becomes rapidly phosphorylated after exposure of cells to ionizing radiation (Rogakou et al., 1999; Paull et al., 2000). Phosphorylated H2AX (γ-H2AX) appears within 1-3 min as discrete nuclear foci on sites of DNA double strand breaks (Rogakou et al., 1999). Similar to γ-H2AX (Rogakou et al., 1999), the number of 53BP1 foci showed a linear relationship with the severity of DNA damage (Fig. 1 and data not shown). As shown in Fig. 2 A, damage-induced 53BP1 foci colocalized with γ-H2AX at the various time points analyzed. The number of 53BP1 foci was identical to that of γ-H2AX throughout the course of the experiment. In addition, coimmunoprecipitation analysis revealed that 53BP1 and γ-H2AX biochemically interact after γ-radiation (Fig. 2 B). Small amounts of 53BP1 were detected in γ-H2AX immunoprecipitates prepared from irradiated HBL100 cells. In unirradiated cells, H2AX was not phosphorylated and anti-γ-H2AX antibodies did not immunoprecipitate any phosphorylated H2AX. Similarly, 53BP1 was also not present in anti-γ-H2AX immunoprecipitates prepared from unirradiated cells. These results demonstrate that 53BP1 colocalizes and interacts with γ-H2AX at the sites of DNA strand breaks after γ-radiation.
ATM Is Involved in 53BP1 Foci Formation
Several phosphatidylinositol 3-kinase (PI3K)-related kinases, including DNA-dependent protein kinase (DNA-PK), ATM, and ATM-related kinase (ATR), participate in DNA damage-responsive pathways (Smith and Jackson, 1999;Khanna, 2000). It is possible that DNA damage-induced 53BP1 foci formation may depend on one or more of these PI3K-like kinase family members.
We first examined 53BP1 foci formation in the presence or absence of DNA-PK using two derivatives of the human glioma cell line MO59 (Lees-Miller et al., 1995). No difference in the time course of 53BP1 foci appearance and disappearance was observed in these two cell lines after exposure to 1 Gy of γ-radiation (data not shown). However, comparison was hampered by the high number of 53BP1 foci in unirradiated MO59K and MO59J cells, and subtle differences might be overlooked.
We then examined whether the 53BP1 response to ionizing radiation is affected in cells lacking ATM. Immortalized ATM-deficient fibroblasts (FT169A) were compared with their isogenic derivative cells, YZ5, that have been reconstituted with wild-type ATM cDNA (Ziv et al., 1997). As shown in Fig. 3 A, although irradiation with 1 Gy resulted in a rapid formation of 53BP1 foci in the ATM-reconstituted cells (ATM+), a reduced response was observed in the cells lacking wild-type ATM (ATM−). Similar results were obtained when we compared other ATM-deficient fibroblast lines (GM03189D and GM05849C) with wild-type ATM cell lines (GM02184D and GM00637H) (Fig. 3 A). The time course of the number of 53BP1 foci per cell, as calculated from three independent experiments using YZ5 versus parental FT169A cells, is illustrated in Fig. 3 B. To further corroborate the role of ATM in 53BP1 foci formation, we pretreated HeLa cells for 30 min with wortmannin before exposure to 1 Gy of irradiation. Wortmannin is a potent inhibitor of the PI3K-related kinases, including ATM and DNA-PK (Sarkaria et al., 1998). As shown in Fig. 3 C, pretreatment with 50 μM wortmannin greatly reduced the number of 53BP1 foci evident 1 h after γ-radiation. At an even higher dose (200 μM), wortmannin completely blocked 53BP1 foci formation. These results suggest that the kinase activities of ATM or other PI3K-related kinases are required for 53BP1 foci formation.

Figure 1. 53BP1 forms nuclear foci in response to DNA damage. HeLa cells were exposed to γ-irradiation (1 and 10 Gy) or a 50 J/m2 UV pulse 1 h before immunostaining with anti-53BP1 mAb BP13. Alternatively, cells were treated for 1 h with the indicated drugs, including 2 μg/ml 4-nitroquinoline 1-oxide (4NQO).
ATM Is Required for DNA Damage-induced Hyperphosphorylation of 53BP1
Many proteins involved in DNA damage-response and/or DNA repair are phosphorylated upon DNA damage. To examine whether 53BP1 becomes phosphorylated in response to γ-radiation, K562 cells were irradiated (20 Gy) and harvested 1 h later. After immunoprecipitation using anti-53BP1 antisera, the samples were incubated for 1 h at 30°C in the presence or absence of protein phosphatase and separated on a 3-8% gradient SDS gel. Phosphatase treatment of unirradiated K562 cells revealed a faster migrating form of 53BP1 (Fig. 4 A). This indicates that 53BP1 is modified by phosphorylation in normal undamaged cells. Upon γ-radiation, 53BP1 showed an even slower mobility that was reversed by phosphatase treatment (Fig. 4 A). These results suggest that 53BP1 is phosphorylated in undamaged cells and becomes hyperphosphorylated after γ-radiation.
Figure 2. (A) 53BP1 colocalizes with γ-H2AX in response to γ-radiation. WI38 cells were coimmunostained with anti-53BP1 antibody BP13 and affinity-purified anti-γ-H2AX serum before (0 min) and at the indicated time points after exposure to 1 Gy (10-160 min). (B) Coimmunoprecipitation of 53BP1 and γ-H2AX after DNA damage. HBL100 cells were exposed to 0 or 20 Gy of γ-radiation 1 h before lysis in NETN buffer (150 mM NaCl, 1 mM EDTA, 20 mM Tris, pH 8, 0.5% NP-40) including 0.3 M NaCl. Immunoprecipitation experiments were performed using anti-γ-H2AX or anti-53BP1 antibodies. A fivefold higher amount of cell lysate was used for anti-γ-H2AX immunoprecipitation than that used for anti-53BP1 immunoprecipitation. The samples were separated on 4-15% SDS-PAGE and Western blotting was performed using either anti-γ-H2AX or anti-53BP1 antibodies as indicated.

Since 53BP1 is hyperphosphorylated after γ-radiation, we then examined whether wortmannin would affect radiation-induced 53BP1 phosphorylation. As illustrated in Fig. 4 B, there was no detectable radiation-induced 53BP1 mobility shift in wortmannin (50 μM)-pretreated cells. In contrast, the radiation-induced 53BP1 mobility shift was
readily detected in cells that had received no drug treatment before radiation. We next repeated the experiment using ATM-deficient GM03189D cells and GM02184D cells expressing wild-type ATM. Again, in ATM wild-type cells, γ-radiation induced a 53BP1 mobility shift in control, but not in wortmannin-pretreated, samples (Fig. 4 C). However, no radiation-induced 53BP1 mobility shift was observed in ATM-deficient cells, with or without wortmannin treatment (Fig. 4, C and D). Taken together, these results strongly suggest that ATM is required for 53BP1 hyperphosphorylation after γ-radiation.
53BP1 Is a Substrate of ATM In Vitro
S/TQ sites have been described as the minimal essential recognition sites for ATM (Kim et al., 1999). 53BP1 contains a total of 30 S/TQ sites, many of them clustered in the NH2-terminal region. To examine whether 53BP1 is a substrate for ATM, and to define regions that can be phosphorylated by ATM in vitro, we designed six overlapping 53BP1 GST fragments that span the entire ORF of 53BP1 and performed a standard ATM kinase assay. As shown in Fig. 4 E, the first three NH2-terminal 53BP1 fragments were phosphorylated by ATM in vitro, whereas no phosphorylation was observed in the last three COOH-terminal fragments, despite the fact that there are a total of 10 S/TQ sites within these 53BP1 fragments. These data suggest that 53BP1 is a substrate of ATM kinase.

(Figure 3 legend, continued) The difference in foci number between ATM+ and ATM− cells 10 or 20 min after irradiation was significant at P < 0.001 using a Student's t test. (C) Wortmannin inhibits γ-radiation-induced 53BP1 foci formation. HeLa cells were pretreated for 30 min with 0, 50, or 200 μM wortmannin before exposure to 1 Gy of γ-radiation. After recovery for 1 h, the control or irradiated cells were immunostained with anti-53BP1 antibodies.
Discussion
Here we report that 53BP1 participates in the early DNA damage response. Using several antibodies specifically recognizing 53BP1, we show that 53BP1 becomes hyperphosphorylated and forms nuclear foci after exposure to ionizing radiation. γ-Radiation-induced 53BP1 hyperphosphorylation and foci formation are reduced in ATM-deficient cells. Moreover, 53BP1 hyperphosphorylation, as well as foci formation, is inhibited by wortmannin, an inhibitor of the PI3K-related kinases including ATM, DNA-PK, and, to a lesser extent, ATR (Sarkaria et al., 1998). Taken together, these data suggest that ATM and other PI3K-related kinases directly phosphorylate 53BP1 and regulate its localization to the sites of DNA strand breaks.
In favor of a functional link between ATM and 53BP1, we also demonstrate that NH2-terminal fragments of 53BP1 are effectively phosphorylated by ATM in vitro. Similarly, Xia and colleagues have recently shown that Xenopus 53BP1 and an NH2-terminal fragment of human 53BP1 can be phosphorylated by ATM in vitro and in vivo (Xia et al., 2000), supporting our hypothesis that 53BP1 is a direct substrate of ATM. In contrast to our findings, Schultz et al. (2000) observed no difference in 53BP1 foci formation in ATM-deficient cells when compared with that in normal ATM wild-type cells. We also observed that 53BP1 foci still formed, albeit with slower kinetics, in cells lacking ATM, suggesting the existence of an alternative, ATM-independent pathway for the regulation of 53BP1. However, our data presented here clearly demonstrate that ATM plays a critical role in the regulation of 53BP1 hyperphosphorylation and foci formation after γ-radiation. 53BP1 rapidly colocalizes with γ-H2AX in response to ionizing radiation. H2AX is a histone H2A variant that becomes phosphorylated and forms foci at sites of DNA strand breaks after DNA damage (Rogakou et al., 1999; Paull et al., 2000). The number, as well as the appearance and disappearance, of 53BP1 foci matched almost completely with that of γ-H2AX. Moreover, 53BP1 and γ-H2AX physically interact after ionizing radiation, suggesting that 53BP1 relocates to the sites of DNA double strand breaks in response to γ-radiation. Similar to 53BP1, γ-H2AX foci formation is inhibited by wortmannin treatment (Rogakou et al., 1999; Paull et al., 2000) and is reduced in ATM-deficient cells (Rappold, I., and J. Chen, unpublished observation). It is possible that phosphorylation of H2AX may mediate the relocalization of 53BP1 to DNA strand breaks. If this is the case, ATM-dependent hyperphosphorylation of 53BP1 may be a secondary event that is not required for 53BP1 foci formation. This possibility will be examined in future studies using phosphorylation-deficient mutants of 53BP1.

(Figure 4 legend) K562 cells were exposed to 0 or 20 Gy of γ-radiation and immunoprecipitated using polyclonal anti-53BP1 antibody. Immunoprecipitates were incubated for 1 h at 30°C with 800 U phosphatase (PPase) in 100 μl incubation buffer or with incubation buffer only. The samples were separated on a 3-8% gel and immunoblotted with anti-53BP1. (B) Wortmannin inhibits 53BP1 hyperphosphorylation. K562 cells were pretreated with 50 μM wortmannin for 30 min before exposure to 1 Gy of radiation. Whole cell lysates prepared from treated and control samples were separated on a 3-8% gradient gel (30 μg protein per lane) and immunoblotted with anti-53BP1 antibodies. (C) ATM is required for the γ-radiation-induced hyperphosphorylation of 53BP1. ATM-deficient GM03189D cells or ATM wild-type GM02184D cells were treated as described in the legend to B, and 30 μg lysates were separated on a 3-8% gel before immunoblotting with anti-53BP1 antibodies. (D) All three ATM-deficient cell lines tested (FT169A, GM03189D, and GM05849C) show no hyperphosphorylation 1 h after 20 Gy. (E) 53BP1 is a substrate of ATM in vitro. Six GST fusion proteins containing overlapping 53BP1 fragments were used as substrates in an ATM in vitro kinase assay. GST protein alone and GST fusion protein containing 13 residues surrounding the serine-15 of the p53 coding sequence were used, respectively, as negative and positive controls.
Upon relocalizing to the sites of DNA damage, 53BP1 could participate in chromosome remodeling that makes DNA lesions accessible to DNA repair proteins. Alternatively, 53BP1 could be involved in the recruitment of repair proteins like BRCA1 and Rad51 to these DNA lesions. Both of these proteins colocalize with 53BP1 several hours after exposure to ionizing radiation (Rappold, I., and J. Chen, unpublished observations). In addition, BRCA1 biochemically interacts with 53BP1 after γ-radiation (Rappold, I., and J. Chen, unpublished observations). 53BP1 contains two BRCT motifs at its COOH terminus. 53BP1 BRCT motifs are closely related with those of BRCA1 and Saccharomyces cerevisiae Rad9 (scRad9) protein. Insight into the potential role of scRad9 comes from studies of its association with scRad53. ScRad53 is the homologue of mammalian Chk2 or Schizosaccharomyces pombe Cds1. After DNA damage, scRad9 is phosphorylated and this phosphorylated scRad9 associates with the forkhead homology-associated (FHA) domain of scRad53 (Sun et al., 1998; Vialard et al., 1998). Mutations in either the scRad53 FHA domain (Sun et al., 1998) or scRad9 BRCT motifs (Soulier and Lowndes, 1999) prevent scRad53 activation after DNA damage. Although the mammalian homologue of scRad9 has not been identified, a scRad9 homologue likely exists in mammals. Because of the close homology of their BRCT motifs, two candidate scRad9 homologues are BRCA1 and 53BP1. Based on yeast studies, one would predict that the activation of Chk2, the homologue of scRad53, should depend on this scRad9 homologue in mammalian cells. However, DNA damage-induced phosphorylation of Chk2 was observed in BRCA1-deficient cells (Matsuoka et al., 1998), suggesting that BRCA1 may not be the mammalian homologue of scRad9. Experiments using 53BP1-deficient cells will be performed to examine whether 53BP1 is the scRad9 homologue in mammals.
In conclusion, our data demonstrate that 53BP1 participates early in DNA damage-signaling pathways and is regulated by ATM after γ-radiation. The exact role of 53BP1 in these pathways remains to be resolved. Given the importance of these DNA damage-signaling pathways in cancer prevention, it will be interesting to examine whether 53BP1 is mutated in tumors.
"year": 2001,
"sha1": "0a84d23bfcb28d41f391b8f9799e390759bd1fa3",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/153/3/613/1297621/0010106.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "0a84d23bfcb28d41f391b8f9799e390759bd1fa3",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
211049094 | pes2o/s2orc | v3-fos-license | A new process to measure postural sway using a Kinect depth camera during a Sensory Organisation Test
Posturography provides quantitative, objective measurements of human balance and postural control for research and clinical use. However, it usually requires access to specialist equipment to measure ground reaction forces, which is not widely available in practice due to its size or cost. In this study, we propose an alternative approach to posturography. It uses the skeletal output of an inexpensive Kinect depth camera to localise the Centre of Mass (CoM) of an upright individual. We demonstrate a pipeline which is able to measure postural sway directly from CoM trajectories, obtained from tracking the relative position of three key joints. In addition, we present the results of a pilot study that compares this method of measuring postural sway to the output of a NeuroCom SMART Balance Master. 15 healthy individuals (age: 42.3 ± 20.4 yrs, height: 172 ± 11 cm, weight: 75.1 ± 14.2 kg, male = 11) completed 25 Sensory Organisation Tests (SOT) on a NeuroCom SMART Balance Master. Simultaneously, the sessions were recorded using custom software developed for this study (CoM path recorder). Postural sway was calculated from the output of both methods and the level of agreement determined, using Bland-Altman plots. Good agreement was found for eyes open tasks with a firm support; the agreement decreased as the SOT tasks became more challenging. The reasons for this discrepancy may lie in the different approaches that each method takes to calculate CoM. This discrepancy warrants further study with a larger cohort, including fall-prone individuals, cross-referenced with a marker-based system. However, this pilot study lays the foundation for the development of a portable device, which could be used to assess postural control more cost-effectively than existing equipment.
Introduction
Postural control is key to maintaining balance during everyday activities. A decline of postural control with advancing age can cause difficulties when completing physical functional tasks and increases the risk of falls [1]. By the age of 75 years, the ability to stand on one leg with eyes closed is reduced to less than 20% of the performance of young adults [2], and the amount

$$x_{\mathrm{TBCM}} = \frac{1}{M}\sum_{i} m_i x_i, \qquad y_{\mathrm{TBCM}} = \frac{1}{M}\sum_{i} m_i y_i \qquad (1)$$

where x_TBCM, y_TBCM are the coordinates of the TBCM; x_i, y_i are the coordinates of the i-th segment; m_i is the mass of the i-th segment; and M is the total body mass of the segment body model. The authors concluded that Kinect is an excellent tool for measuring TBCM.
In the current study, we demonstrate that a much simpler method of calculating CoM, first used by Leightley et al. [28], is able to achieve similar results. Leightley's method takes the Euclidean mean of three well-tracked joints (hip left, hip right, spine mid) as a good estimate of the CoM position. Previous studies [26,29] have demonstrated that the accuracy of Kinect's joint tracking is related to the angle between the Kinect and the joint. This means that ankle and foot joints are tracked very poorly. Joints which have a less steep angle to the Kinect (e.g. the hip joints) are tracked with high accuracy. Poor tracking of joints can cause issues when estimating the TBCM, an issue which Leightley's method avoids. The human skeleton can be considered as a chain of connected joints, meaning the positions of the knee, ankle, and foot joints affect the CoM position without the need to consider them directly. Thus, for an upright stance, the lengthy calculation of TBCM is not required for our application.
The aims of this study are: (1) to develop a pipeline to track CoM and calculate postural sway from the output of a Kinect camera, and (2) to compare the output of the pipeline to the SMART Balance Master.
Participants
This study was approved by the Manchester Metropolitan University Research Ethics Committee. All participants provided written informed consent. Fifteen injury-free individuals (mean ± SD age: 42.3 ± 20.4 yrs; height: 172 ± 11 cm; weight: 75.1 ± 14.2 kg; BMI: 25.3 ± 3.3 kg/m2; male = 11) took part in 346 trials during completion of the six components of the SOT used by the SMART Balance Master (NeuroCom International, USA) to assess postural sway during static and dynamic challenges. We chose a wide age range to ensure a wide range of postural sway was recorded, since postural sway is known to increase with age as part of the normal ageing process [3]. The age profile of the participants was 6 young (20-30), 5 middle-aged (31-59) and 4 older (>60).
The individual pictured in Fig 1 has given informed consent for the use of their image, as outlined in the PLOS consent form.
For this pilot study, no individuals with a history of falls were included. Also, several participants took part in more than one set of trials. This is a valid choice, as this is a study of agreement between two methods, not an investigation to identify those with balance impairment.
Procedure
The participants were simultaneously recorded using the EquiTest software that comes bundled with the SMART Balance Master and the CoM path recorder. The CoM path recorder is custom software, detailed in the section "Recording of CoM path, using CoM path recorder", which processes the output of the Kinect depth camera into a 2D CoM path. Participants performed the six components of the SOT while standing on the force plates incorporated into the Balance Master. The Balance Master was controlled and data recorded using the EquiTest software. The Kinect was controlled using the CoM path recorder. Participants wore a safety harness throughout all assessments to prevent falls. All six components of the SOT (outlined below) were carried out in accordance with the Balance Master operator instructions. The instructions require participants to stand on two legs approximately shoulder-width apart with heels aligned to markers on the force plates [30].
The six components of the SOT are as follows: (a) eyes open, platform fixed; (b) eyes closed to remove visual input; (c) eyes open with moving surround, to create sensory conflict between visual input (simulating a moving room) and vestibular inputs (a stable room); (d) eyes open and the platform support rotating freely to disrupt somatosensory and proprioceptive feedback from the feet and ankles; (e) eyes closed and the platform support rotating freely; and (f) eyes open with moving surround and the platform support rotating freely.
Two consistent trials for each condition were included in this study. Inconsistent trials and fails were excluded from further analysis. All assessments were conducted in the sequence of (a) to (f), as recommended by the operator instructions; this increases the difficulty progressively. Each trial (an instance of an individual carrying out one aspect of the SOT) was repeated twice, except if the second trial was inconsistent with the first, or was marked as a fail, in which case the participant was allowed a third attempt. A trial was marked as a fail if a participant touched the upright supports on the Balance Master frame or relied on the safety harness to maintain an upright posture for any reason.
Experimental setup
Participants stood upright on the force plates of the Balance Master, facing towards the large surround approximately 1 m away. The surround is used to create visual-vestibular conflict, but obscured the front view of the participant (Fig 1). Therefore, the Kinect was positioned to capture the rear view of the participant, 2.5 m away and at a height of 1.2 m from the floor. The distance was selected after pilot trials to confirm that people of all heights could be captured equally well while their feet were placed correctly, along the foot markers on the force plates (Fig 1).
Recording of CoM path
Recording of CoM, using SMART Balance Master. The Balance Master [30] estimates a vertical projection of the Centre of Mass (CoM) from Centre of Force (CoF) data using the method described by Morasso et al. [31]. This method assumes that the body is rigid and the CoF is mid-way between the two feet with a single pivot at the ankle (Fig 2). The vertical projection of the CoM is estimated to be 0.5527 of the person's height (represented by length c in Fig 2). The value for a is obtained by taking the CoF value from the force plates and inclining it by -2.3˚, estimated to be the average anterior lean when standing. The force plates have a sampling rate of 100 Hz.
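As a rough illustration, the inverted-pendulum mapping described above can be written out as a short calculation. The function name, the array layout, and the exact way the −2.3° average lean is applied are our assumptions for the sketch, not the EquiTest implementation.

```python
import numpy as np

def com_from_cof(cof_ap, height_m, lean_deg=-2.3, com_ratio=0.5527):
    """Estimate AP CoM displacement from CoF data under the single-pivot assumptions above."""
    c = com_ratio * height_m                          # assumed CoM height above the ankle pivot
    angle = np.arctan2(np.asarray(cof_ap, float), c)  # sway angle implied by the CoF
    angle = angle + np.deg2rad(lean_deg)              # apply the assumed average anterior lean
    return c * np.tan(angle)                          # vertical projection of the CoM (metres)

# example: 10 s of synthetic CoF data for a 1.72 m tall participant
com_ap = com_from_cof(0.01 * np.sin(np.linspace(0, 2 * np.pi, 1000)), height_m=1.72)
```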
The CoM path was recorded using the EquiTest software, bundled with the SMART Balance Master. The CoM path is plotted in two dimensions, mediolateral and anterior-posterior.
Recording of CoM path, using CoM path recorder. Kinect measures the distance from the participant to the camera in three dimensions, using the time-of-flight of an infrared beam, at a rate of 30 Hz. From this information, Kinect fits a human skeleton to a 25-joint model [32], which has very high agreement with skeletons generated from marker-based systems [26].
The CoM path recorder is custom software, written in C# using Visual Studio and the Kinect SDK 2.0. It takes a series of skeleton frames and derives a CoM path. The pipeline of the CoM path recorder is shown in Fig 3. The steps of the pipeline are as follows: 1) the ML axes of the skeletons are reversed, to take into account the rear position of the Kinect camera; 2) each skeleton frame is aligned to the first frame of the recording, making all subsequent movements relative to this initial position [33]; 3) the position of CoM is estimated, as described in the section Frame-wise calculation of CoM; and 4) the ML and AP elements of the CoM path are written to disk.
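A compact NumPy re-implementation of these four steps is sketched below; it is not the C# CoM path recorder itself, and the joint names and frame layout are illustrative.

```python
import numpy as np

# each frame is assumed to map joint names to (ML, vertical, AP) coordinates in metres
def com_path(frames, joints=("hip_left", "hip_right", "spine_mid")):
    coms = []
    for frame in frames:
        pts = np.array([frame[j] for j in joints], dtype=float)
        pts[:, 0] *= -1.0                     # 1) reverse the ML axis (camera views the rear)
        coms.append(pts.mean(axis=0))         # 3) CoM as the Euclidean mean of the 3 joints
    coms = np.asarray(coms)
    coms = coms - coms[0]                     # 2) express all frames relative to the first one
    return coms[:, [0, 2]]                    # 4) ML and AP components, ready to write to disk
```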
Frame-wise calculation of CoM. The position of CoM was calculated frame-by-frame by taking the Euclidean average of the left-hip, right-hip and mid-spine joints, as defined by Eq 2, first used by Leightley et al. [28]. This method estimates the position of CoM in three dimensions without needing to rely on the assumptions made by the Balance Master.
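From the description above, Eq 2 is presumably the arithmetic mean of the three tracked joint positions, along the lines of

$$\mathrm{CoM} = \frac{1}{3}\big(\mathbf{p}_{\text{hip left}} + \mathbf{p}_{\text{hip right}} + \mathbf{p}_{\text{spine mid}}\big), \qquad (2)$$

where each p denotes the three-dimensional joint position reported by Kinect for the current frame.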
Creation of CoM time series
As noted by Prieto et al. [34], when calculating sway, precise foot placement is difficult. This makes meaningful comparison between individuals difficult; the same can be said for the comparison of methods. A more robust approach is to calculate a time series that places the mean position of the overall movement at the origin. This is achieved by subtracting the mean position in the ML and AP direction from each step in the time series (Eq 3). The resultant time series (RD) was used for the comparison of the two methods.
$$\mathrm{RD}[n] = \sqrt{\big(\mathrm{ML}[n]-\overline{\mathrm{ML}}\big)^{2} + \big(\mathrm{AP}[n]-\overline{\mathrm{AP}}\big)^{2}} \qquad (3)$$
Calculation of sway
The resultant time series were used to calculate the RMS of sway measured by each method, using Eq 4, where RD is the time series calculated in Eq 3 and N is the number of time points in the time series:

$$\mathrm{sway}_{\mathrm{RMS}} = \sqrt{\frac{1}{N}\sum_{n=1}^{N}\mathrm{RD}[n]^{2}} \qquad (4)$$

This measure of postural sway calculates the average deviation from the mean position, assuming the participant is standing upright [34,35]. MATLAB 2019a was used to implement Eqs 3 and 4.
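As a concrete (non-MATLAB) illustration, Eqs 3 and 4 can be implemented in a few lines of NumPy; the variable names are ours.

```python
import numpy as np

def sway_rms(ml, ap):
    """RMS sway: average deviation of the CoM path from its mean position (Eqs 3-4)."""
    ml, ap = np.asarray(ml, float), np.asarray(ap, float)
    rd = np.hypot(ml - ml.mean(), ap - ap.mean())   # Eq 3: resultant distance time series
    return np.sqrt(np.mean(rd ** 2))                # Eq 4: RMS of the resultant distance

t = np.arange(0, 20, 1 / 30)                        # synthetic 30 Hz, 20 s CoM path
print(sway_rms(2.0 * np.sin(0.5 * t), 1.5 * np.cos(0.3 * t)))
```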
Data exclusions
A total of 56 recordings were removed for various reasons, as detailed in Table 1. The remaining 288 records were used in the analysis.
A priori sample size calculation
A priori sample size estimation was carried out to ensure there was enough power to detect differences between the two methods. We utilised the recordings we made while experimenting with the best position for the Kinect camera. Using the mean and standard deviation of these data, we calculated the sample size required for each trial using G*Power. The results are shown in Table 2, along with the actual sample size used for analysis.
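G*Power was used for the calculation; the same one-sample (paired-difference) t-test sample size can be reproduced in Python, for example with statsmodels, as sketched below with a placeholder effect size rather than a value taken from Table 2.

```python
from statsmodels.stats.power import TTestPower

# one-sample / paired t-test on the per-trial difference between the two methods
n = TTestPower().solve_power(effect_size=0.8, alpha=0.05, power=0.8, alternative="two-sided")
print(f"required number of paired trials: {n:.1f}")
```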
Data analysis
The main analysis used in this study was the Bland-Altman test for agreement between methods [36]. In addition, several supporting analyses were carried out. The results obtained from each method were assessed for normality using the D'Agostino-Pearson and Shapiro-Wilk methods. The differences between the two methods were normally distributed (Table 3). However, the range of values produced by each method was found to be non-normal. Bland and Altman noted that this is often the case [36]. Normality was assessed using the SciPy Python library.
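For instance, the two normality checks can be run with SciPy as follows; diffs stands for the per-trial difference in sway between the two methods and is a placeholder here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
diffs = rng.normal(0.0, 0.5, size=48)         # placeholder per-trial differences (mm)

k2, p_dagostino = stats.normaltest(diffs)     # D'Agostino-Pearson K^2 test
w, p_shapiro = stats.shapiro(diffs)           # Shapiro-Wilk test
print(p_dagostino, p_shapiro)                 # p > 0.05: no evidence against normality
```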
One-sample t-tests were used to provide a significance value for the absolute agreement between methods, i.e. a hypothesised difference of zero. The t-tests were carried out using SPSS (v. 21, IBM, US). Significance was accepted at p < 0.05.
The repeatability of each method was assessed by comparing the repeated measures. Standard deviation (SD) and coefficient of repeatability (CR) were calculated for each method (Table 4).
Bland-Altman plots were created (Fig 5), using all available data for each method, without averaging over repeated measures. Repeatability and Bland-Altman tests were carried out using the Analyse-it plugin for Excel (v. 5.40.2).
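The bias and 95% limits of agreement underlying such plots reduce to a short calculation, sketched here with illustrative variable names (Analyse-it was used for the actual figures).

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between paired measurements from two methods."""
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)
```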
Results
Postural sway measured by the proposed pipeline did not differ significantly from that measured by the Balance Master, for eyes open conditions with a firm support. All other conditions showed significant disagreement. The disagreement, expressed as bias, increased with increasing challenge.
Agreement of postural sway measurement
One-sample t-tests were carried out on the differences between sway calculated by each method. Conditions (a) quiet standing, eyes open and (c) surround moving, eyes open showed no significant difference between methods. For all other conditions, a significant difference was found. Significance was accepted with an α of 0.05 (Table 5). NB: as an alternative approach, we used the non-parametric Wilcoxon Signed Rank test (designed for non-normal data) on matched pairs of results from the two methods (which had been shown to be non-normal); it produced the same result as the one-sample t-test. Bland-Altman plots (Fig 5) were used to assess the agreement between the two methods. The agreement results are summarised in Table 5. A small disagreement (bias), of around 0.1 mm, was seen for the conditions that showed no significant difference in calculated sway (a and c). However, as the balance challenge increases, the disagreement between the two methods increases, the largest bias being 1.69 mm.
Implications of the increased disagreement
The eyes open conditions show the most similarity in repeatability. Looking at the bias between the two samples, conditions where the participant is standing on a firm surface with eyes open agree the best. However, as the balance challenge increases, either by removing vision or by perturbing balance by standing on a pivoting platform, the two results increasingly disagree.
The differences seen in these results may be explained by the fundamentally different approaches each method takes to estimate the CoM position, which are discussed in the following section.
Discussion
In this study, we propose a pipeline that is able to assess upright human postural sway. It makes use of an inexpensive and portable depth camera (Kinect V2), in combination with custom software that calculates CoM directly from skeleton joints. We also carried out a pilot study that compares the postural sway calculated from the proposed pipeline and a Balance Master, obtained during a Sensory Organisation Test (SOT).
We examined the repeatability of each method (Table 4), i.e. the agreement between repeated measures. The comparison was based on the standard deviation (SD) and coefficient of repeatability (CR) for each method. Both methods show an increase in variability with task difficulty. The SOT test uses this variability to identify balance defects. In the SOT, the ratio of sway measured in quiet standing eyes closed (b) vs quiet standing eyes open (a) is used as a measure of the reliance on the somatosensory system to balance. This is also known as the Romberg Ratio. The reliance on the visual system is given by the ratio of support moving, eyes open (d) vs quiet standing eyes open (a) (the measures with the greatest similarity in the repeatability test), and the reliance on the vestibular system is given by support moving, eyes closed (e) vs quiet standing eyes open (a). In all these assessments, quiet standing eyes open (a) is used as a baseline measure [30]. This matches the intuition that, in a given population, the ability to balance with eyes open is essential and so well-practised. However, the ability to balance well when challenged in unfamiliar ways produces a wider range of scores, seen as increasing variance.

Table 5. A summary of the agreement of postural sway derived from the two methods: Balance Master (BM) and the Proposed Pipeline (PP). The mean within each method, the mean difference between the methods (bias), the 95% Confidence Interval (CI) and Limits of Agreement (LOA), and the significance of the t-test are shown.

We further examined the agreement between the two methods using Bland-Altman plots (Fig 5) and one-sample t-tests, with a hypothesised mean difference of zero. The plots show that the mean difference between measures (bias) is smallest for the most everyday tasks (eyes open with the least challenge), but bias increases with increasing task difficulty. The t-test suggests that the two methods only agree well for eyes open conditions with a firm surface. To understand how these disagreements may occur, it is worthwhile considering two elements. 1) The way the human body reacts to quiet standing vs its reaction to perturbation. Winter, in his review on human balance [37], noted that the human body pivots about the ankle (the ankle strategy) in quiet stance and about both hip and ankle in reaction to a perturbation (the hip strategy), such as standing on a pivoting platform. The Balance Master uses a pivoting platform to induce perturbation in the tests which generated the biggest disagreement between methods (d to f); the induced perturbation causes an increase in postural sway amplitude. Black et al. [38] noted that quiet standing with eyes closed also increases postural sway amplitude, and so provokes a switch to a hip strategy in some people. In condition (b), quiet standing with eyes closed, we see an increased bias compared to condition (a), although the increase is less than for conditions d-f, where the pivoting platform induces a greater postural sway. These observations lead to the second point. 2) The way the two methods estimate CoM is quite different. The Balance Master uses the most common method of estimating CoM from force plates, the inverted pendulum model, which ignores the hip and knee joints.
In order to estimate the position of the CoM using this method, an average value for the static incline of the body and an average offset from the position of the CoF, proportional to a person's height, are used to relate the CoF to the CoM [30]. The proposed pipeline calculates CoM from the Kinect data, as described in Eq 2; its estimate of CoM relates directly to the skeletal structure. Although it uses the values of only 3 joints (left hip, right hip and spine mid), these joints do not exist in isolation. Their movements are influenced directly by the movements of other anatomical structures such as the ankle, knee and hip joints, as well as the spine, arms and head. Previous reports questioned the assumptions used routinely to estimate CoM from CoF data. For example, Cretual et al. [39] suggested the single pendulum model should be used with caution to estimate CoM during more challenging conditions. Lafond et al. [40] also found errors in this method of calculating CoM for more difficult poses, and Yeung et al. [26] demonstrated that Kinect performed better when recording more challenging balance tasks compared with force plates. Benda et al. [41] demonstrated that the accuracy of CoM estimated from CoF reduces with increased dynamics. Although the literature may go some way to explain the disagreement between the two methods, future work is warranted to empirically demonstrate the reasons for the differences. This future work should provide a three-way cross-validation between CoM measured using the proposed pipeline, a high quality marker-based system and a high quality force plate. Separately, future work should examine the potential of the proposed pipeline in the identification of individuals with balance impairments.
For now, we can say that the proposed pipeline shows no significant difference to the Balance Master when measuring sway for quiet standing, eyes open and quiet standing with a moving surround, eyes open. Quiet standing with a moving surround, eyes open is designed to assess individuals with a vestibular defect; their over-reliance on the visual system induces a substantial increase in postural sway. Since all our participants were healthy, an ankle strategy is sufficient to maintain balance for both these conditions. This study was designed as a proof of concept and shows that assessment of postural control by depth camera is worth pursuing, especially for applications where devices such as the Balance Master are too expensive or too cumbersome to be practical.
Limitations, considerations and future work
(1) Our assessments were completed in laboratory conditions. In more informal settings, there is the potential for Kinect to mistake non-human elements, such as table and chair legs, for human limbs. (2) The current study only includes healthy individuals. Future work should extend these initial findings to a larger group, including individuals who suffer from recurrent falls. (3) In this study, we used the Balance Master to automate the SOT. The Balance Master uses pivoting force plates and a pivoting surround to produce challenging balance conditions. In order to further the cause of machine-based balance assessments in informal settings, future work will need to utilise more portable means of challenging balance. These include compliant foam pads and visual conflict domes. For instance, the Clinical Test of Sensory Integration and Balance (CTSIB) [42] uses these items to replicate the SOT test, without the need for costly equipment. (4) The Balance Master's force plates are not as accurate as more modern designs. Future work should incorporate the newer plates, ideally as part of a three-way validation with a marker-based system.
Conclusion
In this study, we propose a novel pipeline to assess upright postural sway. We carried out a pilot study to compare the results of the proposed pipeline to results from a Balance Master, obtained from simultaneously testing 15 healthy individuals (age: 42.3 ± 20.4 yrs, height: 172 ± 11 cm, weight: 75.1 ± 14.2 kg, male = 11). Our initial findings suggest that the methods agree well for static assessments of balance with eyes open, but the agreement reduces under more challenging conditions. That said, the new method warrants further investigation, with a wider variety of devices and a larger cohort, including people for whom falling is an ongoing issue. | 2020-02-06T09:09:21.203Z | 2020-02-05T00:00:00.000 | {
"year": 2020,
"sha1": "632c758243be4ccd88ae9058139858fbe2d67828",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pone.0227485",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "03153e4170bb069e63c50442535a16d90a825958",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
250920222 | pes2o/s2orc | v3-fos-license | Domain Decomposition Learning Methods for Solving Elliptic Problems
With recent advancements in computer hardware and software platforms, there has been a surge of interest in solving partial differential equations with deep learning-based methods, and the integration with domain decomposition strategies has attracted considerable attention owing to its enhanced representation and parallelization capacities of the network solution. While there are already several works that substitute the subproblem solver with neural networks for overlapping Schwarz methods, the non-overlapping counterpart has not been extensively explored because of the inaccurate flux estimation at interface that would propagate errors to neighbouring subdomains and eventually hinder the convergence of outer iterations. In this study, a novel learning approach for solving elliptic boundary value problems, i.e., the compensated deep Ritz method using neural network extension operators, is proposed to enable reliable flux transmission across subdomain interfaces, thereby allowing us to construct effective learning algorithms for realizing non-overlapping domain decomposition methods (DDMs) in the presence of erroneous interface conditions. Numerical experiments on a variety of elliptic problems, including regular and irregular interfaces, low and high dimensions, two and four subdomains, and smooth and high-contrast coefficients are carried out to validate the effectiveness of our proposed algorithms.
1. Introduction. Many problems of interest in science and engineering are modeled by partial differential equations, which help us to understand and control complex systems across a broad range of real-world applications [10,38]. Unfortunately, finding the analytic solution for many problems is often difficult or even impossible, and therefore various numerical techniques such as finite difference, finite volume, and finite element methods [32,33,5] have been developed to obtain approximate solutions. Based on a discretization of the solution space obtained by dividing the computational domain into a polygon mesh, these mesh-based numerical methods are highly accurate and efficient for low-dimensional problems on regular domains. However, there are still many challenging issues to be addressed: for example, mesh generation remains complex when the boundary is geometrically complicated or dynamically changing, and the computation of high-dimensional problems is often infeasible due to the curse of dimensionality, among other issues [28]. Even as classical methods continue to be improved, these difficulties raise the need for new methods and tools to address them.
With recent advancements in computer hardware and software platforms, deep learning-based approaches [31] have emerged as an attractive alternative for solving different types of partial differential equations in both forward and inverse problems [51,14,6]. Thanks to its universal approximation capabilities [48], the use of neural networks as an ansatz to the solution function or operator mapping has achieved remarkable success in diverse disciplines. One noteworthy work is the physics-informed neural networks (PINNs) [47,30,29] that incorporate the residual of underlying equations into the training loss function, where integer-order differential operators can be directly calculated through automatic differentiation [45]. Another important work is the deep Ritz method [59], which resorts to the Ritz formulation and performs better than PINNs for problems with low-regularity solutions [7]. It is also possible to design learning tasks according to the Galerkin formulation [60], but the training process often struggles to converge due to the imbalance between generator and discriminator models. In addition, to improve the boundary condition satisfaction, several techniques including, but not limited to the deep Nitsche method [37], augmented Lagrangian relaxation [25], and auxiliary network with distance functions [40,3] have been developed. Compared to traditional numerical methods [32,5], deep learning-based approaches offer advantages of flexible and meshless implementation, strong ability to handle non-linearity and to break the curse of dimensionality [28]. However, they may exhibit poor performance when handling problems with multiscale phenomena [56,26], and the large training cost is also a major drawback that limits their use in large-scale scientific computing. To address these challenges as well as enhance the representation and parallelization capacity of network solutions, integrating deep learning with domain decomposition strategies [20,19,21,18] has attracted increasing attention in recent years.
One way is to incorporate the distributed training techniques [2], e.g., the data and module parallelization, into the original PINNs approach [26,27,23,24], where the learning task is split into multiple training sections through a non-overlapping partition of the domain and various continuity conditions are enforced on subdomain interfaces. Although this combination is quite general and parallelizable, it differs from the conventional way of splitting a partial differential equation [54,46]. Besides, the averaging of solution, flux, and residual on the interface [23,24] may be problematic for solutions with jump conditions * . On the other hand, conventional DDMs [54] can be formulated at the continuous or variational level, which also allows deep learning-based methods to be employed for solving the decomposed subproblem. As a result, the machine learning analogue of overlapping Schwarz methods have emerged recently and successfully handled many elliptic problems [37,34,42,50], however, the non-overlapping counterpart has not been systematically studied yet. A major challenge is that the local network solution is prone to returning erroneous flux prediction along the subdomain interface, which would propagate errors to neighbouring subproblems and eventually hamper the convergence of outer iterations. In other words, the low accuracy of flux estimation is a key threat to the integration of deep learning and non-overlapping DDMs, especially for those based on a direct flux exchange across subdomain interfaces, but has not been fully addressed or resolved in the existing literature.
This study mainly focuses on the benchmark Poisson's equation, which serves as a necessary prerequisite to validate the effectiveness of deep learning-based domain decomposition approaches [20,34,35,42], namely,

$$-\Delta u = f \ \ \text{in}\ \Omega, \qquad u = 0 \ \ \text{on}\ \partial\Omega, \qquad (1.1)$$

where Ω ⊂ R^d is a bounded Lipschitz domain, d ∈ N+ the dimension, and f(x) ∈ L^2(Ω) a given function. DDMs for solving problem (1.1) are typically categorized as either overlapping or non-overlapping approaches [54,46,39], a classification that can be further refined according to the information exchange between neighbouring subdomains (see Figure 1). Here, the refined categorization is adopted throughout this work to distinguish between various deep learning-based domain decomposition algorithms. Note that the trained solution of a Dirichlet subproblem using PINNs [47], deep Ritz [59], or other similar methods [28] is often found to exhibit erroneous Neumann traces on the interface (see Remark 2.1). The flux transmission between neighbouring subdomains is therefore of low accuracy, which would hinder the convergence of outer iterations. To deal with this issue, we propose a novel learning approach, i.e., the compensated deep Ritz method using neural network extension operators, that allows reliable flux transmission between neighbouring subdomains without explicitly involving the computation of the Dirichlet-to-Neumann map on subdomain interfaces. This enables us to construct effective learning approaches for realizing classical Dirichlet-Neumann, Neumann-Neumann, Dirichlet-Dirichlet, and Robin-Robin algorithms in the non-overlapping regime (see Figure 1). It is noteworthy that although the Robin-Robin algorithm only requires the exchange of Dirichlet traces [8], two additional parameters within the interface conditions need to be determined, which may lead to incorrect network solutions. The remainder of this paper is organized as follows. In section 2, we provide a brief review of classical DDMs for solving elliptic boundary value problems, as well as several deep learning approaches that can be employed as our subproblem solver. Next, following the most straightforward idea (see Figure 1), we introduce the machine learning analogue of the Robin-Robin algorithm in section 3. To realize other non-overlapping DDMs using neural networks, a detailed illustration of our compensated deep Ritz method is presented in section 4. Experimental results on a series of benchmark problems are reported in section 5 to validate the effectiveness of our methods, as well as an interface problem with high-contrast coefficients. Finally, in section 6, we conclude the paper and outline some directions for future work.

* For instance, the solution of the elliptic interface problem with high-contrast coefficients [36] lies in the Sobolev space H^{1+ε}(Ω) with ε > 0 possibly close to zero [41]; enforcing the residual continuity condition on the interface [26,24] therefore cannot be directly applied due to the lack of regularity.
2. Preliminaries.
This section provides a concise overview of classical DDMs [54,46] for solving the Poisson's equation, together with the widely used deep learning approaches [47,59] that can be adopted as our subproblem solvers.
2.1. Domain Decomposition Methods. The idea of domain decomposition for solving the Poisson's equation has a long history dating back to the 18th century [49], and there is an extensive literature on DDMs owing to the emergence and improvement of parallel computers (see [54,46,39] and references cited therein). For illustrative purposes, let us assume that the computational domain Ω ⊂ R^d is partitioned into two subdomains {Ω_i}_{i=1}^2 (see Figure 2 for example), while the case of multiple subdomains can be treated in a similar fashion. Depending on the partition strategy being employed, DDMs are usually categorized into two groups: overlapping and non-overlapping approaches, in which the decomposed subproblem is typically solved through mesh-based finite difference or finite element methods [54,46].
When using a neural network as the solution ansatz for boundary value problem (1.1), it has been observed that the trained model often agrees with the Dirichlet boundary condition but exhibits erroneous Neumann traces [9,1], which sets it apart from traditional mesh-based numerical methods [54,46]. In this regard, the classification of DDMs adopted in this paper is based on the information exchange between neighbouring subdomains rather than the partition strategy, as depicted in Figure 1. To be specific, we summarize some representative decomposition-based approaches in the literature [54], referring to the Schwarz alternating method and the Robin-Robin algorithm as SAM and RRA, respectively, in Algorithm 2.1. On the other hand, the Dirichlet-Neumann, Neumann-Neumann, and Dirichlet-Dirichlet algorithms are abbreviated as DNA, NNA, and DDA, respectively, in Algorithm 2.2†. In addition, the relaxation parameter ρ should lie in (0, ρ_max) in order to achieve convergence [44,11].

† Abbreviations SAM, RRA, DNA, NNA, and DDA are only used in Algorithms 2.1 and 2.2.
(Remark: RRA is defined in the non-overlapping regime, i.e., Γ_1 = Γ_2.) Notably, while overlapping methods with small overlap are cheap and easy to implement, this usually comes at the price of slower convergence compared with non-overlapping ones. Besides, non-overlapping DDMs are more natural and efficient in handling elliptic problems with large jumps in the coefficient [57].
2.2. Deep Learning Solvers.
As can be concluded from the previous discussion, the decomposed subproblem on each subdomain takes on the form

$$-\Delta u_i = f \ \ \text{in}\ \Omega_i, \qquad u_i = 0 \ \ \text{on}\ \partial\Omega_i\cap\partial\Omega, \qquad \mathcal{B}_i u_i = h_i \ \ \text{on}\ \Gamma, \qquad (2.1)$$

where B_i is a boundary operator on the interface that may represent the Dirichlet, Neumann, or Robin boundary condition, namely,
Dirichlet boundary condition: B_i u_i = u_i;
Neumann boundary condition: B_i u_i = ∇u_i · n_i;
Robin boundary condition: B_i u_i = ∇u_i · n_i + κ_i u_i with κ_i > 0;
while the function h_i(x) is iteratively determined along the outer iteration [54]. When deep learning-based approaches are utilized to solve (2.1), the hypothesis space of the local solution is first built using a neural network. If not otherwise stated, we shall use the fully-connected neural network of depth L ∈ N+ [22], in which the ℓ-th hidden layer receives an input x_{ℓ−1} ∈ R^{n_{ℓ−1}} from its previous layer and transforms it to T_ℓ(x_{ℓ−1}) = W_ℓ x_{ℓ−1} + b_ℓ. Here, W_ℓ ∈ R^{n_ℓ × n_{ℓ−1}} and b_ℓ ∈ R^{n_ℓ} are the weights and biases to be learned, and θ = {W_ℓ, b_ℓ}_{ℓ=1}^L denotes the collection of all trainable parameters. By choosing an activation function σ(·) for each hidden layer, the solution ansatz can then be expressed as û_i(x; θ) = T_L ∘ σ ∘ T_{L−1} ∘ ⋯ ∘ σ ∘ T_1(x), where ∘ represents the composition operator. One can also employ other network architectures, e.g., the residual neural network and its variants [16,14], for the parametrization of unknown solutions.
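A minimal PyTorch sketch of such a fully-connected ansatz is shown below; it is our illustration, not the authors' code, and the default sizes merely mirror the architecture used later in the experiments.

```python
import torch
import torch.nn as nn

class FCN(nn.Module):
    """Fully-connected network u_hat(x; theta) with `depth` hidden layers of width `width`."""
    def __init__(self, dim_in=2, width=50, depth=8, dim_out=1, act=nn.Tanh):
        super().__init__()
        layers, n_prev = [], dim_in
        for _ in range(depth):
            layers += [nn.Linear(n_prev, width), act()]   # affine map T_l followed by sigma
            n_prev = width
        layers.append(nn.Linear(n_prev, dim_out))          # final affine map T_L
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

u1 = FCN()                                  # solution ansatz on one subdomain
x = torch.rand(128, 2, requires_grad=True)
print(u1(x).shape)                          # torch.Size([128, 1])
```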
To update the trainable parameters using the celebrated backpropagation algorithm [17], various training loss functions (before applying numerical integration) have been proposed, e.g., PINNs [47] that are based on the strong formulation of (2.1), i.e.,

$$\mathcal{L}(\theta_i) = \int_{\Omega_i} \big|\Delta \hat{u}_i + f\big|^2\, dx + \beta \int_{\partial\Omega_i\cap\partial\Omega} \big|\hat{u}_i\big|^2\, ds + \beta \int_{\Gamma} \big|\mathcal{B}_i \hat{u}_i - h_i\big|^2\, ds,$$

where β > 0 is a user-defined penalty coefficient. Alternatively, the deep Ritz method [59] resorts to the Ritz formulation of (2.1), namely,

$$\mathcal{L}(\theta_i) = \int_{\Omega_i} \Big(\frac{1}{2}\big|\nabla \hat{u}_i\big|^2 - f \hat{u}_i\Big)\, dx + \beta \int_{\partial\Omega_i\cap\partial\Omega} \big|\hat{u}_i\big|^2\, ds + \mathcal{L}_{\Gamma}(\hat{u}_i),$$

where the last term L_Γ(û_i) depends on the interface condition being imposed. In addition to these two widely-used techniques, the weak adversarial network [60] is based on the Galerkin formulation of (2.1), while another series of learning tasks is designed to use separate networks to fit the interior and boundary equations respectively [40,3]. We refer the readers to [28,50,20] for a more detailed review of deep learning-based numerical methods. Notably, with the interface conditions being included as penalty terms in the training loss function and the number of interface points being small compared to that of interior ones, the trained model of (2.1) is often prone to returning an erroneous Dirichlet-to-Neumann map on the interface [9,1] (see Remark 2.1). This emerges as a key threat to the integration of deep learning and flux exchange-based DDMs but has not been fully addressed in the literature.
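To make the two formulations concrete, the following fragment sketches how the interior parts of both losses can be assembled with automatic differentiation; the boundary and interface penalties follow the same pattern and are omitted, and the code is our illustration rather than the paper's implementation.

```python
import torch

def gradients(u, x):
    return torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]

def interior_losses(model, x, f):
    """Monte Carlo interior losses for -Laplace(u) = f: strong-form (PINNs) and Ritz."""
    x = x.clone().requires_grad_(True)
    u = model(x)
    grad_u = gradients(u, x)                                   # du/dx_j, shape (N, d)
    lap_u = sum(gradients(grad_u[:, j:j + 1], x)[:, j:j + 1]   # sum of second derivatives
                for j in range(x.shape[1]))
    loss_pinns = ((lap_u + f(x)) ** 2).mean()                  # residual of -lap(u) = f
    loss_ritz = (0.5 * (grad_u ** 2).sum(1, keepdim=True) - f(x) * u).mean()
    return loss_pinns, loss_ritz
```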
Remark 2.1. To validate our statements, we first study the Dirichlet-to-Neumann map, i.e., B_1 u_1 = u_1 in (2.1), that sends boundary value data to normal derivative data through the trained network solution. Here, the PINNs approach [47] is adopted for network training, where Ω_1 = [0, 0.5] × [0, 1], Γ = {0.5} × [0, 1], and f(x, y) and h_1(x, y) are derived from the exact solution u_1(x, y) = sin(2πx)(cos(2πy) − 1). As can be seen from Table 1, the network solution using a fully-connected neural network (depth = 8, width = 50, also known as the multilayer perceptron) agrees with our true solution, and the performance can be further improved through the use of a more sophisticated network architecture [55]. However, the corresponding Neumann traces are of unsatisfactorily low accuracy, which is often unacceptable for flux transmission between neighbouring subdomains. Fortunately, the prediction of ∇u_1 using fully-connected neural networks performs well inside the subdomain Ω_1. Table 1: Trained network solutions û_1 of the Dirichlet subproblem (2.1) using different architectures (a fully-connected network and a transformer network with adaptive β [55]), together with their error profiles |u_1 − û_1| and |∂_x(u_1 − û_1)|.
Remark 2.2. Next, the Robin subproblem (2.1), i.e., B_1 u_1 = ∇u_1 · n_1 + κ_1 u_1, with the same exact solution u_1(x, y) = sin(2πx)(cos(2πy) − 1) is studied numerically. By employing the standard PINNs approach using fully-connected neural networks (depth = 8, width = 50) and β = 400, the numerical results with different values of the coefficient κ_1 are reported in Table 2. Clearly, the trained model fails to recover the true solution in the case of κ_1 = 10^4, which is due to the weight imbalance between u_1 and ∇û_1 · n_1 in the boundary penalty loss term β‖∇û_1 · n_1 + κ_1 û_1 − h_1‖²_{L²(Γ)}. In addition, this issue is inherent within the Robin boundary condition and cannot be fixed by fine-tuning the penalty coefficient β > 0.
3. Robin-Robin Algorithm using Physics-Informed Neural Networks. In addition to the deep learning analogue of overlapping Schwarz methods [34,42,35,50,53], the non-overlapping Robin-Robin algorithm [54,46] is also based on the exchange of Dirichlet traces between neighbouring subproblems (see Figure 1 or Algorithm 2.1). As the decomposition leads to simpler functions to be learned on each subdomain, the PINNs approach [47], rather than the deep Ritz method, is employed here as the subproblem solver since it is known to empirically work better for problems with smooth solutions [7]. However, a major drawback is the determination of two additional parameters, i.e., κ 1 and κ 2 within the Robin boundary conditions, which may require more outer iterations to converge or cause difficulties for the optimization process (see Remark 2.2).
For ease of illustration, we consider the case of two non-overlapping subdomains in what follows (see Figure 2 for example), where the interface conditions are invariably of the Robin type [54,46,8]. The detailed iterative process (in terms of differential operators) is presented in Algorithm 2.1, from which it can be observed that the update of interface conditions only involves the Dirichlet traces.
To realize the Robin-Robin algorithm using PINNs, the decomposed subproblem is first rewritten as an optimization problem through the residual of equations, i.e.,

$$\min_{u_i}\ \int_{\Omega_i} \big|\Delta u_i + f\big|^2\, dx + \beta \int_{D_i} \big|u_i\big|^2\, ds + \beta \int_{\Gamma} \big|\nabla u_i \cdot n_i + \kappa_i u_i - h_i\big|^2\, ds \qquad \text{for}\ i = 1, 2, \qquad (3.1)$$

where D_i = ∂Ω_i ∩ ∂Ω and the boundary and interface conditions are included as penalty terms during training. Then, by introducing the neural network parametrization û_1(x; θ_1) and û_2(x; θ_2)‡, and generating the training sample points X_{Ω_i}, X_{D_i}, and X_Γ inside each subdomain and at its boundary for i = 1 and 2, the stochastic tools [4] can be applied for fulfilling the corresponding optimization problems. Specifically, the learning tasks at the k-th outer iteration are the Monte Carlo discretizations of (3.1) with the interface data h_i replaced by its current iterate h^{[k]}, where the loss functions (not relabelled) are defined over the sampled datasets. Here, and in what follows, the sampling points are drawn uniformly at random from their corresponding domains. One can also use adaptive or adversarial sampling strategies [12,15] to reduce the training cost. To sum up, by employing PINNs as subproblem solvers, the deep learning analogue of the Robin-Robin algorithm is presented in Algorithm 3.1, where κ_1, κ_2 > 0 are two additional user-defined parameters. We can assume, without loss of generality, that κ_1 = 1 and leave the other parameter to be tuned. In fact, as the number of interface points is typically much smaller than that of the interior of the subdomains, a too large (or small) value of κ_2 may cause weight imbalance in the interface penalty term (see Remark 2.2), while a moderate value of κ_2 can guarantee convergence but at the cost of extra outer iterations. Such an imbalance issue greatly differs from the conventional finite element setting [8], and is further demonstrated through the numerical experiments in section 5. Fortunately, this problem can be tackled through the use of our compensated deep Ritz method, which is theoretically and numerically studied in the following sections.

Algorithm 3.1 Robin-Robin Algorithm using PINNs (2 Subdomains)
% Initialization
- divide the domain Ω ⊂ R^d into two non-overlapping subdomains Ω_1 and Ω_2;
- specify network structures û_1(x; θ_1) and û_2(x; θ_2) for each subproblem;
- generate Monte Carlo training samples X_Γ, X_{Ω_i}, and X_{D_i} for i = 1, 2;
% Outer Iteration Loop
Start with the initial guess h^{[0]} along the interface Γ;
for k ← 0 to K (maximum number of outer iterations) do
  while stopping criteria are not satisfied do
    % Subproblem-Solving using PINNs

‡ For notational simplicity, û_i(x; θ_i) is sometimes abbreviated as û_i.
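For illustration, the Robin interface penalty entering each PINNs subproblem, which carries the κ-dependent weight imbalance discussed above, might be coded as follows; the function name, the normal-vector handling, and the default β are ours.

```python
import torch

def robin_penalty(model, x_gamma, normal, h, kappa, beta=400.0):
    """beta * mean |grad(u).n + kappa*u - h|^2 over interface samples on Gamma."""
    x = x_gamma.clone().requires_grad_(True)
    u = model(x)
    grad_u = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    flux = (grad_u * normal).sum(dim=1, keepdim=True)      # normal derivative on Gamma
    return beta * ((flux + kappa * u - h) ** 2).mean()     # kappa*u dominates when kappa is large
```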
4. Compensated Deep Ritz Method. This section begins by studying non-overlapping DDMs that rely on a direct flux exchange across subdomain interfaces, then the Robin-Robin algorithm is revisited from a variational viewpoint.
Note that when the Dirichlet subproblem is solved using the PINNs or deep Ritz approach [47,59], it is common for the trained model to converge to a local minimizer that nearly satisfies the given Dirichlet boundary condition but with inaccurate Neumann traces. As a result, the flux transmission between neighbouring subdomains may be of low accuracy that would hinder the convergence of outer iterations. To address this issue, we propose in this section the compensated deep Ritz method that enables reliable flux transmission in the presence of erroneous interface conditions. Moreover, our proposed learning algorithm can also help with the network training when realizing the Robin-Robin algorithm with large coefficients.
4.1. Dirichlet-Neumann Learning Algorithm.
Here, we focus on the classical Dirichlet-Neumann algorithm [54,46], whose detailed iterative process (in terms of differential operators) is presented in Algorithm 2.2. To avoid the explicit computation and transmission of Dirichlet-to-Neumann maps at the interface, the variational formulation of the multidomain problem is taken into consideration. More precisely, the Galerkin formulation of problem (1.1) reads: find u ∈ H^1_0(Ω) such that

$$a(u, v) = (f, v) \quad \text{for all}\ v \in H^1_0(\Omega), \qquad (4.1)$$

where the bilinear forms are defined as a(u, v) = ∫_Ω ∇u · ∇v dx and (f, v) = ∫_Ω f v dx. Here, we consider a two-subdomain decomposition of (4.1), while similar results can be obtained for multidomain cases [54]. Let Ω_1 and Ω_2 denote a non-overlapping decomposition of the computational domain Ω ⊂ R^d, with the interface Γ = ∂Ω_1 ∩ ∂Ω_2 separating our subdomains as shown in Figure 2. Moreover, we set V_i = {v ∈ H^1(Ω_i) : v = 0 on ∂Ω_i ∩ ∂Ω}, and define the bilinear terms a_i(u_i, v_i) = ∫_{Ω_i} ∇u_i · ∇v_i dx and (f, v_i)_{Ω_i} = ∫_{Ω_i} f v_i dx for i = 1, 2. Then, Green's formula implies that (4.1) can be reformulated as a coupled problem: find u_1 ∈ V_1 and u_2 ∈ V_2 satisfying the subdomain equations together with the transmission conditions on Γ, where γ_0 v = v|_Γ indicates the restriction of v ∈ H^1(Ω_i) to the interface Γ, and R_i denotes a continuous extension operator from the trace space on Γ into V_i [46,54]. Based on the minimum total potential energy principle [10], we obtain its equivalent Ritz formulation, i.e.,

$$\underset{u_1\in V_1,\ u_2\in V_2,\ u_1|_\Gamma = u_2|_\Gamma}{\arg\min}\ \sum_{i=1}^{2}\Big(\frac{1}{2}\, a_i(u_i, u_i) - (f, u_i)_{\Omega_i}\Big).$$

Therefore, the Dirichlet-Neumann algorithm [54,46] can be written in terms of these energy functionals: given the initial guess h^{[0]} on Γ, one alternately minimizes the Dirichlet-subproblem energy on Ω_1 subject to the interface trace h^{[k]} and the Neumann-subproblem energy on Ω_2, followed by a relaxed update of the interface data on Γ, with ρ ∈ (0, ρ_max) being the acceleration parameter [11]. Notably, the flux continuity across the subdomain interface is now guaranteed without explicitly calculating and exchanging the Neumann trace of our Dirichlet subproblem.
Such a variational method also makes it possible to integrate with deep learning approaches. Next, the unknown solutions are parametrized by neural networks, where û_i(x; θ_i) denotes the solution ansatz with trainable parameters θ_i for i = 1 and 2. We note that, in contrast to the standard finite element method [54], where the approximate solution of the Neumann subproblem is locally defined and the extension operation is mesh-dependent, the neural network parametrization û is meshless and thus can extend itself to neighbouring subdomains. Therefore, we obtain a natural extension operator, i.e., the neural network extension operator, which extends the restriction of û_2(x; θ_2) on the interface Γ to the subdomain Ω_1 with zero boundary value on ∂Ω_1 ∩ ∂Ω. Here, the requirement of a homogeneous boundary condition on ∂Ω_1 ∩ ∂Ω is dealt with by introducing an additional penalty term into the loss function of our extended Neumann subproblem (4.6). In addition, as the extension function is required to be weakly differentiable and the solution of the Neumann subproblem is typically regular enough in its subdomain, the hyperbolic tangent or sigmoid activation function is preferred rather than the ReLU activation function. Accordingly, by introducing penalty terms for enforcing essential boundary conditions, the Dirichlet subproblem on Ω_1 can be formulated as (4.4), where β > 0 is the penalty coefficient. In fact, as the decomposition usually leads to simpler functions to be learned on each subdomain, the second-order derivatives can be involved during the training. As such, the residual form is then preferred to the Ritz energy (4.4), since PINNs (4.5) are empirically found to be capable of offering a more accurate estimation of ∇u_1 inside Ω_1. On the other hand, the learning task associated with our Neumann subproblem gives (4.6), which relies on the precision of ∇û_1 and therefore benefits from (4.5). Now we are ready to discretize the functional integrals (4.4, 4.5) and (4.6), where the Monte Carlo method is adopted to overcome the curse of dimensionality [43]. To be specific, the training sample points are generated uniformly at random inside each subdomain and at its boundary, i.e., X_{Ω_i} ⊂ Ω_i, X_{D_i} ⊂ D_i, and X_Γ ⊂ Γ, where D_i = ∂Ω_i ∩ ∂Ω, and N_{Ω_i}, N_{D_i}, and N_Γ represent the sample sizes of the training datasets X_{Ω_i}, X_{D_i}, and X_Γ, respectively. Consequently, by defining the corresponding empirical loss functions, the learning task associated with (4.4, 4.5) is defined as (4.7), while that of the functional integral (4.6) is given by (4.8). Although the solution of the Dirichlet subproblem is often prone to returning erroneous Neumann traces along the interface [9,1], it is evident from (4.8) that our extended Neumann subproblem can be numerically solved without involving the issue of an erroneous Dirichlet-to-Neumann map. Moreover, with the second-order differential operator being explicitly involved during the network training of the Dirichlet subproblem (4.5), the resulting solution's gradient ∇û_1 is rather accurate inside the subdomain Ω_1 (see Remark 2.1), which is highly desirable for solving our extended Neumann subproblem (4.6).
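The fragment below is our reading of how the interior part of the extended Neumann loss (4.6) can be discretized: the interface flux term is traded, via Green's formula, for a volume term over Ω_1 that only uses ∇û_1 inside the subdomain (where it is accurate), with û_2 acting as its own extension. It is a sketch under these assumptions, not the authors' implementation; the boundary penalties with coefficient β follow the earlier pattern.

```python
import torch

def grad(u, x):
    return torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]

def compensated_neumann_loss(u2, u1, x_omega2, x_omega1, f):
    """Interior terms of the extended Neumann loss: Ritz energy on Omega_2 plus a volume
    compensation term on Omega_1 replacing the interface flux of u1 (via Green's formula)."""
    x2 = x_omega2.clone().requires_grad_(True)
    v2 = u2(x2)
    ritz = (0.5 * (grad(v2, x2) ** 2).sum(1, keepdim=True) - f(x2) * v2).mean()

    x1 = x_omega1.clone().requires_grad_(True)
    w = u2(x1)                                  # u2 acts as its own extension into Omega_1
    comp = ((grad(u1(x1), x1) * grad(w, x1)).sum(1, keepdim=True) - f(x1) * w).mean()
    return ritz + comp                          # boundary penalties (beta terms) omitted
```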
In summary, our proposed Dirichlet-Neumann learning algorithm is presented in Algorithm 4.1, where the mini-batch data are not relabelled for notational simplicity and the stopping criteria can be constructed by measuring the difference between two consecutive iterations [34]. We also note that our Dirichlet-Neumann learning algorithm has sequential steps that are inherited from the original scheme [54,46], and various techniques have been developed to solve subproblems in parallel (see [39] and references cited therein).
Remark 4.1. Note that in the case of two subdomains, e.g., the red-black partition in Figure 3a, the solution û_2(x; θ_2) of our extended Neumann subproblem (4.6) is defined over the entire domain, which seems to incur an enormous cost at first glance. In fact, the extension operation (4.3) only involves subdomains that have a common interface with the underlying subproblem. Therefore, the computational domain of our extended subproblem (4.6) can be locally defined (see Figure 3b for example). Our proposed method can also be used to solve the elliptic interface problem with high-contrast coefficients [36,15,52], which is formally written as

$$-\nabla\cdot\big(c(x)\nabla u\big) = f \ \ \text{in}\ \Omega_1\cup\Omega_2, \qquad u = 0 \ \ \text{on}\ \partial\Omega,$$

where Γ = ∂Ω_1 ∩ ∂Ω_2 is an immersed interface (see Figure 2 for example), the coefficient function c(x) is piecewise constant with respect to the decomposition of the domain, and the natural jump conditions [36] are given by [u] = 0 and [c ∂u/∂n] = q on Γ.
Applying Green's formula in each subdomain and then adding them together, we obtain the Galerkin formulation: find u_1 ∈ V_1 and u_2 ∈ V_2 satisfying the coupled variational equations, where the bilinear forms are defined as a_i(u_i, v_i) = ∫_{Ω_i} c ∇u_i · ∇v_i dx for i = 1, 2. By parametrizing the solutions as neural networks, i.e., u_i(x) ≈ û_i(x; θ_i), and employing the neural network extension operator R_1 γ_0 û_2 = û_2, the learning task associated with the Dirichlet subproblem§ on Ω_1 gives (4.9), while that of the Neumann subproblem takes on the form (4.10). Accordingly, an iterative learning approach for solving the elliptic interface problem with high-contrast coefficients can be immediately constructed from (4.9) and (4.10), while a further theoretical investigation can be found in [52].
4.2. Neumann-Neumann Learning Algorithm. Similar in spirit, the compensated deep Ritz method can be applied to construct the Neumann-Neumann learning algorithm (see Figure 1). Using the same notation as before, the Neumann-Neumann scheme (see Algorithm 2.2) can be written in an equivalent Ritz formulation: given the initial guess h^{[0]} ∈ H^{1/2}_{00}(Γ), then solve, for k ≥ 0 and i = 1, 2, the corresponding Dirichlet and Neumann energy minimization problems and update the interface data on Γ, with ρ ∈ (0, ρ_max) denoting the acceleration parameter. Next, by parametrizing the unknown solutions as neural networks, that is, û_i(x; θ_i) and ψ̂_i(x; η_i) for i = 1, 2, and by employing the extension operators R_1 γ_0 ψ̂_2(x; η_2) = ψ̂_2(x; η_2) and R_2 γ_0 ψ̂_1(x; η_1) = ψ̂_1(x; η_1), the learning tasks associated with the Neumann-Neumann algorithm are given, for i = 1, 2, by the corresponding energy minimization problems over Ω_i, where β > 0 is the penalty coefficient and the training tasks associated with the Dirichlet subproblems are defined in a residual form as before. Therefore, the iterative learning approach can be constructed after applying numerical integration.

§ Here, the residual form is used instead since the solution on each subdomain can be assumed regular enough, which would result in a good approximation of ∇u_1 inside the subdomain Ω_1.
4.3. Robin-Robin Learning Algorithm.
As mentioned before, the Robin-Robin algorithm only requires the exchange of Dirichlet traces between neighbouring subproblems; however, it may suffer from the issue of weight imbalance (see Remark 2.2). More specifically, let κ_1 = 1 in what follows; then a relatively large value κ_2 ≫ κ_1 is typically required in order to achieve fast convergence along the outer iteration [8]. To alleviate the negative influence of κ_2 ≫ κ_1, our compensated deep Ritz method is a promising alternative for realizing the Robin-Robin algorithm.
Note that, in terms of differential operators, the decomposed subproblem with parameter κ_2 ≫ κ_1 = 1 in the Robin-Robin algorithm [46] can be rewritten as

$$-\Delta u_2 = f \ \ \text{in}\ \Omega_2, \qquad u_2 = 0 \ \ \text{on}\ \partial\Omega_2\cap\partial\Omega, \qquad \nabla u_2\cdot n_2 + \kappa_2 u_2 = \kappa_2 u_1 - \nabla u_1\cdot n_1 \ \ \text{on}\ \Gamma. \qquad (4.11)$$
Using the same notation as before, (4.11) is equivalent to a variational problem for u_2 ∈ V_2. Next, by using Green's formula, we arrive at another form of (4.11) that holds for any v_2 ∈ V_2. Therefore, the energy formulation of (4.11) can be obtained, which completely differs from the original PINNs approach (3.1). Next, by parametrizing the unknown solutions as neural networks, i.e., u_i(x) ≈ û_i(x; θ_i) for i = 1, 2, and by employing our neural network extension operator, the learning task associated with the second Robin problem takes on a form that removes the issue of weight imbalance within the Robin boundary condition.
5. Numerical Experiments.
To validate the effectiveness of our proposed domain decomposition learning algorithms, we conduct experiments using the Dirichlet-Neumann and Robin-Robin learning algorithms on a wide range of elliptic boundary value problems in this section. Here, the Neumann-Neumann and Dirichlet-Dirichlet learning algorithms are omitted for space considerations. For brevity, we refer to our Dirichlet-Neumann learning algorithm as DNLA (PINNs/deep Ritz), with the bracket indicating the type of deep learning method used for solving the Dirichlet subproblem. In contrast to our proposed algorithms, the existing learning approach [35] for realizing the Dirichlet-Neumann algorithm is based on a direct substitution of local solvers with PINNs, which we refer to as DN-PINNs in what follows. On the other hand, the update of interface conditions in the Robin-Robin algorithm only relies on the exchange of Dirichlet traces; however, the subproblem-solving may suffer from the issue of weight imbalance as discussed in Remark 2.2. To further investigate its influence on the convergence of outer iterations, the Robin-Robin algorithm is realized using PINNs and the compensated deep Ritz method after the empirical study of DNLA, which are referred to as RR-PINNs and RRLA (PINNs/deep Ritz), respectively, in a similar fashion.
For the practical implementation [35,28], the network architecture deployed for each subproblem is a fully-connected neural network with 8 hidden layers of 50 neurons each [13]. The hyperbolic tangent activation function is assigned to each neuron, which is differentiable and smooth enough to capture our local solutions. During training, we randomly sample N_{Ω_i} = 20k points from each interior subdomain Ω_i, N_Γ = 5k points from the interface Γ, and N_D = 5k points from each boundary ∂Ω_i \ Γ of length equal to the interface. The trained models are then evaluated on the test dataset, i.e., N_Ω = 10k points that are uniformly distributed over the entire domain, and compared with the true solution to assess their performance. The penalty coefficient is set to β = 400 and the number of mini-batches is chosen as 5 for all simulations. When executing the learning task on each subdomain, the initial learning rate of the Adam optimizer is set to 0.01, which is divided by 10 at the 600-th and 800-th epochs. The training process terminates after 1k epochs for each decomposed subproblem, and we choose the model with the minimum training loss for subsequent operations. The stopping criterion we set here is that either the relative-L2 error between two consecutive iterations is less than 0.01 or the number of outer iterations reaches 30. All experiments are implemented using PyTorch 1.8.1 and trained on an NVIDIA GeForce RTX 3090.
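The optimizer schedule described above corresponds to a PyTorch setup along the following lines; FCN and the loss terms refer to the earlier illustrative sketches, and seeds and data loaders are not specified in the text.

```python
import torch

model = FCN(dim_in=2, width=50, depth=8)          # 8 hidden layers of 50 tanh neurons (see above)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[600, 800], gamma=0.1)
beta, n_minibatches = 400.0, 5                    # penalty coefficient and mini-batches per epoch

for epoch in range(1000):                         # 1k training epochs per decomposed subproblem
    for _ in range(n_minibatches):
        optimizer.zero_grad()
        # loss = interior loss + beta * boundary/interface penalties (subproblem-specific)
        # loss.backward(); optimizer.step()
        pass
    scheduler.step()
```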
Dirichlet-Neumann Learning Algorithm.
As a representative benchmark, we consider deep learning-based approaches for realizing the non-overlapping Dirichlet-Neumann algorithm in this subsection. More precisely, a comparative study between DN-PINNs, DNLA (PINN), and DNLA (deep Ritz) is presented, with experiments conducted on a wide variety of elliptic boundary value problems to demonstrate the effectiveness and flexibility of our proposed methods.
Poisson's Equation with Simple Interface.
First, we consider the benchmark Poisson problem in two dimensions, whose true solution is given by u(x, y) = sin(2πx)(cos(2πy) − 1) and whose interface Γ = ∂Ω_1 ∩ ∂Ω_2 is a straight line segment from (0.5, 0) to (0.5, 1), as shown in Figure 4. It is noteworthy that our exact solution reaches local extrema at (0.5, 0.5), so that deviations in estimating the Neumann trace at and near the extreme point [1] can create a cascading effect in the convergence of the outer iterations, which differs from other examples that have simple gradients on the interface [35]. We first conduct experiments using the DN-PINNs approach, i.e., PINNs [47,28] are used as the numerical solver for both the Dirichlet and Neumann subproblems. The iterative solutions over the entire domain in a typical simulation are depicted in Figure 5, with the initial guess for the interface value data given by h^[0](x, y) = (2π cos(2πx) + sin(2πx))(cos(2πy) − 1) − 50xy(x − 1)(y − 1) on Γ, which remains unchanged for the other methods tested below. As the trained networks tend to provide erroneous Neumann traces on the interface even when the training loss is very small (see Remark 2.1 or [9,1]), DN-PINNs fails to converge to the correct solution of (5.1), as shown in Figure 5.
Fig. 4: From left to right: decomposition into two subdomains, true solution u(x, y), and its partial derivatives ∂_x u(x, y), ∂_y u(x, y) for the numerical example (5.1).
Such an inaccurate flux prediction would hamper the convergence of outer iterations but is perhaps inevitable in practice for problems with complex interface conditions. In fact, a straightforward replacement of the numerical solver by other learning strategies, e.g., the deep Ritz method [59], also suffers from the same issue.
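For reference, the exact solution, its x-derivative (the Neumann trace across the vertical interface x = 0.5), and the stated initial interface guess h^[0] are simple closed-form expressions; the short NumPy sketch below (illustrative, not code from the paper) evaluates them.

```python
import numpy as np

def u_exact(x, y):
    """True solution of (5.1): u(x, y) = sin(2*pi*x) * (cos(2*pi*y) - 1)."""
    return np.sin(2 * np.pi * x) * (np.cos(2 * np.pi * y) - 1.0)

def du_dx(x, y):
    """x-derivative, i.e. the Neumann trace across the interface x = 0.5."""
    return 2 * np.pi * np.cos(2 * np.pi * x) * (np.cos(2 * np.pi * y) - 1.0)

def h0(x, y):
    """Initial interface guess from the text:
    h[0](x, y) = (2*pi*cos(2*pi*x) + sin(2*pi*x)) * (cos(2*pi*y) - 1) - 50*x*y*(x - 1)*(y - 1)."""
    return ((2 * np.pi * np.cos(2 * np.pi * x) + np.sin(2 * np.pi * x))
            * (np.cos(2 * np.pi * y) - 1.0) - 50 * x * y * (x - 1) * (y - 1))

# Sample the interface Gamma = {0.5} x (0, 1).
# Note: sin(2*pi*0.5) = 0, so the exact solution vanishes along x = 0.5.
y = np.linspace(0.0, 1.0, 5)
print(u_exact(0.5, y))
print(du_dx(0.5, y))
print(h0(0.5, y))
```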
In contrast, although the Dirichlet-to-Neumann map obtained through the trained solution of the Dirichlet subproblem is usually of unacceptably low accuracy, our proposed method does not need to explicitly enforce the flux continuity along subdomain interfaces, thereby enabling convergence even in the presence of erroneous interface conditions (see Figure 8). To validate these statements, we show in Figure 6 and Figure 7 the numerical results using Algorithm 4.1, where PINNs [47] and the deep Ritz method [59], respectively, are employed to solve the Dirichlet subproblem.
As can be observed from Figure 6 and Figure 7, the predicted solution using our proposed learning algorithms is in agreement with the true solution, while the Neumann traces shown in Figure 8 indicate that the network solution of the Dirichlet subproblem learns to fit the given Dirichlet boundary condition but returns erroneous Neumann traces. More quantitatively, we run the simulations 5 times to calculate the relative-L2 errors, and the results (mean value ± standard deviation) are reported in Table 3. By employing our proposed compensated deep Ritz method for solving the Neumann subproblem, it can be observed that our learning algorithms work reasonably well, while DN-PINNs typically diverges due to the lack of accurate flux transmission across the interface. Moreover, as the solution of (5.1) is rather smooth on each subdomain, it can be found in Table 3 that DNLA (PINNs) performs better than DNLA (deep Ritz). This is because second-order derivatives are explicitly involved during the training process, leading to better estimates of the solution's gradient inside the subdomain (see Figure 8). Moreover, by employing DNLA (PINNs) for solving (5.1), we report in Table 4 the relative-L2 error and the corresponding number of outer iterations for different architectures; the recovered entries range from 0.0316 to 0.0860 in relative-L2 error, each reached within 8 or 9 outer iterations. This indicates that the number of outer iterations required to achieve a comparable accuracy remains approximately constant as the width and depth of the network vary across a certain range of values. When the network becomes much deeper, such an observation may no longer be valid due to the vanishing gradient problem.
Poisson's Equation with Zigzag Interface.
To demonstrate the advantage of the mesh-free property over traditional mesh-based numerical methods [54], we consider the previous example but with a more complex interface geometry, where the exact solution is u(x, y) = sin(2πy)(cos(2πx) − 1) and the interface is a curved zigzag line as depicted in Figure 2. More precisely, the zigzag function is defined in terms of the coefficients a = 0.05(−1 + 2 × mod(floor(20y), 2)), b = −0.05 × mod(floor(20x), 2) and c = −2 × mod(floor(10x), 2) + 1, which enables sample generation inside each subdomain and on its boundary, as sketched below. Our proposed learning algorithm can easily handle such irregular boundary shapes, while finite difference or finite element methods [5] require careful treatment of edges and corners.
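The coefficient functions of the zigzag interface can be evaluated directly from the expressions above; the sketch below is an illustrative reconstruction only, since the displayed formula combining a, b, and c into the interface curve is not recoverable from this text.

```python
import numpy as np

def zigzag_coefficients(x, y):
    """Coefficients of the zigzag interface quoted in the text:
       a = 0.05 * (-1 + 2 * mod(floor(20*y), 2)),
       b = -0.05 * mod(floor(20*x), 2),
       c = -2 * mod(floor(10*x), 2) + 1.
    How a, b, c combine into the interface curve is given by a displayed formula
    not reproduced here, so only the coefficients themselves are evaluated."""
    a = 0.05 * (-1.0 + 2.0 * (np.floor(20.0 * y) % 2))
    b = -0.05 * (np.floor(20.0 * x) % 2)
    c = -2.0 * (np.floor(10.0 * x) % 2) + 1.0
    return a, b, c

xs = np.linspace(0.0, 1.0, 6)
ys = np.linspace(0.0, 1.0, 6)
for x, y in zip(xs, ys):
    print(f"(x, y) = ({x:.2f}, {y:.2f}) -> a, b, c = {zigzag_coefficients(x, y)}")
```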
(b) ∂_x û^[9]_1, ∂_y û^[9]_1 and error profiles |∂_x(û^[9]_1 − u_1)|, |∂_y(û^[9]_1 − u_1)| using DNLA (deep Ritz).
As before, the network solution of the Dirichlet subproblem is prone to return erroneous Neumann traces at the interface. In contrast, by solving the Neumann subproblem through our compensated deep Ritz method, the numerical results in Figure 9 demonstrate that our DNLA (PINNs) can obtain a satisfactory approximation to the exact solution of (5.2), which also avoids the meshing procedure that is often challenging for problems with complex interfaces. Importantly, DNLA (PINNs) remains effective in the presence of inaccurate flux estimations (see Figure 10), making it highly desirable in practice since the erroneous Dirichlet-to-Neumann map always occurs to some extent.
However, when the deep Ritz method [59] is used to solve the Dirichlet subproblem, the accuracy of the approximate gradients within the subdomain Ω_1 is no longer comparable to that of PINNs [7]. This situation can become even worse for irregular domains (see Figure 9 and Figure 10). To further validate our claims, we present in Table 5 the quantitative results from 5 runs, which reveal that DNLA (PINNs) outperforms DN-PINNs and DNLA (deep Ritz) in terms of accuracy.
Poisson's Equation with Four Subdomains.
Next, we consider a Poisson problem that is divided into four subproblems in two dimensions, where u(x, y) = sin(2πx) sin(8πy) and f(x, y) = 65π² sin(2πx) sin(8πy). Here, the domain is decomposed using the red-black partition [54], and the subdomains are grouped into two sets [54] as depicted in Figure 2. Then, the deep learning-based algorithms are deployed, with the initial guess at the interface chosen as h^[0](x, y) = u(x, y) − 100x(x − 1)y(y − 1) in what follows. Due to the high frequency of the exact solution, the number of epochs here is 5k and the initial learning rate is 0.001, which decays at the 2k-th and 4k-th epochs.
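The red-black partition mentioned here simply groups the subdomains into two sets so that no two subdomains in the same set share an interface, which lets all subproblems within one set be solved concurrently in each half of an outer iteration. The small sketch below assumes a regular 2 × 2 block decomposition of the unit square; this layout matches the four-subdomain setting but is an assumption about the exact partition in Figure 2.

```python
# Red-black grouping of a 2 x 2 block decomposition of (0, 1)^2 (assumed layout):
# a subdomain with block indices (i, j) is "red" if (i + j) is even, "black" otherwise,
# so interface-sharing neighbours always land in different sets and each set can be
# solved independently within one half of an outer iteration.
subdomains = {(i, j): ((0.5 * i, 0.5 * (i + 1)), (0.5 * j, 0.5 * (j + 1)))
              for i in range(2) for j in range(2)}

red = [key for key in subdomains if sum(key) % 2 == 0]
black = [key for key in subdomains if sum(key) % 2 == 1]
print("red set:  ", red)    # e.g. [(0, 0), (1, 1)]
print("black set:", black)  # e.g. [(0, 1), (1, 0)]
```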
For problem (5.3) with non-trivial flux functions along the interface, it is not guaranteed that iterative solutions using DN-PINNs will converge to the true solution, due to the issue of the erroneous Dirichlet-to-Neumann map (see supplementary materials). However, even though the inaccurate flux prediction on subdomain interfaces remains unresolved when using our methods (see Figure 12), the compensated deep Ritz method has enabled the Neumann subproblem to be solved with acceptable accuracy. Moreover, we execute the simulation for 5 runs and report the statistical results in Table 6 to further demonstrate that DNLA (PINNs) can outperform the other methods in terms of accuracy. Notably, as neural networks often fit functions from low to high frequency during the training process [58], the relative-L2 errors for problem (5.3) are larger than in the previous examples and can be further reduced using more sophisticated network architectures [58].
Poisson's Equation in High Dimension.
As is well known, another key and desirable advantage of using deep learning solvers is that they can tackle difficulties induced by the curse of dimensionality. To this end, we consider a Poisson problem in higher dimensions.
(a) ∂_x û^[3]_R, ∂_y û^[3]_R and error profiles |∂_x(û^[3]_R − u_R)|, |∂_y(û^[3]_R − u_R)| using DNLA (PINNs).
High-Contrast Elliptic Equation.
Note that, as mentioned in Remark 4.2, our proposed Dirichlet-Neumann learning algorithm can also be used to solve the more challenging interface problem with high-contrast coefficients. As such, we consider an elliptic interface problem in two dimensions, (5.5) −∇ · (c(x, y)∇u(x, y)) = 32π² sin(4πx) cos(4πy) in Ω = (0, 1)², u(x, y) = 0 on ∂Ω, where the computational domain is decomposed into four isolated subdomains as shown in Figure 3b, the exact solution is given by u(x, y) = sin(4πx) sin(4πy)/c(x, y), and the coefficient c(x, y) is piecewise constant with respect to the partition of the domain. Here, we choose h^[0] = 100 cos(100πx) cos(100πy) + 100xy as the initial guess, and the numerical results using DNLA are depicted in Figure 13. Clearly, our method can facilitate the convergence of outer iterations in the presence of erroneous flux estimations (see supplementary materials for more details).
Robin-Robin Learning Algorithm.
We now turn to the Robin-Robin learning algorithms, revisiting the benchmark problem (5.1), where the exact solution is u(x, y) = sin(2πx)(cos(2πy) − 1) and the interface Γ = ∂Ω_1 ∩ ∂Ω_2 is a straight line segment from (0.5, 0) to (0.5, 1), as depicted in Figure 4. By choosing (κ_1, κ_2) = (1, 0.01), the computational results using RR-PINNs, i.e., Algorithm 3.1, in a typical simulation are depicted in Figure 14; the method can converge to the true solution but requires extra outer iterations when compared to the DNLA (PINNs) or DNLA (deep Ritz) approach (see Figure 6 or Figure 7).
To accelerate the convergence of outer iterations, we set (κ_1, κ_2) = (1, 1000) in the following experiments. Unfortunately, the RR-PINNs approach suffers from the issue of weight imbalance and therefore fails to work (see Figure 15). On the contrary, our compensated deep Ritz method (see Figure 16) can handle the issue of weight imbalance and converge effectively, requiring only the replacement of the second subproblem solver with our proposed learning approach.
6. Conclusion. In this paper, a systematic study is presented for realizing classical non-overlapping DDMs through the use of artificial neural networks, based on the information exchange between neighbouring subproblems rather than on domain partition strategies. For methods that rely on a direct flux exchange across subdomain interfaces, a key difficulty of deploying deep learning approaches as decomposed subproblem solvers is the erroneous Dirichlet-to-Neumann map, which always occurs to a greater or lesser extent in practice. To deal with the inaccurate flux estimation at the interface, we develop a novel learning approach, i.e., the compensated deep Ritz method using neural network extension operators, to enable reliable flux transmission in the presence of erroneous interface conditions. As an immediate result, it allows us to construct effective learning approaches for realizing the classical Dirichlet-Neumann, Neumann-Neumann, and Dirichlet-Dirichlet algorithms. On the other hand, the Robin-Robin algorithm, which only requires the exchange of Dirichlet traces but may suffer from the issue of weight imbalance, can also benefit from our compensated deep Ritz method. Finally, we conduct numerical experiments on a series of elliptic boundary value problems to demonstrate the effectiveness of our proposed learning algorithms. Possible future explorations would involve coarse space acceleration [42], adaptive sampling techniques [15], efficient parallel iteration, and improvements of the network architecture that could potentially further accelerate the convergence at a reduced cost.
7. Acknowledgement. We are grateful to the anonymous reviewers for their valuable feedback, which helped us improve the manuscript. This research was conducted using computational resources and services at the HPC center, School of Mathematical Sciences, Tongji University.
The iterative solutions using the DN-PINNs scheme in a typical simulation are depicted in Figure SM1, which fails to converge to the exact solution. Our proposed DNLM (PINN) can facilitate the convergence of the outer iteration in the presence of interface overfitting (see Figure SM2). The numerical results using DN-PINNs and DNLM (PINN) are depicted in Figure SM4 and Figure SM5, which imply that our proposed method can converge to the exact solution while the DN-PINNs scheme fails.
Fig. SM6: From left to right: decomposition of the domain into two subregions, exact solution u(x, y) and its partial derivatives ∂_x u(x, y), ∂_y u(x, y) for example (5.5).
The numerical results using DN-PINNs and DNLM (PINN) are depicted in Figure SM7 and Figure SM8, which imply that our proposed method can converge to the exact solution while the DN-PINNs scheme fails.
(a) Derivatives ∂_x û^[7]_R, ∂_y û^[7]_R and errors |∂_x û^[7]_R − ∂_x u_R|, |∂_y û^[7]_R − ∂_y u_R| using DNLM (PINN). | 2022-07-22T06:42:43.290Z | 2022-07-21T00:00:00.000 | {
"year": 2022,
"sha1": "9955add869be05c86e55e450a36fb2b0becf091d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9955add869be05c86e55e450a36fb2b0becf091d",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
258447152 | pes2o/s2orc | v3-fos-license | Novel immune cell infiltration-related biomarkers in atherosclerosis diagnosis
Background Immune cell infiltration (ICI) has a close relationship with the progression of atherosclerosis (AS). Therefore, the current study aimed to explore the role of genes related to ICI and to investigate potential mechanisms in AS. Methods Single-sample gene set enrichment analysis (ssGSEA) was applied to explore immune infiltration in AS and controls. Genes related to immune infiltration were mined by weighted gene co-expression network analysis (WGCNA). The functions of those genes were analyzed by enrichment analyses of the Kyoto Encyclopedia of Genes and Genomes (KEGG) and Gene Ontology (GO). The interactions among those genes were visualized in a protein-protein interaction (PPI) network, followed by identification of hub genes through Cytoscape software. A receiver operating characteristic (ROC) plot was generated to assess the performance of hub genes in AS diagnosis. The expressions of hub genes were measured by reverse transcription quantitative real-time PCR (RT-qPCR) in human leukemia monocytic cell line (THP-1)-derived foam cells and macrophages, which mimic AS and control, respectively. Results We observed that the proportions of 27 immune cells were significantly elevated in AS. Subsequent integrative analyses of differential expression and WGCNA identified 99 immune cell-related differentially expressed genes (DEGs) between AS and control. Those DEGs were associated with tryptophan metabolism and extracellular matrix (ECM)-related functions. Moreover, by constructing the PPI network, we found 11 hub immune cell-related genes in AS. The expression pattern and ROC analyses in two independent datasets showed that calsequestrin 2 (CASQ2), nexilin F-actin binding protein (NEXN), matrix metallopeptidase 12 (MMP12), C-X-C motif chemokine ligand 10 (CXCL10), phospholamban (PLN), heme oxygenase 1 (HMOX1), ryanodine receptor 2 (RYR2), chitinase 3 like 1 (CHI3L1), matrix metallopeptidase 9 (MMP9) and actin alpha cardiac muscle 1 (ACTC1) had good performance in distinguishing AS from control samples. Furthermore, those biomarkers were shown to be correlated with angiogenesis and immune checkpoints. In addition, we found 239 miRNAs and 47 transcription factors (TFs), which may target those biomarkers and regulate their expressions. Finally, we found that the RT-qPCR results were consistent with the sequencing results.
INTRODUCTION
Atherosclerosis (AS) is the underlying cause of major adverse cardio-and cerebro-vascular events, such as stroke, peripheral artery disease and coronary artery disease, contributing to disability statistics and global death (Nong et al., 2022;Libby, 2021). The main pathogenic causes of AS include low-density lipoprotein (LDL) particle deposition in large-and medium-sized arteries, emigration of immune cells through damaged endothelial cells and the development of lipid plaques (Malekmohammad, Bezsonov & Rafieian-Kopaei, 2021). In addition, the interaction between lipid metabolism and immune response is also responsible for AS progression (Schaftenaar et al., 2016). Moreover, recent works have shown that anti-inflammatory interventions may be promising in the treatment of AS (Libby, 2021). Thus, identification of biomarkers related to immune and inflammation may provide theoretical and clinical guidance in AS prevention and treatment.
Immune cells are major players in immune system to mediate inflammation, and immune cell infiltration within vessel walls has close relationship with AS progression. By integrative analyses of CyTOF, CITE-seq and scRNA-seq, Fernandez et al. (2019) found multiple immune cell subpopulations, such as macrophages, monocytes and NK cells, in plaque and blood samples from AS patients. Furthermore, they found distinct features of T cells and macrophages in plaque samples with clinically symptomatic disease compared to asymptomatic disease (Fernandez et al., 2019). In mice, dendritic cells regulate T cell activation and adaptive immune responses to modulate atherogenesis (Subramanian et al., 2013;Daissormont et al., 2011). Using bioinformatics, Wang et al. (2022), Xia et al. (2021) and Xu, Chen & Yang (2022) found the proportions of immune cells were remarkably different between AS and control samples. However, so far to our knowledge, the role of genes related to immune infiltration in AS remains poorly understood. Therefore, the current study is designed to give a more comprehensive mining of genes related to immune infiltration and evaluate their diagnostic potential in AS by bioinformatic strategies and in vitro validation. We hope our findings could facilitate the diagnosis and treatment for AS patients from immunological perspective.
Data source
In the current study, gene expression data from 32 AS plaque samples at stage IV and/or V lesions including core and shoulders of the plaque and 32 distant macroscopically intact control samples in GSE43292 (Ayari & Bricca, 2013) were used as the testing set to find diagnostic AS biomarkers. The demographic data of GSE43292 cohort have been reported by Ayari & Bricca (2013) in a previous literature. In addition, 29 atherosclerotic carotid artery samples and 12 healthy artery samples in GSE100927 (Steenman et al., 2018) were used as an external set to validate the expression patterns and diagnostic value of biomarkers identified in GSE43292. GSE43292 and GSE100927 were sourced from GEO database (http://www.ncbi.nlm.nih.gov/geo). The workflow of the current study was presented in Fig. 1.
Exploration of differentially expressed genes (DEGs) in GSE43292
The limma package in R was applied to mine DEGs using |logFC (fold change)| ≥ 1 and adjusted p-value < 0.05. The "ggplot2" package in R was used to create the volcano plot, and the heatmap was produced using "pheatmap" in R.
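The differential-expression testing itself is performed with limma in R; purely as an illustration of the thresholding applied to its output (|logFC| ≥ 1 and adjusted p < 0.05), a minimal Python sketch on a hypothetical results table is shown below (column names follow limma's topTable convention; the gene names and values are made up).

```python
import pandas as pd

# Hypothetical limma output: one row per gene with "logFC" and "adj.P.Val" columns.
results = pd.DataFrame(
    {"logFC": [1.8, -1.2, 0.4, -0.9], "adj.P.Val": [0.001, 0.03, 0.2, 0.01]},
    index=["MMP9", "CASQ2", "GENE3", "GENE4"],
)

degs = results[(results["logFC"].abs() >= 1) & (results["adj.P.Val"] < 0.05)]
up = degs[degs["logFC"] > 0]      # elevated in AS
down = degs[degs["logFC"] < 0]    # reduced in AS
print(f"{len(up)} up-regulated and {len(down)} down-regulated DEGs")
```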
Identification of differentially infiltrated immune cells (DIICs) between AS and control
The proportions of IICs in AS and control samples were evaluated using the ssGSEA algorithm. This process was completed using the "GSVA" R package. The immune cells exhibiting significant differences between AS and control were identified using the Wilcoxon method with a Benjamini & Hochberg adjusted p-value < 0.05.
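As a sketch of the group comparison step only (the GSVA/ssGSEA scoring itself was run in R), the two-group test with Benjamini-Hochberg correction can be expressed as follows; the rank-sum (Mann-Whitney) form of the Wilcoxon test is assumed here because AS and control samples are independent groups, and the scores are simulated.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Hypothetical ssGSEA scores: rows = immune cell types, columns = samples.
cell_types = ["Activated CD8 T cell", "Macrophage", "Type 2 T helper cell"]
as_scores = rng.normal(0.6, 0.1, size=(3, 32))     # 32 AS samples
ctrl_scores = rng.normal(0.5, 0.1, size=(3, 32))   # 32 control samples

pvals = [mannwhitneyu(a, c, alternative="two-sided").pvalue
         for a, c in zip(as_scores, ctrl_scores)]
reject, adj_pvals, _, _ = multipletests(pvals, method="fdr_bh")
for ct, p, adj, sig in zip(cell_types, pvals, adj_pvals, reject):
    print(f"{ct}: p = {p:.3g}, BH-adjusted p = {adj:.3g}, significant = {sig}")
```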
Weighted co-expression network analysis
WGCNA was performed (Horvath & Dong, 2008) on all samples in the training set in order to screen the gene modules most associated with DIICs. To remove outliers, a hierarchical clustering tree for all samples was constructed, followed by the selection of the optimal β value to build the scale-free network. Next, Pearson's correlations between DIICs and gene modules were determined and presented in a heatmap. Finally, the modules most positively and most negatively correlated with DIICs were selected as key modules, and the key modular genes were used for the following analysis.
Mining and analysis of DEGs related to DIICs
To obtain DEGs related to DIICs, we overlapped the DEGs with the key modular genes identified in WGCNA and presented the overlap in a Venn diagram. Furthermore, KEGG pathway and GO analyses, the latter comprising molecular function (MF), cellular component (CC) and biological process (BP) categories, were performed with "clusterProfiler" in R (Yu et al., 2012).
Identification of hub DIIC-related DEGs in AS
To identify hub DIIC-related DEGs in AS, a protein-protein interaction (PPI) network was first developed by uploading the DIIC-related DEGs into the STRING database (http://string-db.org) (Szklarczyk et al., 2015). Then, Cytoscape software was applied to identify the core network using the MCODE plug-in (Shannon et al., 2003). Genes in the core network were defined as hub DIIC-related DEGs in AS. Next, receiver operating characteristic (ROC) curves were plotted to assess the role of hub genes in AS diagnosis. If a hub gene had an area under the ROC curve (AUC) > 0.7, it was considered a potential marker for AS diagnosis. In addition, the expressions and diagnostic potential of the hub genes were validated in GSE100927, and the hub genes with consistent expression patterns and diagnostic potential were taken forward into the next analyses.
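The per-gene diagnostic assessment reduces to computing an ROC curve with each hub gene's expression treated as a score separating AS from control samples; a minimal scikit-learn sketch with simulated data (not the study's data) is shown below.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
labels = np.array([1] * 32 + [0] * 32)                    # 1 = AS plaque, 0 = control
expression = np.concatenate([rng.normal(7.5, 1.0, 32),     # hypothetical hub-gene values
                             rng.normal(6.0, 1.0, 32)])

auc = roc_auc_score(labels, expression)
# Down-regulated genes separate in the opposite direction, so take max(auc, 1 - auc).
auc = max(auc, 1.0 - auc)
print(f"AUC = {auc:.3f}; considered a candidate diagnostic marker if AUC > 0.7")
```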
Characteristics and regulatory network of hub genes
To investigate the characteristics of hub genes and their relationships, we (1) analyzed their functional similarity by R package "GOSemsim", (2) performed Spearman correlation analysis to determine whether their expressions were correlated, and (3) calculated angiogenesis and immune checkpoint scores using ssGSEA algorithm, followed by the calculation of correlations between hub gene expressions and angiogenesis/immune checkpoint scores. Furthermore, hub genes were imported into the miRNet database (https://www.mirnet.ca/) to search for miRNAs and transcription factors (TFs) that target and regulate the expressions of hub genes. Finally, the regulatory networks of miRNAs-hub genes and TFs-hub genes were developed using Cytoscape software.
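The correlation steps in (2) and (3) are straightforward Spearman calculations between each hub gene's expression vector and the per-sample ssGSEA scores; a short SciPy sketch with simulated data (illustrative only, not the study's values) is given below.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_samples = 64
hub_expr = rng.normal(size=n_samples)                  # one hub gene across samples
angiogenesis_score = 0.5 * hub_expr + rng.normal(scale=0.5, size=n_samples)

rho, pval = spearmanr(hub_expr, angiogenesis_score)
print(f"Spearman rho = {rho:.2f}, p = {pval:.3g}")
```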
Cell culture and RT-qPCR
To validate the hub genes' expressions, we purchased the THP-1 cell line from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China, CAS No: KG224). Cell culture was performed in RPMI-1640 medium (Sigma-Aldrich, St. Louis, MO, USA) containing 10% fetal bovine serum (FBS) in T25 flasks. At 37 °C and 5% CO₂, THP-1 monocytes were induced to differentiate into macrophages by adding PMA at 100 ng/ml to the media for 48 h. Subsequently, the macrophages were incubated with 80 µg/ml ox-LDL for 24 h to transform them into foam cells. Total RNA was extracted from foam cells and macrophages, respectively, using TRIzol Reagent (Thermo Fisher Scientific, Waltham, MA, USA). The purity and concentration of the extracted RNA were measured before cDNA synthesis with the SureScript First-strand cDNA Synthesis Kit (Servicebio, Guangzhou, China).
Next, the cDNA was used for qPCR over 40 cycles (95 °C for 60 s, 95 °C for 20 s, 55 °C for 20 s and 72 °C for 30 s) using a qPCR reagent from Servicebio (Wuhan, China). Gene expression was analyzed by the 2^−ΔΔCt method. The primers for HMOX1, CHI3L1 and MMP9 are listed in Table 1.
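The 2^−ΔΔCt calculation referenced above is the standard relative-quantification formula; the small sketch below uses made-up Ct values, and the reference gene (e.g. GAPDH) is an assumption, since the housekeeping gene is not named in this excerpt.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method:
    dCt = Ct(target) - Ct(reference); ddCt = dCt(treated) - dCt(control)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Made-up Ct values: MMP9 in foam cells (treated) vs. macrophages (control),
# normalized to a reference gene (e.g. GAPDH; an assumption, not stated above).
fc = fold_change_ddct(ct_target_treated=22.1, ct_ref_treated=17.8,
                      ct_target_control=25.4, ct_ref_control=17.9)
print(f"Fold change (foam cells vs. macrophages) = {fc:.2f}")   # ~9.2-fold up-regulation
```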
DEGs and 27 DIICs were identified between AS and control
We discovered that 49 and 53 genes were significantly elevated and reduced, respectively, in AS compared with controls in GSE43292 (Fig. 2A). The expressions of the top 40 DEGs are presented in the heatmap (Fig. 2B). Meanwhile, the ssGSEA algorithm was applied to estimate the infiltration of 28 ICs in controls and AS, as shown in Fig. 2C. After the Wilcoxon test, we observed that the average proportions of the 28 ICs were much higher in the AS group, with significant differences between AS and control for all immune cell types except type 2 T helper cells (Fig. 2D).
DIIC-related DEGs were screened by WGCNA
Next, we performed WGCNA to obtain DIIC-correlated modules. The sample clustering tree identified no outlier in the training set (Fig. 3A), followed by the generation of the sample dendrogram and trait heatmap (Fig. 3B). Afterwards, 10 was selected as the optimal soft threshold to develop the scale-free network (Fig. 3C). Finally, eight gene modules, including yellow, blue, gold, black, pink, blueviolet, turquoise and brown, were obtained (Fig. 3D). The correlations between DIICs and modules were calculated and presented in the heatmap (Fig. 3E). Accordingly, we selected the modules most negatively (brown) and most positively (pink) correlated with DIICs. Finally, a total of 4,568 genes in the pink and brown modules were screened as DIIC-related genes in AS. Thereafter, by overlapping the 102 DEGs with the 4,568 DIIC-related genes, we identified 99 DIIC-related DEGs (Fig. 3F). By GO and KEGG pathway analyses, we identified 24 significantly enriched KEGG pathways, the top 10 of which were tryptophan metabolism, cAMP signaling pathway, long-term potentiation, PPAR signaling pathway, diabetic cardiomyopathy, complement and coagulation cascades, circadian entrainment, ECM-receptor interaction, Malaria and long-term depression (Figs. 4A and 4B). As for the GO analysis, 52 BP, 11 MF and 21 CC terms were significantly enriched. As shown in Figs. 4C and 4D, the top 10 GO terms, such as tryptophan metabolic process, extracellular matrix disassembly and aromatic amino acid family metabolic process, and the corresponding DIIC-related DEGs involved in those GO terms, such as FABP4, MMP9 and CXCL10, are presented. The interactions among the DIIC-related DEGs were explored in the STRING database (Fig. 5A) and visualized with Cytoscape software (Fig. 5B). We then screened the core network from the PPI network using the MCODE plug-in (Fig. 5C). A total of 11 DIIC-related DEGs, including CASQ2, NEXN, MMP12, CXCL10, PLN, HMOX1, SELE, RYR2, CHI3L1, MMP9 and ACTC1, were extracted from the core network (Fig. 5D). Among them, the expressions of MMP9, MMP12, HMOX1, CHI3L1, SELE and CXCL10 were up-regulated, while the expressions of CASQ2, NEXN, RYR2, ACTC1 and PLN were down-regulated in AS samples compared to controls from the training set (Fig. 6A). The ROC curves in the training set showed that those 11 genes had the potential to distinguish AS from controls.
Hub genes had a close relationship with angiogenesis and immune checkpoints
Next, we explored the relationships among the hub genes. Correlation analysis showed that their expressions were highly correlated (Fig. 7A). Among them, MMP9 and CHI3L1 were the most positively correlated (Fig. 7B), and NEXN and HMOX1 were the most negatively correlated (Fig. 7C). Interestingly, we also found a medium to high degree of similarity in their functions (Fig. 7D). Moreover, in consideration of the roles of angiogenesis (Moulton, 2006; Perrotta et al., 2019) and immune checkpoints (Vuong et al., 2022; Alyagoob, Lahmann & Joner, 2020) in AS development, progression and treatment, we calculated angiogenesis and immune checkpoint scores based on the corresponding gene sets using the ssGSEA algorithm. Further analyses showed that the 10 hub genes were significantly correlated with the angiogenesis score (Fig. 8A) and with the immune checkpoint score (Fig. 8B).
Hub genes were regulated by multiple miRNAs and TFs
Next, we investigated the miRNAs and TFs targeting and regulating the expressions of the 10 hub genes. A total of 313 miRNA-hub gene pairs were retrieved to establish the miRNA-hub gene network, which includes 239 miRNAs and 10 hub genes (Fig. 8C). In the network, we found that some hub genes were regulated by common miRNAs; for example, MMP9 and RYR2 were both regulated by mir-9-3p and mir-29b-3p. Meanwhile, 53 TF-hub gene pairs were obtained from the miRNet database, and a TF-hub gene network composed of six hub genes and 47 TFs was constructed (Fig. 8D). Also, we observed that
RT-qPCR validation
Foam cells derived from macrophages may indicate the initial stages of AS (Chistiakov, Bobryshev & Orekhov, 2016; Maguire, Pearce & Xiao, 2019). Thus, in the current study, a foam cell model established from THP-1 monocytes was used, which has been widely used in studying AS and AS-related disease (Yin et al., 2019; Huwait, Al-Saedi & Mirza, 2022; Mehta & Dhawan, 2020). After treatment with PMA, THP-1 cells were differentiated into macrophages, which were defined as the control group. THP-1-derived macrophages were then treated with ox-LDL to form foam cells, which mimic the early stages of AS. We then examined the expressions of HMOX1, CHI3L1 and MMP9 by RT-qPCR. The results showed that the expressions of HMOX1, CHI3L1 and MMP9 were significantly up-regulated in foam cells, consistent with the sequencing results (Fig. 9).
DISCUSSION
Identifying novel biomarkers and potential mechanisms is critical for treatments of AS patients. In the current study, in consideration of the importance of immune cell infiltration in AS pathology, we combined ssGSEA, WGCNA and differential expression analyses to comprehensively get immune cell-related genes involved in AS, and identified 10 genes with diagnosis potential for AS. Firstly, we found that the proportions of 27 ICs were significantly elevated in AS, indicating that those immune cell types may contribute to AS. The roles of different macrophage and T cell subsets in AS have been reviewed by Anton and Goran that they function at different AS stage and affect plaque stability (Gistera & Hansson, 2017). In addition, innate immune response by dendritic cells and neutrophils are critical in AS initiation (Alberts-Grill et al., 2013). Thus, it is great significance to mine immune cell-related genes and investigated their role in AS. By WGCNA and differential expression analyses, we identified 99 DIIC-related DEGs, which were found to be involved in tryptophan metabolism and ECM-related biological processes through GO and KEGG pathway enrichment analyses. It has been reported that alteration of tryptophan metabolism plays an important role in AS. Briefly, inflammation in AS is driven by multiple cytokines, such as IFN-γ, which can up-regulate the expression of IDO. Then, tryptophan acts as a substrate for IDO and is degraded into kynurenine, leading to the progression of AS (Sudar-Milovanovic et al., 2022;Nitz, Lacy & Atzler, 2019;Wang et al., 2015). During this process, multiple innate and adaptive immune cells are involved, such as macrophages and dendritic cells (Nitz, Lacy & Atzler, 2019). Also, tryptophan metabolism is critical for the proliferation and function of T cells (Fallarino et al., 2006;Mezrich et al., 2010;Munn et al., 2005). As for vascular ECM, it is mainly composed of elastin, microfibrils, collagens, proteoglycans and other glycoproteins (Ma et al., 2020). Degradation of elastin into elastokines promotes AS by regulating uptake of ox-LDL (Kawecki et al., 2019), remodulating function of macrophages (Hsu, Tintut & Demer, 2021) and promoting angiogenesis (Heinz, 2020). Collagens have been reported to have an essential role in determining the stability of AS plaque (Xu & Shi, 2014;Johnston, Gaul & Lally, 2021). Proteoglycans and their GAGs are regulators in lipid retention, activation of immune response and proliferation of smooth cells, which contribute to AS progression (Viola et al., 2016). Those reports and our findings suggest that the identified DIIC-related DEGs may regulate AS at least by tryptophan metabolism and ECM. From those 99 DIIC-related DEGs, we selected 10 potential markers with good performance distinguishing AS from control by constructing PPI network and performing ROC analysis. CASQ is the most abundant Ca 2+ -binding protein in skeletal and cardiac muscle sarcoplasmic reticulum. The CASQ2 and CASQ1 genes have been found to be mutated in patients with catecholamine-induced polymorphic ventricular tachycardia (Lodola et al., 2016), which could lead to sudden death (Rajagopalan & Pollanen, 2016). The PLN plays an important role in regulating sarcoplasmic reticulum (SR) function as well as cardiac contractility. Perisic Matic et al. 
(2016) found that PLN expression was sharply decreased in smooth muscle cells treated with IFN-γ and ox-LDL, which mimic the environment of AS, and a polymorphism in PLN was found to be associated with maximum common carotid artery thickness. Through mitochondrial function, calcium homeostasis and excitation-contraction coupling, RYR2 activity in myocytes is associated with electrical and contractile dysfunction in the arrhythmogenic heart of aged humans (Hamilton & Terentyev, 2019). The expression of CXCL10 was observed in different stages of AS lesion development (Mach et al., 1999), and knockout of ApoE and CXCL10 in mice led to significantly smaller lesions, fewer CD4+ T cells and increased regulatory T cells compared to ApoE−/− mice (Heller et al., 2006). As for HMOX1, its expression was significantly elevated in high-fat-diet ApoE−/− mice, and knock-down of HMOX1 in endothelial cells reduced Fe²⁺ overload, ROS and lipid peroxidation, which led to impaired ferroptosis and may attenuate diabetic AS development. As for CHI3L1, it is a risk factor for AS in which its expression is up-regulated, and knockdown of CHI3L1 reduced lipids, macrophages and the expressions of local proinflammatory mediators in plaques (Gong et al., 2014). Higher MMP9 and MMP12 expressions were observed in AS patients (Gong et al., 2014; Marcos-Jubilar et al., 2021), which is consistent with the public sequencing results. Nuciferine exerts its protective role against AS through Calm4/MMP12/AKT signaling to regulate the migration and proliferation of vascular smooth muscle cells. Although those genes have been reported in AS and AS-related diseases, the exact mechanisms by which they regulate AS remain poorly understood.
Finally, RT-qPCR was applied to examine the expression patterns of MMP9, CHI3L1 and HMOX1 using the THP-1 cellular model. We found that their expressions were consistent with what we observed in the sequencing data. However, it should be noted that the in vitro cellular model cannot fully represent the in vivo situation, given its simple cellular environment. It has been reported that cell-cell communications and interactions, such as the B cell-T cell interaction (Ma et al., 2022), cross-talk among endothelial cells, immune cells and vascular smooth muscle cells (Sorokin et al., 2020) and even the structure of the extracellular matrix (Halabi & Kozel, 2020), are important for the development of AS plaques. Thus, in vivo plaque and control samples are still needed to further verify their expressions.
There are some limitations in our study. First, the expressions of the identified biomarkers need to be determined in AS and control tissues. Second, how they interact with immune cells and how they regulate AS should be elucidated in in vitro and in vivo experiments. Last but not least, their potential to act as biomarkers and therapeutic targets in AS treatment needs to be verified in the real clinical world. Thus, in future work, we plan to explore the underlying mechanisms of those genes in the regulation of AS, including but not limited to: (1) examining the expression patterns of those genes in AS patients at different stages by western blotting or qPCR; (2) exploring immune cell infiltration in AS using a mouse AS model and detecting whether those biomarkers colocalize with infiltrated immune cells by immunofluorescence; (3) selecting the 2-3 genes of greatest interest and constructing CRISPR knock-out and/or overexpression models via lentivirus infection to monitor their roles in the formation of AS and inflammation; and (4) examining their effects on angiogenesis by a tube formation assay in vitro.
CONCLUSIONS
For the first time, we identified 10 DIIC-related DEGs, including CASQ2, PLN, RYR2, CHI3L1, NEXN, MMP9, MMP12, HMOX1, CXCL10, and ACTC1, in AS. These 10 genes were significantly correlated with angiogenesis and immune checkpoints. These findings provide future directions for unveiling the molecular mechanisms of AS and also offer novel potential biomarkers and therapeutic targets for AS patients. | 2023-05-03T15:10:14.940Z | 2023-05-01T00:00:00.000 | {
"year": 2023,
"sha1": "42c5ee5eb7ad1deec2054c2404371da162b9548f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7717/peerj.15341",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "85c5945be1384c7c216cfa52de45049660633972",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
244144768 | pes2o/s2orc | v3-fos-license | CRISPR-Cas12a nucleases function with structurally engineered crRNAs: SynThetic trAcrRNA
CRISPR-Cas12a systems are becoming an attractive genome editing tool for cell engineering due to their broader editing capabilities compared to CRISPR-Cas9 counterparts. As opposed to Cas9, the Cas12a endonucleases are characterized by a lack of trans-activating crRNA (tracrRNA), which reduces the complexity of the editing system and simultaneously makes CRISPR RNA (crRNA) engineering a promising approach toward further improving and modulating editing activity of the CRISPR-Cas12a systems. Here, we design and validate sixteen types of structurally engineered Cas12a crRNAs targeting various immunologically relevant loci in-vitro and in-cellulo. We show that all our structural modifications in the loop region, ranging from engineered breaks (STAR-crRNAs) to large gaps (Gap-crRNAs), as well as nucleotide substitutions, enable gene-cutting in the presence of various Cas12a nucleases. Moreover, we observe similar insertion rates of short HDR templates using the engineered crRNAs compared to the wild-type crRNAs, further demonstrating that the introduced modifications in the loop region led to comparable genome editing efficiencies. In conclusion, we show that Cas12a nucleases can broadly utilize structurally engineered crRNAs with breaks or gaps in the otherwise highly-conserved loop region, which could further facilitate a wide range of genome editing applications.
These results provide novel insights into the development of simple design rules for modulating MAD7 editing activity. Next, we examined whether the observed tolerance to the STAR-crRNA designs would extend to other Cas12a family nucleases. We assayed both in-vitro DNA cleavage and in-cellulo INDEL formation using two commercially available variants of AsCas12a (Cas12a-V3 and Cas12a-Ultra, IDT) and commercially available LbCas12a (EnGen LbaCas12a, NEB). We used MAD7 crRNAs to guide both AsCas12a effectors, while for LbCas12a we designed a separate set of its respective crRNAs and STAR-crRNAs. Analysis of in-vitro DNA cleavage showed that MAD7, AsCas12a-V3, and AsCas12a-Ultra had comparable cleavage activity using either wild-type crRNAs or engineered STAR-crRNAs (Split 3), while LbCas12a showed reduced activity compared to MAD7 and both AsCas12a variants (Supplementary Fig. 4a). Interestingly, differences between the nucleases also emerged when assaying the DNMT1 locus in Jurkat cells (Fig. 4c). On the other hand, Lb-Split 4 STAR-crRNA designed for LbCas12a led to a marginal INDEL formation efficiency of 8% at the DNMT1 locus but resulted in adequate editing of 44% at the PDCD1 locus compared to MAD7 with crRNA (Fig. 4c and Supplementary Fig. 4c). Next, we tested two other STAR-crRNAs designed for LbCas12a, Lb-Split 1 and Lb-Split 6, observing editing efficiencies < 10% at both target sites (Supplementary Fig. 4c). This indicates that LbCas12a does not tolerate shorter loops and alternate sequences of MAD7 crRNAs, but it may utilize some of the Split crRNAs in a target- or PAM-dependent manner. These observations are in contrast with the previous study 20, which showed that LbCas12a activity was eliminated altogether when guided by split crRNA. However, our data suggest that LbCas12a is more conservative than AsCas12a in its interaction with crRNA and less tolerant of crRNA modifications. Notably, MAD7 was able to utilize native LbCas12a crRNAs without affecting INDEL formation (Supplementary Fig. 4d). This is consistent with the observed tolerance to Gap 4 STAR-crRNA (Fig. 4a), highlighting the greater tolerance of MAD7 to altered crRNAs compared to LbCas12a. Given the observed differences in Cas12a tolerance to STAR-crRNAs, we next tested the extent to which our STAR system could be used with novel, divergent Cas12a nucleases. To identify more Cas12a family members, we mined public databases following the methodology previously described in Zetsche et al., 2015. We based the search on the AsCas12a and MAD7 amino acid sequences and selected nine uncharacterized proteins that met our technical criteria: the presence of a CRISPR array in the genome of the organism of origin, a predictable crRNA sequence, and a GC content above 40% in the coding sequence. We further examined the evolutionary relationship of the nine putative Cas12a nucleases (hereafter ABW1-9) 30 and the known Cas nucleases used in this study (Fig. 5a) and aligned their amino acid sequences (Fig. 5c). Both the dendrogram and the sequence similarity matrix suggest that the selected proteins come from diverse bacterial strains and share as little as 15% sequence identity. Alignment of the predicted direct-repeat sequences, containing pre- and crRNAs, revealed a remarkably conserved sequence of the stem and loop structure directly preceding the spacer (Fig. 5b). We ran small-scale synthesis of the nine ABW nucleases, which we tested in the in-vitro cleavage assay with the predicted, native pre-crRNAs and the MAD7-optimized crRNA (Fig. 5d).
Six ABWs showed cleaving activity with their predicted crRNAs, while seven nucleases cleaved oligonucleotides amplified from the DNMT1 target site when guided by the MAD7 crRNA. Finally, using the in-cellulo INDEL assay in Jurkat cells, we tested the genome editing capacity of ABW1 at the DNMT1, PDCD1, and TIGIT loci with both the MAD7 wild-type crRNA and the Split 3 STAR-crRNA (Fig. 5e). While ABW1 tolerated the split within the loop, its activity varied in a target- or PAM-dependent manner. The assayed nuclease was less active and led to a lower INDEL formation frequency than MAD7 with both crRNAs (Fig. 5e).
Discussion
In this study, we explored and tested CRISPR-Cas12a-based editing systems. We hypothesized that split constructs, i.e. STAR-crRNA (SynThetic trAcrRNA), may affect editing by altering affinity to target DNA, and consequently other characteristics of the systems, such as PAM recognition, cleavage site, and off-target activity.
Our results show that it is possible to successfully introduce breaks and gaps in the highly-conserved loop region of crRNAs, and therefore to transform type V-A Cas12a crRNA into a functioning two-component tracrRNA-crRNA-like system analogous to the type II and other type V nucleases (e.g. V-B, V-E). Previous attempt to structurally modify the loop region of CRISPR-Cas12a crRNA in a plasmid-based system resulted in complete termination of gene-cutting efficiency in the presence of AsCas12a nuclease 19 Notably, ErCas12a showed comparable cleaving activity with structures analogous to Split 2 and Gap 3, even at the same concentration 22 . Our findings in-vitro and in-cellulo demonstrate that Split 2, Split 3 STAR-crRNA, and various other structural modifications to the crRNA loop region have minimal impact on both the DNA cleavage efficiency and on genome editing via HDR in the presence of various Cas12a nucleases. In line with this, we show that the MAD7 nuclease also tolerates the insertion of a 5' Hairpin structure in addition to the engineered break in the crRNA loop at the position 3, while the addition of a 3' Hairpin in combination with Split 3 STAR-crRNA reduces the nuclease activity. Furthermore, our findings indicate that the tolerance to such structurally modified crRNAs (STAR and Gap) is both Cas12a nuclease specific as well as dependent upon the location of the disruption within the loop structure and the specific nucleotide at the -10 position. It is important to note that we do not observe any changes in the DNA cleavage site, overhang length, or off-target editing activity of the tested Cas12a nucleases. Finally, these findings give insight into the flexibility of Cas12a nucleases and their tolerance towards crRNA spatial modifications. Together, they advance our understanding of the development of simple design rules for modulating activity and open possibilities for further engineering of CRISPR-Cas12a editing systems. In conclusion, the modularity of STAR-crRNAs offers more flexibility than the wild-type crR-NAs, consequently providing a simple engineering approach to dial-up or dial-down the activity. While current autologous cell therapy approaches require high editing efficiencies, reduced on-target activity with eliminated off-target activity would be beneficial for cell line manufacturing, e.g. induced pluripotent stem cell engineering.
In addition, STAR-crRNAs may be advantageous in diagnostic tests development, e.g. DETECTR-based diagnostics, and multiplex editing studies for simultaneous targeting of multiple genome loci. Finally, STAR-crRNAs allow for additional modulation level of editing, as well as reduced cost of crRNA synthesis. While Split 1 STAR-crRNA leads to almost complete termination of MAD7 activity, our findings indicate that nearly entire loop can be removed, except for the ribonucleotide at the position -10, without affecting the nuclease activity. In addition, our data show that all other alterations to the nucleotides in the loop region enable efficient DNA cleavage activity in the presence of Cas12a nucleases and promote efficient gene editing at the immunologically relevant loci in human cells. Crystal structures of Cas12a-crRNA-DNA complexes provide a rationale for the observed activities of split crRNAs used in our study; while Cas12a makes extensive contacts to the crRNA hairpin and DNA complementary sequence, the tetraloop is reported to be solvent-exposed and free of interactions with amino acid residues 31 . Interestingly, the reduced activity of Split 1 may be explained by the reverse Hoogsteen base pairing between U (-10) and A (-18) 31,32 . Evidently, Split 1 STAR-crRNA disrupts the RNA backbone between U (-10) and C (-11), while Split 2 disrupts the backbone between U (-10) and C (-9) and exhibits no loss of activity. This suggests that the positioning of U (-10) adjacent to C (-11) is important for maintaining the reverse Hoogsteen base pair and that this interaction is important for nuclease activity. In contrast, Gao's team (2016) reported that Cas12a K752 contacts the RNA backbone between G (-6) and U (-7) 31 , at the position of the disruption in Split 5, yet, Split 5 STAR-crRNA exhibits no loss of activity.
Although the classification of CRISPR effector proteins remains unclear 33,34 , and assigning newly discovered nucleases in type V-A may be disputable, all Cas nucleases used in this study are classified as class 2, type V, subtype V-A effectors based on the current classification criteria-single effector proteins guided by a single crRNA while lacking defined tracrRNA in the CRISPR array 25,26 . We show that the SynThetic trAcrRNAs are tolerated by four of the five enzymes tested in this study, while MAD7 and AsCas12a-Ultra (IDT) show comparable activity with the unaltered crRNAs and STAR-crRNAs. In conclusion, our data demonstrate that some of the Cas12a nucleases can utilize split constructs, and as such act analogously to either type II or other type V effectors (e.g. V-B, V-E). Consequently, we observed nuclease-specific differences in the crRNA tolerance, which may inform improved classification criteria and engineering strategies going forward.
Nuclease expression and purification. E. coli BL21 star (DE3) competent cells (ThermoFisher Scientific) were transformed with an expression vector encoding the nuclease gene. 2 × YT medium supplemented with kanamycin was inoculated with a single colony and incubated overnight at 37 °C. The culture was diluted in 1-2 L 2 × YT medium to OD 600 = 0.1 and grown at 37 °C to OD 600 = 0.6. At this point, the culture was placed on ice for 15-20 min. Next, IPTG was added in the final concentration of 0.2 mM, and protein expressed overnight (18-20 h) at 18 °C. Cells were harvested by centrifugation and resuspended in lysis buffer (20 mM Tris, 500 mM NaCl, and 10 mM imidazole, pH = 8.0) supplemented with cOmplete™, EDTA-free protease inhibitor cocktail (Roche). After resuspension, Benzonase® nuclease (Sigma Aldrich, ≥ 250 units/µL, 10 µL per 40 mL lysate) and lysozyme (1 mg/mL lysate) were added and the cell suspension was placed on ice for 30 min. Cells were disrupted on an Avestin EmulsiFlex C-5 homogenizer (15,000-20,000 psi), and insoluble cell debris removed by centrifugation (15,000 g, 4 °C, 15 min).
All subsequent chromatography steps were carried out at 10 °C. The cleared lysate was loaded on a 5-mL HisTrap FF column (GE Healthcare). The resin was washed with 10 column volumes of wash buffer (20 mM Tris, 500 mM NaCl, and 20 mM imidazole, pH = 8.0) and the protein eluted with 10 column volumes of elution buffer (20 mM Tris, 500 mM NaCl, and 250 mM imidazole, pH = 8.0). Fractions containing the protein (typically 13.5 mL) were pooled and diluted to 25 mL in dialysis buffer (250 mM KCl, 20 mM HEPES, and 1 mM DTT, and 1 mM EDTA, pH = 8.0). The sample was dialyzed against 1 L of dialysis buffer at 10 °C using a dialysis membrane tubing with a molecular-weight cut-off of 6-8 kDa (Spectra/Por® standard grade regenerated cellulose, 23 mm wide). The dialysis buffer was replaced after 1-2 h and dialysis continued overnight.
The next day, the dialyzed sample was diluted two-fold in 10 mM HEPES (pH = 8.0) and immediately loaded on a 5-mL HiTrap Heparin HP column (GE Healthcare), pre-equilibrated with buffer A (20 mM Hepes, 150 mM KCl, pH = 8.0). Resin was washed with 2 column volumes of buffer A and the protein eluted using a linear gradient from 0 to 50% of buffer B (20 mM Hepes, 2 M KCl, pH = 8.0) over 12 column volumes. Fractions containing the protein were pooled (typically 10-15 mL) and concentrated to 2 mL using a centrifugal filter unit (Amicon® Ultra-15, 30,000 MWCO; centrifugation at 4 °C). A final chromatography step was performed by injecting the sample on a 120-mL Superdex200 gel filtration column (GE Healthcare) with 50 mM sodium phosphate, 300 mM NaCl, 0.1 mM EDTA, pH = 7.5 as separation buffer. Fractions of interest were pooled and concentrated by centrifugal filtration (Amicon® Ultra-15, 30,000 MWCO; centrifugation at 4 °C) to at least 20 mg/mL (concentration determined by measuring absorbance at 280 nm on a NanoDrop™2000, ThermoFisher) with a percent solution extinction coefficient (Abs 0.1%) of the nuclease). Nuclease search. Following the methodology described in Zetsche et al., 2015, PSI-BLAST program 35 was used to identify AsCas12a and MAD7 homologs in the NCBI NR database using AsCas12a protein sequence (WP_021736722.1) and MAD7 (WP_055225123.1) as queries with the E-value cut-off of 0.01 with low-complexity filtering and composition-based statistics turned off. The first selection criteria, namely, < 60% sequence similarity to AsCas12a, < 60% sequence similarity to MAD7, and > 80% query coverage, were applied and the results of those searches combined. The dataset was cross-checked to exclude already studied proteins. Multiple sequence's alignments and pairwise comparisons were constructed using the CLC Main Workbench 7 software (Alignment and Pairwise Comparison with default settings) to exclude proteins of > 90% similarity to already rejected records. The second selection round removed proteins with unknown protein-coding gene or incomplete genomic or chromosomal sequences. Phylogenetic analysis was performed using the Maximum Likelihood Phylogeny (CLC Main Workbench 7.9.1, Neighbor Joining algorithm and Jukes-Cantor Distance measure). DNA sequences coding for selected proteins were collected and analyzed. Genomic data were applied to investigated CRISPR array presence and genomic location of the protein-coding gene using CRISPRCasFinder 36 , CRISPRone 37 , and PILER-CR 38 .
RNPs formulation. Ribonucleoprotein complexes (RNPs) were generated by incubating relevant crRNAs or STARs with nucleases in molar ratio 3:2 crRNA:nuclease for 10 min at room temperature. For electroporation, the RNP complexes were generated by mixing the specific RNA (150 pmol) and MAD7 (100 pmol), or when indicated, other type V nucleases, in nuclease-free water up to 5 μL. To reduce the complexity and preparation time on the day of the assay 39 , all RNPs were prepared one day before electroporation and stored at 4 °C overnight. Immediately before electroporation, RNPs were incubated for 10 min at room temperature.
In vitro cleavage assay. Target DNA was amplified from 10 ng of wild-type genomic DNA from Jurkat cells using the Phusion High-Fidelity PCR Master Mix with HF Buffer (ThermoFisher Scientific). The PCR products were purified with Agencourt AMPure XP beads (Ramcon), using a sample-to-beads ratio of 1:1.8. The DNA was eluted from the beads with nuclease-free water. The RNPs were generated by mixing 1 μL of 12 μM crRNA or STAR with 1 μL of 4 μM nuclease and incubating for 10 min at room temperature. The in vitro cleavage assay was then performed by adding 200 fmol of target DNA in 1× NEBuffer 2.1 (NEB). The reaction was then incubated for 10 min at 37 °C. The sample was treated with 1 μL Proteinase K (ThermoFisher Scientific) for 10 min at room temperature, and the cleavage products were analyzed on a 3% agarose gel stained with SYBR Safe (ThermoFisher Scientific).
Electroporation experiments. A Lonza 4D Nucleofector with Shuttle unit (V4SC-2960 Nucleocuvette Strips) was used for electroporation, following the manufacturer's instructions. Jurkat cells were electroporated using the SF Cell Line Nucleofector X Kit (Lonza), CA-137 program, with 2 × 10⁵ cells in 20 µL SF buffer for each nucleofection reaction. The cell suspension was mixed with RNPs, immediately transferred to the nucleocuvette, and subjected to nucleofection in the 96-well Shuttle device. Cells were immediately re-suspended in the cultivation medium and plated on 96-well, flat-bottom, non-cell-culture-treated plates (Falcon). Cells were harvested 48 h post-transfection for genomic DNA extraction and viability assays. For the homology-directed repair efficiency assay, the HDR template, a 160-nt-long ssDNA (Supplementary Table 2), was collected via pipetting from the HDR plate after the RNP addition and immediately before the electroporation. The electroporation parameters, cell recovery and proliferation were performed in the same way as described above.
Genomic DNA extraction and PCR amplification. Targeted amplicon sequencing. Extracted genomic DNA was quantified using the NanoDrop spectrophotometer (ThermoFisher Scientific). Amplicons were constructed in two PCR steps. In the first PCR, regions of interest (150-400 bp) were amplified from 10-30 ng of genomic DNA with primers containing Illumina forward and reverse adapters on both ends (Supplementary Table 3) using Phusion High-Fidelity PCR Master Mix (ThermoFisher Scientific). Amplification products were purified with Agencourt AMPure XP beads (Ramcon), using the sample to beads ratio of 1:1.8. The DNA was eluted from the beads with nuclease-free water and the size of the purified amplicons analyzed on a 2% agarose E-gel using the E-gel electrophoresis system (Ther-moFisher Scientific). In the second PCR, unique pairs of Illumina-compatible indexes (Nextera XT Index Kit v2) were added to the amplicons using the KAPA HiFi HotStart Ready Mix (Kapa/Roche). The amplified products were purified with Agencourt AMPure XP beads (Ramcon), using the sample to bead ratio of 1:1.8. The DNA was eluted from the beads with 10 mM Tris-HCl pH = 8.5 + 0.1% Tween20. Sizes of the purified DNA fragments were validated on a 2% agarose gel using the E-gel electrophoresis system (ThermoFisher Scientific), quantified using Qubit dsDNA HS Assay Kit (Thermo Fisher) and then pooled in equimolar concentrations. Quality of the amplicon library was validated using Bioanalyzer, High Sensitivity DNA Kit (Agilent) before sequencing. The final library was sequenced on Illumina MiSeq System using the Miseq Reagent Kit v.2 (300 cycles, 2 × 250 bp, paired-end). De-multiplexed FASTQ files were downloaded from BaseSpace (Illumina).
NGS data analysis.
Initial quality assessment of the obtained reads was performed with FastQC 40 . The sequencing data were aligned and analyzed using CRISPResso2 41 , specifically the CRISPRessoBatch command with the parameters --cleavage_offset 1 -w 10 -wc 1 --expand_ambiguous_alignments. Modification rates from the CRISPResso2 output were analyzed in Excel.
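For readers who prefer to tabulate the modification rates programmatically rather than in Excel, the following Python sketch illustrates one way to aggregate per-sample editing frequencies from a CRISPRessoBatch output folder. The folder layout, file name, and column names ("Modified", "Reads_aligned") are assumptions about the CRISPResso2 output and should be checked against the files actually produced by the version used.

from pathlib import Path
import pandas as pd

rows = []
for run_dir in Path("CRISPRessoBatch_output").glob("CRISPResso_on_*"):
    # Assumed per-run quantification table; verify the exact name in your output.
    table = run_dir / "CRISPResso_quantification_of_editing_frequency.txt"
    if not table.exists():
        continue
    df = pd.read_csv(table, sep="\t")
    modified = df["Modified"].sum()       # assumed column name
    aligned = df["Reads_aligned"].sum()   # assumed column name
    rate = 100.0 * modified / aligned if aligned else float("nan")
    rows.append({"sample": run_dir.name.replace("CRISPResso_on_", ""),
                 "modification_rate_percent": rate})

summary = pd.DataFrame(rows).sort_values("sample")
print(summary.to_string(index=False))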
Equipment and settings. Gel images were taken using the iBright FL1000 instrument (ThermoFisher Scientific) with the following settings: the "smart exposure" function was used to set the exposure time and avoid overexposure, resolution 1 × 1, optical zoom 1.5, digital zoom 1×, and focus level 385. Images were exported in reverse color. In Fig. 5d, the contrast was adjusted for better visibility of the bands. Original images are available in the Extended Data Figures.
Data availability
Next-generation sequencing data have been deposited to the NCBI Sequence Read Archive database under accession PRJNA820998. | 2021-11-17T16:26:17.052Z | 2021-11-15T00:00:00.000 | {
"year": 2022,
"sha1": "046b9f4c7c0d40cef8f88603d9f53c569fd57898",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "d5c618e897153e409f3ab216a078707cbc2cfb26",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237308808 | pes2o/s2orc | v3-fos-license | Recent advances in management of COVID-19: A review
The coronavirus disease 2019 (COVID-19) pandemic has caused, and is still causing, significant mortality and economic consequences all over the globe. As of today, there are three U.S. Food and Drug Administration (FDA)-approved vaccines: the Pfizer-BioNTech, Moderna, and Janssen COVID-19 vaccines. In addition, the antiviral drug remdesivir and two combinations of monoclonal antibodies are authorized for emergency use (EUA) in certain patients. Furthermore, baricitinib was approved in Japan (April 23, 2021). Despite the available vaccines and EUAs, pharmacological therapy for the prevention and treatment of COVID-19 is still highly required. There are several ongoing clinical trials investigating the efficacy of clinically available drugs in treating COVID-19. In this study, selected novel pharmacological agents for the possible treatment of COVID-19 are discussed. Points of discussion cover the mechanism of action, supporting evidence for safety and efficacy, and the stage of development reached. Drugs were classified into three classes according to the phase of the viral life cycle they target. Phase I, the early infective phase, relies on supportive care and symptomatic treatment as needed. In phase II, the pulmonary phase, treatment aims at inhibiting viral entry or replication. Drugs used during this phase are famotidine, monoclonal antibodies, nanobodies, ivermectin, remdesivir, camostat mesylate, and other antiviral agents. Finally, in phase III, the hyperinflammatory phase, tocilizumab, dexamethasone, selective serotonin reuptake inhibitors (SSRIs), and melatonin are used. The aim of this study is to summarize current findings and identify gaps in knowledge that can influence future COVID-19 treatment study design.
Introduction
The COVID-19 pandemic first appeared as a case of pneumonia of unknown cause in December 2019 in Wuhan, China. Later, it evolved into a global outbreak and was declared a pandemic by the World Health Organization (WHO) on March 11, 2020. The WHO reported over 94 million confirmed cases of COVID-19, including 2 million deaths, globally as of 2021 [1]. It is caused by a novel virus from the coronavirus (CoV) family. The same family of viruses caused the previous outbreaks of Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS) in 2003-4 and 2012, respectively. The WHO defines coronaviruses as "a large family of viruses that cause illness ranging from the common cold to more severe diseases" [2]. Coronaviruses are single-stranded RNA viruses. They are highly diverse due to their susceptibility to mutation and recombination. They mainly infect humans, mammals, and birds. The SARS-CoV-2 or COVID-19 virus is thought to have originated in bats and then spread to humans, possibly through contaminated meat sold in a meat market in China. Symptoms of COVID-19 may involve multiple systems, including the respiratory, gastrointestinal, musculoskeletal, and neurologic systems. Respiratory symptoms can manifest as dry cough, chest pain, rhinorrhoea and/or nasal congestion, sore throat, and shortness of breath. Gastrointestinal symptoms can present as diarrhea, nausea, vomiting, haemoptysis, and abdominal pain. Finally, patients may experience nonspecific symptoms such as fever, chills, fatigue, muscle ache, loss of taste and/or smell, headaches, and confusion [3].
The coronavirus enters the host cell via a trimeric spike glycoprotein (peplomer), which gives the virus its corona-like appearance. The spike consists of two subunits, S1 and S2. The top of the S1 subunit, termed the receptor-binding domain (RBD), binds to the angiotensin-converting enzyme 2 (ACE2) receptor on the surface of the host cell, while the S2 subunit fuses with the host cell membrane. As the S1 subunit binds to the receptor, a host transmembrane serine protease 2 (TMPRSS2) activates the spike and cleaves ACE2 by acting on the S2 subunit. This cleavage facilitates the fusion of the virus with the cell membrane, as shown in Fig. 1 [4]. Besides the more common direct membrane fusion pathway, a second suggested mechanism for COVID-19 entry is the endocytic pathway, thought to be pH-dependent [5].
The viral RNA of coronaviruses can be detected by polymerase chain reaction (real-time PCR). Since the outbreak of COVID-19, several treatment and prevention methods (i.e., vaccines) have entered various phases of clinical trials. Some have even been granted Emergency Use Authorization (EUA) by the U.S. Food and Drug Administration (FDA). Pharmacological agents can be classified into three classes according to the stage of COVID-19 infection they target. Stage I is the early infection phase, during which upper respiratory tract symptoms dominate. Management during this phase relies on supportive care to assist the immune system or prophylactic therapy, possibly with ivermectin. Stage II is the pulmonary phase, in which the patient develops pneumonia with all its associated symptoms. The aim during this stage is to inhibit viral entry or replication. In this class we mainly focus on famotidine, monoclonal antibodies, nanobodies, camostat mesylate, and antiviral drugs. Stage III is the hyperinflammation phase, the most severe phase, in which the patient develops acute respiratory distress syndrome (ARDS), sepsis, and multi-organ failure. Treatment during this phase aims to suppress the immune response. Drugs like the monoclonal antibody tocilizumab, dexamethasone, repurposed selective serotonin reuptake inhibitors (SSRIs), melatonin, and other immunomodulatory agents are being investigated for halting the cytokine release syndrome. Some patients also develop disseminated intravascular coagulation, against which anticoagulants are given [6].
The FDA issued EUAs for three vaccines: the Pfizer-BioNTech, Moderna, and Janssen COVID-19 vaccines. Recently, however, on April 13, 2021, the FDA and the Centers for Disease Control and Prevention (CDC) recommended withholding the Johnson & Johnson (Janssen) COVID-19 vaccine pending further safety investigation. This decision came after six reported cases of blood clotting, namely cerebral venous sinus thrombosis [7]. In this review, we summarize the findings for selected pharmacological agents against COVID-19 in terms of mechanism of action, efficacy, safety, and stage of development. Our aim is to shed light on promising drugs and identify gaps in knowledge.
Early infection (phase I)
Phase I is identified by upper respiratory symptoms, most commonly cough, malaise, and headaches, with the absence of shortness of breath. Less commonly, patients might also present with sore throat, arthralgia, chills, rhinorrhoea, nausea and vomiting, or loss of taste and/or smell. During this phase, the virus is replicating in the upper respiratory tract, mainly the nasal passages. Patients show no or mild symptoms, with a presentation that is very similar to influenza or the common cold. The goal during this phase is to support the immune system and to provide symptomatic management according to the patient's presentation. Some patients are limited to this phase, while others progress to the more severe stage II or III [8,9].
Symptomatic treatment/supportive care
Symptomatic treatment involves the use of analgesics and antipyretics to relieve symptoms of headache, fever, and myalgia. For cough or dyspnea, self-proning (the patient with respiratory distress is placed on the stomach) provides symptomatic improvement. Education on breathing exercises is also important. For mild cases of COVID-19 infection, general supportive care is provided. This includes adequate hydration (especially when fever is present), rest, repositioning, and ambulation [10]. Table 1 summarizes the symptomatic treatment and supportive care used during the mild phase (early infection).
Pulmonary phase (phase II): entry/fusion inhibition & antiviral agents
In phase II, the virus proceeds to infect the lungs triggering the innate immune response. As a result, patients develop pneumonia with its associated symptoms such as a worsened cough, fever, dyspnea and decreased oxygen levels. It is during this stage that most patients require hospitalization. Management during this phase is focused on preventing viral entry and invasion, in addition to limiting viral replication by antiviral therapy [11][12][13], as indicated below:
Ivermectin
Ivermectin is approved by the FDA as an anti-parasitic drug to treat onchocerciasis (river blindness), malaria, head lice, and scabies [14,15]. Ivermectin belongs to the avermectin class. It has shown antiviral activity against many RNA and DNA viruses [16]. In recent studies, ivermectin has shown in vitro antiviral activity against COVID-19.
The use of 5 µM ivermectin reduced viral particle proliferation (5000-fold reduction in COVID-19 levels) within a 48-hour incubation period. The mechanism of action of ivermectin against COVID-19 is through inhibiting importin (IMP) α and β. IMP α and β are needed for the virus to gain access into the nucleus of the host cell [17]. Ivermectin was also found to antagonize transmembrane receptor CD147 [18].
In the clinical setting, a retrospective cohort study included 280 hospitalized patients infected with COVID-19 in a South Florida hospital. Of these, 173 patients received ivermectin 200 mcg/kg orally plus usual clinical care, while 107 patients received usual clinical care only. Patients treated with ivermectin had a significantly lower mortality rate (15.0% vs 25.2%) compared to conventional care only (p = 0.03) [19]. Furthermore, in a cross-sectional study, 100 mild to moderate COVID-19 patients were treated with a combination of oral doxycycline 100 mg and ivermectin 0.2 mg/kg. Within 6 days, 83.5% tested negative for COVID-19 and had major improvement in symptoms (p = 0.59). Additionally, no side effects occurred and no intensive care admission was needed [20]. A case-control study conducted among healthcare workers in an Indian hospital evaluated ivermectin as a prophylactic agent. Study subjects were healthcare workers who tested positive (cases) or negative (controls) for COVID-19. Participants who took two doses of ivermectin prophylactically (77 in the control group and 38 in the case group) had a 73% reduced risk of infection by COVID-19 [21]. It is not clear whether ivermectin should be used as treatment or prophylaxis, and further studies are needed to establish ivermectin's efficacy and mechanism against COVID-19.
Monoclonal antibodies
Antibodies are an important part of the host immune system and play a role in the eradication of pathogens, including viruses. Monoclonal antibodies are synthetic proteins produced to mimic the natural immune response. As a result, they are very effective, with vast applications. They are used in autoimmune diseases, asthma, oncology, neurology, radioimmunology, and diagnostics [22][23][24]. Nonetheless, the FDA-approved agents for viral infections are limited to Ebola and Respiratory Syncytial Virus (RSV) [25,26]. In comparison to other therapeutic agents, monoclonal antibodies are more specific, as they are designed to target a single protein.
There are many monoclonal antibodies developed or under development for the treatment and/or prophylaxis of COVID-19. The majority target the S-spike protein, limiting viral attachment to the ACE2 receptor and further entry. Currently, the FDA has permitted EUA for two combinations of monoclonal antibodies: REGEN-COV2 (casirivimab with imdevimab), authorized in November 2020, and, more recently in February 2021, the combination of bamlanivimab with etesevimab by Eli Lilly and Company [27,28]. Clinical trials that led to FDA authorization are provided in Table 2. Bamlanivimab monotherapy was initially authorized, but due to the development of resistance, the decision was revoked by the FDA [29]. According to the FDA, monoclonal antibodies are indicated in mild to moderate COVID-19-infected adults or children (12 years or older with a minimum body weight of 40 kg) at high risk (as defined in the FDA fact sheet) of developing severe disease [30,31].
Specifically, an IV infusion of monoclonal antibodies is given to patients who test positive for COVID-19 without critical symptoms but are at risk of developing severe infection. Some of these risk factors include age > 65 years, obesity, immunodeficiency, and others. According to several studies, early treatment with monoclonal antibodies in these patients reduces viral load, hospitalization, and death [32]. It is suggested that monoclonal antibodies exert their antiviral effect by reducing viral replication in the nasopharynx. As hospitalized patients with more severe symptoms experienced no benefit, their use is possibly limited to early therapy. Their ineffectiveness in later, more severe stages could be related to the hyperinflammatory state, which has a greater impact at that point [33].
Nanobodies
Despite advances in bioengineered monoclonal antibodies, as mentioned above, there are still some barriers to their use. Cost, heat sensitivity, and intravenous administration, which requires patient hospitalization, are all disadvantages of monoclonal antibodies. Additionally, in order to achieve effective alveolar concentrations, a high dose must be injected, which is associated with side effects. Their use has also been linked with antibody-dependent enhancement of disease (ADE), which could result in additional side effects [34]. Nanobodies (Nbs) are a new class of recombinant antibodies derived from the heavy-chain antibodies found in sharks and camels [35]. Mammalian antibodies (also known as conventional or traditional antibodies) are heterotetrameric proteins consisting of one pair of heavy chains and another pair of light chains. Interestingly, camelid species, including camels, llamas, and alpacas, have antibodies that are devoid of light chains, with only the two sets of heavy chains. The variable domain of the camelid antibody is called the VHH domain (illustrated in Fig. 2), more commonly known as a nanobody [24]. Nbs are a rapidly growing field of research, with extensive evaluation in therapeutics and diagnostics. They have many advantages over conventional antibodies. To start, their small size (about 15 kDa) allows good tissue penetration. Moreover, they have excellent aqueous solubility and stability, are easily bioengineered, and are suitable for large-scale production assisted by yeast or bacteria. These outstanding biochemical properties potentially permit their administration by inhalation. Inhaled Nbs allow for lower doses, are more patient-friendly, and do not require hospitalization [36,37]. Finally, although obtained from different species, their heavy chain is very similar to that of human antibodies and is therefore of low immunogenicity [38][39][40][41]. Caplacizumab is an FDA-approved Nb for the management of thrombotic thrombocytopenic purpura, supporting the therapeutic potential of this class [42]. Although many Nbs have been bioengineered and successfully tested preclinically, their efficacy in humans is yet to be trialed [43][44][45].
The main mechanism of action of Nbs against COVID-19 is inhibition of the spike protein-ACE2 binding interaction, as shown in Fig. 3. Schoof et al., using yeast surface-display libraries, identified two classes of neutralizing Nbs. Class I Nbs, such as Nb6 and Nb11 (the most potent member of class I), attached to the RBD, while class II Nbs, such as Nb3, attached to a different, unidentified epitope on the spike protein. The latter class had a weaker inhibitory effect against COVID-19. Cryo-electron microscopy (cryo-EM) was used to identify the binding site of the most potent class I Nbs. Both Nb11 and Nb6 were found to bind the up and down conformations of the RBD. Uniquely, the binding of Nb6 to the more stable down state stabilized two nearby RBDs in the down conformation, which likely facilitated the binding of other Nb6 molecules. This behavior was not shown by Nb11, which only bound the RBD. The mechanism of Nb6's binding led Schoof and colleagues to design bivalent and trivalent forms of Nb6 that could possibly keep all RBDs in the down state. Indeed, upon investigating the equilibrium dissociation constant (KD) of bivalent and trivalent Nb6, an improvement in KD of more than 200,000-fold was noted. Furthermore, Nb6 was modified with the aim of enhancing potency. The matured (modified) Nb6 (mNb6) exhibited a 500-fold increase in spike-binding affinity. As mNb6 showed a similar binding mode to Nb6, the engineered trivalent mNb6 was the most potent multivalent Nb in neutralizing COVID-19. The observed neutralizing effect of mNb6 occurred by two mechanisms: blocking the RBD-ACE2 interaction and stabilizing the RBD in the inactive down state (the ACE2 receptor only binds the up state). Table 3 provides a summary of the monovalent and multivalent Nb affinities and neutralizing activities [46].
In another study, a humanized llama VHH library was used to examine the potency of Nbs against COVID-19. Ninety-one high-affinity Nbs hit the spike protein binding site, and 69 of the 91 Nbs had a unique sequence. Upon further investigation of the 69 unique Nbs, 15 S-protein binders were found to block the spike protein-ACE2 receptor interaction, enhancing the neutralization effect against COVID-19 infection [47]. A further study by Chi et al. reported five single-domain Nbs (sdNbs) that act against COVID-19 spikes. These monovalent Nbs had low affinity against pseudotyped COVID-19 particles. A successful attempt to improve the neutralizing activity of these sdNbs was made by fusion with the human IgG Fc domain: Fc-fused sdNbs showed a 10-fold increase in activity compared to conventional sdNbs [48]. Although Fc-fused Nbs are more potent, they are more likely to be associated with ADE.
Koenig et al. screened Nbs produced by immunized llamas and alpacas. Out of 23 potential Nbs, the four most potent in competing for the RBD were VHH U, V, W, and E. VHH E had the highest activity of the four, with an IC50 (half-maximal inhibitory concentration) of 60 nM. A surface plasmon resonance (SPR) assay identified two binding sites on the RBD: one region binds each of VHH U, V, and W, while VHH E binds a separate region. Based on this information, and on data obtained from SPR, X-ray crystallography, and cryo-EM, Koenig et al. engineered two types of Nbs. The first were the multivalent Nbs VHH EE and EEE, built on VHH E as the most potent Nb. However, upon exposure, the virus rapidly developed resistance and was no longer recognized by these Nbs. To overcome or limit resistance, bivalent biparatopic Nbs were developed that targeted two independent regions of the RBD (VHH E + U, VHH E + V, VHH E + W). Notwithstanding, it is speculated that the neutralizing mechanism of bivalent biparatopic Nbs enhances viral fusion, as VHH E, U, and W stabilized the RBD in the up conformation. Since the up conformation is the active conformation for COVID-19, it is believed that this triggers further conformational changes that eventually cause viral-membrane fusion. This observation is of interest, as VHH E, compared to VHH U and W, targets a different binding site, a phenomenon that was not observed in other coronaviruses. The exact mechanism is not clear and necessitates further investigation [37]. Table 3 provides a summary of the different Nbs' mechanisms of action, binding affinities, and IC50 values.
To conclude, the use of Nbs that stabilize the RBD in the more stable down conformation, like the potent trivalent mNb6, might be of greater benefit. This would prevent possible Nb-induced viral fusion. In addition, they are devoid of the ADE induced by Fc-fused sdNbs. Comparing potencies, mNb6 had a lower IC50 (1.6 nM) than either VHH EV (2.9 nM) or VHH VE (4.1 nM).
Famotidine
Famotidine is a histamine-2 (H2) receptor antagonist used in the treatment of peptic ulcer, mild reflux esophagitis, and Zollinger-Ellison syndrome [49]. The potential mechanism of action of famotidine against COVID-19 is being investigated, and several studies support the use of repurposed famotidine in COVID-19 patients.
In a case series, 10 non-hospitalized patients received 80 mg famotidine three times daily for 11-21 days. All patients reported marked improvement in symptoms [50]. In a retrospective cohort study including a total of 1620 inpatients, 84 received a median dose of 136 mg famotidine for a duration of 5-8 days, while 1566 patients were classified as controls (did not receive famotidine). The combined death/intubation rate was 10% (8/84) among patients who received famotidine versus 22% (332/1536) among those who did not, a statistically significant difference (p < 0.01) showing that famotidine administration was associated with an improved outcome in terms of need for intubation or death [51]. However, based on these results alone, it cannot be confirmed that famotidine has a direct effect on COVID-19, because this was an observational study.
One hypothesis suggests that a high dose of famotidine could produce an antiviral effect by inhibiting two COVID-19 proteases, the papain-like protease and the 3-chymotrypsin-like protease [50,54]. Nonetheless, in silico studies did not support this hypothesis. Loffredo et al. suggest that high-dose famotidine is more likely to be involved in limiting the hyperinflammatory phase [55]. In summary, further studies are needed to identify famotidine's mechanism of action, and additional larger, multicenter studies are needed to confirm its effectiveness against COVID-19.
Other drugs
The following table summarizes other drugs that act against COVID-19 during the pulmonary phase (Table 4). Camostat mesylate and baricitinib inhibit viral fusion while the other drugs inhibit viral replication.
Hyperinflammatory phase (phase III)
During this phase, inflammation extends beyond the lungs into a systemic hyperinflammatory syndrome, also known as cytokine storm syndrome. As a result, patients can develop a range of complications, mainly ARDS, sepsis, or even multiorgan failure. It is characterized by an elevation in inflammatory mediators such as IL-2, IL-6, IL-7, TNF-alpha, and C-reactive protein, and a decrease in T-cell count [56].
Tocilizumab
Tocilizumab is a humanized monoclonal antibody that binds to the interleukin-6 (IL-6) receptor. Tocilizumab is approved for rheumatoid arthritis owing to its anti-inflammatory effect. It can bind to both the soluble and the membrane-bound IL-6 receptor, antagonizing the effect of IL-6. IL-6 is an important pro-inflammatory mediator whose production is triggered by tissue injury and infection. Release of IL-6 into the circulation mobilizes B and T cells. Targeting the IL-6 receptor accordingly has a role in limiting the inflammatory and immune response [57,58].
Unlike REGEN-COV2 and etesevimab, which are used in selected mild to moderate COVID-19 outpatients, tocilizumab is being investigated for more severe COVID-19 cases in hospitalized patients. COVID-19 intensive care unit (ICU) patients have high plasma levels of cytokines, known as a cytokine storm. IL-6 in particular was elevated in more severe COVID-19 cases or in those requiring mechanical ventilation [57]. Many studies have investigated the effect of tocilizumab, as shown in Table 5, with conflicting findings. Therefore, larger randomized clinical trials were conducted to clarify tocilizumab's effect.
Tocilizumab was part of the RECOVERY trial, a large randomized clinical trial that included all major hospitals in the United Kingdom. This trial aims to find potential treatments for severely ill, hospitalized COVID-19 patients. Eligible patients had a positive COVID-19 test, hypoxia (defined as oxygen saturation < 92%), and systemic inflammation (C-reactive protein ≥ 75 mg/L). Study participants were randomized to receive standard care only or standard care along with intravenous tocilizumab at a dose of 400-800 mg (according to weight). If the patient's condition did not improve, a second dose of tocilizumab was given 12-24 h after the initial dose. 4116 adults were eligible according to the study criteria. Of those, 596 of 2022 patients (29%) in the tocilizumab arm died within 28 days, compared with 694 of 2094 subjects (33%) who received usual care (p = 0.007). Overall, patients allocated to receive tocilizumab had a 4% reduction in mortality and in the need for invasive mechanical ventilation (p = 0.0005) [59]. Comparable results were seen with EMPACTA (Evaluating Minority Patients with Actemra) in terms of reduced mortality and need for mechanical ventilation. EMPACTA is a phase III, international clinical trial whose aim was to explore whether tocilizumab is safe and effective in hospitalized COVID-19 pneumonia patients not on mechanical ventilation [60]. The REMAP-CAP trial, which recently published its results in the New England Journal of Medicine (NEJM), also reported positive findings with tocilizumab when used in COVID-19 ICU patients who were not receiving organ support [61]. Finally, the COVACTA trial, which involved 62 hospitals and also published its results in the NEJM, showed no major improvement with tocilizumab.
Dexamethasone
One of the treatment approaches that has been widely implicated in the management of the hyperinflammatory phase is dexamethasone. Dexamethasone belongs to the corticosteroid family; specifically, it is a glucocorticoid. Corticosteroids are used in several inflammatory conditions affecting a wide range of systems, including dermatological, ophthalmic, rheumatologic, hematologic, gastroenterological, and others. More importantly, they are commonly used in pulmonary conditions like asthma, chronic obstructive pulmonary disease, and viral pneumonias [64,65]. Dexamethasone is cheap, readily available, and has a long half-life. In comparison to other corticosteroids, dexamethasone is 25 times more potent and has essentially no mineralocorticoid effect [66,67].
Dexamethasone has both anti-inflammatory and immunomodulatory effects. These effects result from a genomic or non-genomic pathway, depending on the dose. At a low dose, dexamethasone has a genomic effect, altering genes that code for proinflammatory cytokines and chemokines. The lipophilic nature of dexamethasone allows it to cross the cell membrane and bind to the glucocorticoid receptor in the cytoplasm. Upon binding, the complex relocates to the cell nucleus, where it binds to glucocorticoid response elements. Glucocorticoid response elements modulate the gene transcription of several inflammatory mediators and cells, such as cytokines, prostaglandins, macrophages, mast cells, and lymphocytes. Additionally, this binding upregulates the anti-inflammatory mediators IL-10, annexin A1, and lipocortin-1. When a higher dose of dexamethasone is used, the anti-inflammatory and immunomodulatory effects result from a non-genomic pathway. Compared to the genomic pathway, it is faster but shorter in duration of action. Instead of binding intracellularly, dexamethasone binds either the membrane-bound glucocorticoid receptor or the cytosolic glucocorticoid receptor. A third mechanism is a nonspecific cell membrane interaction that alters certain signaling pathways [64,[68][69][70]. Furthermore, based on computational molecular modeling, dexamethasone was found to inhibit the COVID-19 Mpro (main protease) [71].
A low dose of dexamethasone is indicated in severe cases of COVID-19, while no benefits were observed in mild to moderate cases. High doses are not recommended, as they are associated with harmful effects [72]. According to the WHO and the National Institutes of Health (NIH), corticosteroids are indicated as standard care for up to 10 days or until discharge in patients with COVID-19 pneumonia requiring respiratory support [66]. These recommendations were largely shaped by the results of the RECOVERY trial. In the RECOVERY trial, hospitalized COVID-19 patients were allocated to receive dexamethasone along with usual care (n = 2104) or usual clinical care only (n = 4321). A dose of 6 mg dexamethasone was administered for up to 10 days or until discharge. Findings demonstrated that dexamethasone use was associated with a lower mortality rate by day 28 in patients on mechanical ventilation (29.3% vs. 41.4%; rate ratio, 0.64; 95% CI, 0.51-0.81) or oxygen therapy (23.3% vs. 26.2%; rate ratio, 0.82; 95% CI, 0.72-0.94). Conversely, no reduction in mortality was noted in patients not on respiratory support (17.8% vs. 14.0%; rate ratio, 1.19; 95% CI, 0.92-1.55) [73].
Timing of initiating therapy is crucial. Early administration can aid viral replication and interfere with the adaptive immune response [64]. Evidence from current studies suggests maximum benefit from corticosteroid therapy when it is initiated in patients with persistent symptoms beyond 7 days [72,74]. Although dexamethasone is mainly indicated in the hyperinflammatory phase, initiation during the pulmonary phase, exclusively in hypoxic patients, has also been advocated [56]. While guidelines were based on the results of the RECOVERY trial, the extent of secondary bacterial infections (seen with previous viral pneumonias) was not assessed. Therefore, careful use is warranted, particularly when dexamethasone is administered with other immunosuppressants.
Selective Serotonin Reuptake inhibitors (SSRI)
2.3.3.1. Sigma-1 receptor (S1R) agonists. The sigma-1 receptor (S1R) is a transmembrane chaperone protein that functions as a receptor for many ligands. It is located in the mitochondria-associated membrane, which is found in many organ tissues but mainly in the central nervous system. Mutations or polymorphisms of S1R can lead to neuronal degeneration, resulting in pathological conditions such as amyotrophic lateral sclerosis, Huntington's disease, Alzheimer's disease, and dementia [75]. S1R agonists used in animal models displayed neuroprotective actions [75][76][77].
Interestingly, S1R is also involved in regulating oxidative stress in the endoplasmic reticulum. Specifically, inositol-requiring enzyme 1α (IRE1), a main stress sensor, promotes the release of inflammatory cytokines upon exposure to lipopolysaccharide (LPS). Unfortunately, IRE1 is difficult to target, as it is involved in other important physiological processes. For this reason, Rosen et al. shifted their focus to S1R, suggesting the involvement of this receptor in IRE1-induced inflammation. The study assessed the effect of modifying the S1R-IRE1 pathway in mice after injection of LPS or peritoneal administration of fecal slurry. Mice with deleted S1R had significantly higher sepsis-induced mortality rates (higher tumor necrosis factor alpha (TNF-α) and IL-6 concentrations, p < 0.05). Conversely, upregulation of S1R or suppression of IRE1 increased survival due to a decrease in the inflammatory response (decrease in IL-8 concentration, p < 0.05). These findings supported further assessment of the anti-inflammatory effect of fluvoxamine. Fluvoxamine is an SSRI but also a potent S1R agonist. Injection of fluvoxamine in septic animal models revealed positive improvement, and the anti-inflammatory effect was reproducible in human cells as well [78]. These findings, although induced by LPS, are certainly promising. Sepsis is a lethal complication and a major cause of mortality in intensive care patients. Despite sepsis being more commonly bacterial, viral sepsis can be a complication of viral infection as well [79,80]. In a study by Chen et al. in Wuhan, China, 4% of 99 cases admitted to the hospital had septic shock [81]. More importantly, TNF-α and IL-6 are two very important pro-inflammatory cytokines involved in the COVID-19-associated cytokine storm. Cytokine storm is a major complication of COVID-19 resulting in multiorgan failure, ARDS, or even death [82].
A double-blind randomized study assigned 152 mild, PCR-confirmed COVID-19 cases to receive either fluvoxamine (day 1 = 50 mg once daily, days 2-3 = 100 mg twice daily, days 4-15 = 100 mg three times per day if tolerated) or placebo for 15 days. The objective was to assess the ability of fluvoxamine to halt disease progression and improve clinical outcomes in symptomatic, non-hospitalized patients. The study was carried out remotely from the patients' homes. None of the fluvoxamine-arm subjects had deterioration in their condition, compared to 8.3% of the placebo arm (p < 0.09). Unfortunately, these findings cannot be considered statistically significant due to the limited sample size and the rather homogeneous study population. It is also important to point out that, since the study was conducted at a distance, there is a higher chance of user bias [83]. Fluvoxamine does not require hospitalization, is taken orally, is readily available, and is relatively cheap in comparison to other agents used against COVID-19. Fluvoxamine is also advantageous compared to other SSRIs in that it does not cause QT-interval prolongation [84].
Lysosomotropic agents.
SSRIs are also being investigated as lysosomotropic agents; specifically, sertraline and fluoxetine have been identified as lysosomotropic agents. Lysosomotropic agents are weak bases (pKa > 6, hydrophobic) that can penetrate endosomes or lysosomes in their unionized form. Once they cross, the acidic pH of endosomes or lysosomes causes protonation. Protonation traps the drug inside (ionized drugs cannot cross the membrane), neutralizing the acidic environment (the increase in pH is represented in green in Fig. 4). The acidic environment is crucial for the fusion of coronaviruses, including COVID-19. Upon viral entry through the endocytic pathway, the decrease in endosomal pH allows the virus to attach to the vacuolar membrane and release its genetic material into the cytosol. This step is vital for viral replication and the completion of its life cycle [85]. The closer the endosome gets to the nucleus, the greater the drop in pH (as shown in Fig. 5), which acts as a signal for the virus to exit the vacuole. Furthermore, the peptides needed for viral fusion are usually activated by endolysosomal proteases that require acidic pH for their function. Neutralizing the pH will thus inhibit this step.
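The extent of this ion trapping can be estimated with the Henderson-Hasselbalch relationship: only the neutral form of a weak base crosses membranes freely, so at equilibrium the total (neutral plus protonated) drug accumulates in the more acidic compartment. The short Python calculation below is purely illustrative; the pKa and pH values are rounded approximations rather than measured values from the cited studies.

def total_over_neutral(pka, ph):
    """For a monobasic weak base, total/neutral concentration = 1 + 10**(pKa - pH)."""
    return 1.0 + 10.0 ** (pka - ph)

def trapping_ratio(pka, ph_lysosome, ph_cytosol):
    """Equilibrium ratio of total drug in the lysosome vs. the cytosol."""
    return total_over_neutral(pka, ph_lysosome) / total_over_neutral(pka, ph_cytosol)

# Illustrative values: a weak base with pKa ~9.5 (roughly the range of SSRIs),
# lysosomal pH ~5.0, cytosolic pH ~7.4.
ratio = trapping_ratio(pka=9.5, ph_lysosome=5.0, ph_cytosol=7.4)
print(f"Predicted lysosome:cytosol accumulation ~{ratio:.0f}-fold")

With these illustrative numbers, the model predicts accumulation on the order of a few hundred-fold, consistent with the qualitative picture of drug trapping and pH neutralization described above.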
Other agents besides SSRIs have also been investigated; examples are chloroquine, hydroxychloroquine, and sodium bicarbonate [86][87][88]. Chloroquine and hydroxychloroquine, although studied extensively as lysosomotropic agents, are largely limited by their toxicity, side-effect profile, interpatient variation, and long half-life (30-60 days). Fluoxetine, on the other hand, has a better side-effect profile, is less toxic, and has a notably faster elimination half-life (1-3 days) [88]. However, Schloer et al. stated that complete inhibition of viral entry will only be achieved by inhibiting both pathways: the endocytic pathway as well as direct fusion with the host plasma membrane [89].
Functional inhibitors of Acid sphingomyelinase (FIASMA).
Functional inhibitors of acid sphingomyelinase (FIASMA) are a pharmacological class with a wide range of therapeutic applications. Members of this class share an inhibitory effect on acid sphingomyelinase (ASM), a small molecular size, and general tolerability. The antidepressant fluoxetine belongs to this class.
In addition to the lysosomotropic effect of SSRIs (mentioned above), Schloer et al. also reported inhibition of ASM, achieved with fluoxetine at higher concentrations. ASM is a membrane-bound lysosomal enzyme, as indicated in Fig. 6. During cellular stress, ASM relocates to the cell membrane, where it catalyzes the cleavage of sphingomyelin into lipophilic ceramide and a hydrophilic phosphorylcholine head group. Ceramide is involved in cell signaling that can lead to apoptosis [90]. Once fluoxetine crosses the lysosomal membrane, it disrupts the membrane binding of ASM and releases it into the lysosomal lumen. Detachment renders the enzyme inactive and further subjects it to proteolytic enzymes (Fig. 6) [89]. In vitro studies showed that FIASMAs can prevent infection with influenza and Ebola viruses. In a study by Carpinteiro et al., the virus infected cells by activating ASM; therefore, inactivation of ASM could limit viral infection [91].
Fluoxetine also prevents the efflux of cholesterol from endosomes and lysosomes. As a result, less cholesterol is available for the plasma membrane and other cell functions. Cholesterol is particularly important for enveloped viruses, as they form their envelopes from the host membrane. This mechanism has been demonstrated for influenza virus, producing viral envelopes with decreased cholesterol content (crucial for viral survival) and reduced viral release. Indeed, 10 µM fluoxetine, when used in a cell culture model of COVID-19, significantly reduced viral load, and the inhibitory effect was dose-related [89]. In vitro and observational studies indicated that fluoxetine prevents COVID-19 infection at usual psychiatric doses [92]. The risk of mortality and intubation was reduced dramatically in COVID-19 patients receiving regular antidepressant doses of fluoxetine (20 mg), as documented in a retrospective clinical study [93].
To summarize, SSRIs can inhibit COVID-19 through several mechanisms, as shown in Table 6: S1R modulation, neutralization of the endolysosomal pH, and ASM inhibition (FIASMA). They have a good safety profile, are readily available, can be taken orally, and are cost-effective. This certainly makes them an attractive class for repurposing during this pandemic, when no definitive treatments are yet available. However, their exact place in therapy is not yet clear, and further clinical trials need to be conducted.
Melatonin
There are several studies reporting a possible beneficial effect of melatonin, especially in the elderly [94]. Melatonin biosynthesis was long thought to be restricted to the pineal gland, occurring mainly at night. Nevertheless, increasing data indicate its release from the mitochondria, meaning that most cells, including macrophages, synthesize melatonin.
Melatonin possesses anti-inflammatory, antioxidant, and immunomodulatory effects and preserves mitochondrial function during conditions of oxidative stress (see Table 7) [95]. Impressively, Veltri et al. reported that the liver, heart, and brain have the highest mitochondrial density, meaning they would be most protected by melatonin during sepsis, which is a major cause of morbidity in COVID-19 patients [96,97].
Suppression of the hyperinflammatory state would ultimately also improve lung function, especially in ICU patients, who suffer additional pulmonary stress and inflammation due to mechanical ventilation [98]. However, melatonin has no documented direct antiviral activity and is therefore suggested as adjuvant therapy [94,99].
It has been noticed that viruses can inhibit melatonin release both from the pineal gland and from the mitochondria. Exogenous melatonin administration in several infections exhibited a protective effect and limited the intensity of the infection [100]. It is suspected that COVID-19 similarly inhibits melatonin synthesis, thus reducing melatonin plasma levels.
Administration of melatonin in COVID-19 patients would therefore reduce the cytokine storm and the generation of free radicals, consequently not only limiting alveolar damage but also protecting other vital organs. Several preclinical studies have demonstrated the positive effect of melatonin in reversing organ damage and increasing survival in septic shock [54,101]. Melatonin in several models successfully managed sepsis and restored vital organ function [102].
The effect of melatonin is even more advantageous in the geriatric population. Upon aging, the body's functions decline and less melatonin is released, resulting in more severe cases. Both age and chronic conditions worsen prognosis and are associated with decreased melatonin levels [100,103]. This was evidenced by a better response in aged rats, as reported by Escames et al. [102]. In addition, supplemental melatonin administered to rodents delayed both aging and its associated chronic conditions [104]. Various clinical trials also support its effectiveness. In one study, a dose of 60 mg/day parenteral melatonin was administered to ICU COVID-19 patients. Melatonin reduced the severity of sepsis, improved discharge by 40%, and abolished mortality [105]. These promising findings encouraged many researchers to conduct further clinical trials in hospitalized ICU COVID-19-infected patients [106]. As of 2021, there are two ongoing clinical trials, both approved by the Spanish Agency of Medicines. The trial by Escames et al. (EudraCT, 2020-001808-42) aims to find an effective dose of melatonin against COVID-19 [105]. The second trial (EudraCT, 2020-001530-35) tests 2 mg of Circadin® (melatonin) as a prophylactic agent in high-risk individuals [107].
Another important role of melatonin, especially in the elderly, is its effect on the circadian rhythm [95,97]. Defective sleep patterns largely weaken the immune system and increase susceptibility to infection, which is further amplified by stress and lockdown [108]. This indicates that melatonin will not only assist in preventing infection but may also help regulate defective sleep patterns. Melatonin is safe even at high doses, readily available, and can be taken orally. An oral dose of 50-100 mg half an hour before bedtime has been suggested for the geriatric population [109]. However, firm guidelines on the use of melatonin in COVID-19 patients are lacking; therefore, further investigations are needed to draw firm conclusions.
Fig. 6. Functional inhibition of acid sphingomyelinase (FIASMA) by fluoxetine. ASM is a membrane-bound enzyme; fluoxetine displaces ASM, rendering it inactive, and displacement from the membrane also subjects it to proteolysis by proteolytic enzymes.
Conclusion
Despite the large number of clinical trials and the available vaccines, a cure for COVID-19 is still lacking. Treatment of patients with mild COVID-19 should be limited to supportive care and symptomatic treatment according to presentation. Selected monoclonal antibody combinations (casirivimab with imdevimab, or bamlanivimab with etesevimab) are FDA-authorized for emergency use in mild to moderate, non-hospitalized COVID-19 patients at risk of clinical deterioration. Although inhaled neutralizing Nbs are potentially superior to monoclonal antibodies, they still lack clinical evidence of efficacy. Ivermectin use, as per WHO recommendations, should be limited to clinical trials. For hospitalized COVID-19 patients who meet the WHO severity criteria, systemic corticosteroids or a combination of corticosteroids with an IL-6 blocker (tocilizumab or sarilumab) is strongly recommended. While the WHO only suggests the use of remdesivir and baricitinib within clinical trials, the NIH recommends the use of either remdesivir or baricitinib in combination with dexamethasone in hospitalized patients who require oxygen therapy. Other agents like camostat mesylate and plitidepsin seem promising but still await the results of phase III clinical trials. Preliminary studies on repurposed melatonin and SSRIs suggest a positive effect against COVID-19 in both outpatient and inpatient settings. Therefore, we suggest further investigations and larger clinical trials to determine their efficacy and place in therapy.
Although the fate of this pandemic is unpredictable as the virus continues to mutate, the history of previous coronavirus strains suggests that COVID-19 might ultimately behave similarly to influenza. Thus, even after the end of this pandemic, effective treatment will probably always be needed.
CRediT authorship contribution statement
Soraya Mouffak: Conceptualization, collection of data, and original draft preparation. Qamar Shubbar: Conceptualization, collection of data, and original draft preparation. Ekram Saleh: Manuscript reviewing and editing. Raafat El-Awady: Conceptualization, supervision, manuscript reviewing and editing. | 2021-08-27T13:12:54.424Z | 2021-08-27T00:00:00.000 | {
"year": 2021,
"sha1": "0deb4ffe95a736765f81ca62c45ea51d68839659",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.biopha.2021.112107",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5830e89a2061895e5752d00e308a35a871748e21",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
262546384 | pes2o/s2orc | v3-fos-license | A multi-omics study to investigate the progression of the Correa pathway in gastric mucosa in the context of cirrhosis
Background Patients with liver cirrhosis (LC) are prone to gastric mucosa damage. We investigated the alterations of the gastric mucosa in LC patients and their possible mechanisms through multi-omics. Results We observed significant gastric mucosal microbial dysbiosis in LC subjects. Gastric mucosal microbiomes of LC patients contained a higher relative abundance of Streptococcus, Neisseria, Prevotella, Veillonella, and Porphyromonas, as well as a decreased abundance of Helicobacter and Achromobacter, compared with control subjects. The LC patients had higher levels of bile acids (BAs) and long-chain acylcarnitines (long-chain ACs) in serum. The gastric mucosal microbiomes were associated with serum levels of BAs and long-chain ACs. Transcriptome analyses of gastric mucosa revealed an upregulation of endothelial cell specific molecule 1, serpin family E member 1, mucin 2, caudal type homeobox 2, retinol binding protein 2, and defensin alpha 5 in the LC group. In addition, the bile secretion signaling pathway was significantly upregulated in the LC group. Conclusions The alterations in the gastric mucosal microbiome and transcriptome of LC patients were identified. The impaired energy metabolism in gastric mucosal cells and bile acids might aggravate the inflammation of the gastric mucosa and even exacerbate the Correa cascade process. The gastric mucosal cells might reduce bile acid toxicity by bile acid efflux and detoxification. Trial registration: ChiCTR2100051070. Supplementary Information The online version contains supplementary material available at 10.1186/s13099-023-00571-y.
ascites, and saliva samples, pointing toward a global mucosal immune impairment in patients with liver cirrhosis [4]. Among published studies, some results indicated that patients with liver cirrhosis and portal hypertension had mucosal abnormalities. These include mucosal inflammatory-like abnormalities (edema, erythema, granularity, and friability), erosive gastritis, gastric ulcer, and intestinal metaplasia [5][6][7]. Enrichment of certain pathogenic bacteria in the stomach can promote further progression of gastric inflammation or gastric cancer [8]. However, the characterization of the gastric microbiota in cirrhosis and its relationship with gastric mucosal abnormalities is unclear.
In addition, patients with cirrhosis also have alterations in serum metabolites [9], which could act together with the mucosal microbiota to provoke lesions. Technological advances in liquid chromatography coupled with high-resolution mass spectrometry are beginning to revolutionize our understanding of the causes of liver cirrhosis. Recent advances indicate that a general metabolic alteration is a hallmark of the intricate road to decompensated cirrhosis [10]. This metabolic alteration is characterized by decreased β-oxidation of fatty acids in the mitochondria and a concomitant increase in extramitochondrial glucose utilization via glycolysis [10,11]. Nevertheless, the interaction between the gastric flora and blood metabolism in patients with liver cirrhosis is still unknown. It is also unclear whether gastric mucosal dysbiosis in cirrhosis patients has any underlying relationship with serum metabolites.
We hypothesized that, as in the gut, there are significant alterations in the gastric mucosal microbiota in cirrhosis, which may exacerbate mucosal abnormalities in the stomach. We also postulated that perturbations of the gastric mucosal microbiota have significant correlations with serum metabolites in patients with liver cirrhosis and pose significant effects on the development of the disease. To answer these questions, we performed 16S rRNA high-throughput sequencing to characterize the gastric mucosal microbial communities and liquid chromatography-mass spectrometry (LC-MS) to assess serum metabolism in the same group of patients with liver cirrhosis.
Subjects and sample collection
We performed an observational study of 64 patients with liver cirrhosis and 59 non-liver-cirrhosis patients. Patients were recruited at the Qilu Hospital of Shandong University from September 2021 to April 2022. According to international guidelines, LC is diagnosed based on clinical symptoms, physical signs, radiologic examination, laboratory tests, medical history, and cirrhosis-associated complications in chronic liver disease [12]. Participants aged above 18 years who signed informed consent were included in the study. The exclusion criteria were as follows: (1) cardiovascular disease, diabetes, inflammatory bowel disease, or mental disease; (2) individuals who received proton pump inhibitors, antibiotics, hormones, immunosuppressants, or chemotherapy drugs within one month of enrollment; (3) patients with a positive rapid urease test. Detailed demographic and clinical data, the etiology of cirrhosis, and model for end-stage liver disease scores were collected for all patients at the time of inclusion (Tables 1, 2). The study was registered in the Chinese Clinical Trial Registry (http://www.chictr.org.cn/index.aspx, ChiCTR2100051070). The study was approved by the Ethics Committee of Qilu Hospital of Shandong University and complied with the Declaration of Helsinki.
We performed high-definition gastroscopy (Pentax EG29-i10, Pentax, Tokyo, Japan) for all enrolled participants. The mucosa from the gastric antrum to the body of the stomach was examined by gastroscopy, and a rapid urease test was carried out. Two pieces of gastric tissue with a negative rapid urease test were collected using 2.2 mm sterile biopsy forceps and were stored immediately at − 80 °C for the subsequent procedures. One piece was used for 16S rRNA sequencing, and the other was placed in RNAlater according to the manufacturer's instructions and stored at − 80 ℃ for subsequent transcriptomics analysis. All examinations were performed by two experienced gastroenterologists with at least five years' experience (YYL, YJG).
The blood samples were drawn by experienced nurses after an 8-h fast on the morning of the second day after admission and were collected in 10 mL EDTA tubes. Samples were centrifuged at 13,000 g for 5 min at 4 ℃ within one hour after collection, and the plasma was stored in a − 80 °C freezer. Standard laboratory parameters were evaluated by the central laboratory of the Qilu Hospital of Shandong University (Additional file 1: Table S1). Blood samples and gastric mucosa were transported on dry ice to the laboratory, where they were thawed at the time of analysis. The flow chart of the study population is shown in Additional file 2: Figure S1A.
Microbiome analysis
We performed microbiota analysis using alpha and beta diversity analyses. Samples with < 1% Helicobacter pylori relative abundance were grouped as H. pylori-negative, while samples with > 1% H. pylori relative abundance were grouped as H. pylori-positive [13,14]. Differentially abundant bacterial taxa were identified by the linear discriminant analysis (LDA) effect size (LEfSe) method.
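For illustration, the alpha-diversity metrics reported later (Sobs, Shannon, Chao1) can be computed directly from a per-sample count vector. The Python sketch below uses the standard Shannon formula and the classic Chao1 estimator on a hypothetical OTU/ASV count vector; it is not the dedicated microbiome pipeline actually used in the study.

import math

def shannon(counts):
    """Shannon index H = -sum(p_i * ln(p_i)) over taxa with non-zero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def chao1(counts):
    """Classic Chao1 richness estimate: S_obs + F1^2 / (2 * F2)."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)   # singletons
    f2 = sum(1 for c in counts if c == 2)   # doubletons
    return s_obs + (f1 * f1) / (2.0 * f2) if f2 > 0 else s_obs + f1 * (f1 - 1) / 2.0

# Hypothetical OTU/ASV count vector for one mucosal sample
sample_counts = [120, 55, 30, 8, 3, 2, 1, 1, 1]
print(f"Sobs = {sum(1 for c in sample_counts if c > 0)}")
print(f"Shannon = {shannon(sample_counts):.3f}")
print(f"Chao1 = {chao1(sample_counts):.1f}")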
We used microbial network analysis to identify clusters of microbial taxa that were highly correlated (correlation coefficients < -0.5 or > 0.5, q < 0.05).
Metabolome analysis
Unsupervised principal component analysis and partial least squares discriminant analysis (PLS-DA) were used to assess the global metabolic alterations between groups. The PLS-DA model was used with the first principal component of variable importance in projection values (VIP > 1.0), combined with Student's t-test (P < 0.001), to determine the significantly different metabolites between the LC group and the control group. The differentially accumulated metabolites were mapped to the Kyoto Encyclopedia of Genes and Genomes (KEGG) database for descriptive annotation.
The Spearman method was used to calculate the correlation coefficients for the integration of the metabolome and microbiome data.
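As a sketch of this integration step, the Python example below computes pairwise Spearman correlations between genus abundances and metabolite intensities, applies a Benjamini-Hochberg correction, and keeps pairs passing the |r| > 0.7 and p < 0.05 thresholds reported in the Results. The small random data frames are placeholders, and the software actually used in the study may differ.

import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_samples = 30
genera = pd.DataFrame(rng.random((n_samples, 4)),
                      columns=["Streptococcus", "Prevotella", "Veillonella", "Helicobacter"])
metabolites = pd.DataFrame(rng.random((n_samples, 3)),
                           columns=["taurodeoxycholic_acid", "TCDCA_3_sulfate", "cis5_tetradecenoylcarnitine"])

records = []
for g in genera.columns:
    for m in metabolites.columns:
        rho, p = spearmanr(genera[g], metabolites[m])
        records.append({"genus": g, "metabolite": m, "rho": rho, "p": p})

results = pd.DataFrame(records)
results["q"] = multipletests(results["p"], method="fdr_bh")[1]  # BH-adjusted p-values
significant = results[(results["rho"].abs() > 0.7) & (results["p"] < 0.05)]
print(significant)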
Transcriptome analysis
We identified genes with a false discovery rate (FDR) < 0.05 and |log2FC| > 1 in a comparison as significantly differentially expressed genes (DEGs). DEGs were considered significantly enriched in a KEGG pathway at q ≤ 0.05 compared with the whole-transcriptome background.
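The DEG criterion above amounts to a two-condition cut on a differential-expression table. The Python sketch below applies it to a small hypothetical table, such as one exported from a tool like DESeq2 or edgeR; the gene symbols correspond to genes named in the Results, but the log2FC and FDR values are invented for illustration.

import pandas as pd

deg_table = pd.DataFrame({
    "gene":   ["ESM1", "SERPINE1", "MUC2", "CDX2", "RBP2", "DEFA5", "GENE_X"],
    "log2FC": [2.3, 1.8, 3.1, 2.0, 2.7, 4.2, 0.4],
    "FDR":    [0.001, 0.004, 0.0002, 0.01, 0.03, 1e-5, 0.2],
})

# Keep genes with FDR < 0.05 and |log2 fold change| > 1
degs = deg_table[(deg_table["FDR"] < 0.05) & (deg_table["log2FC"].abs() > 1)]
up = degs[degs["log2FC"] > 0]
down = degs[degs["log2FC"] < 0]
print(f"{len(up)} upregulated, {len(down)} downregulated DEGs")
print(degs)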
Statistical analysis
Statistical analysis was performed using the Mann-Whitney U test and one-way ANOVA when appropriate. Statistical significance was taken as P < 0.05. Data were analyzed using SPSS software version 25.0. All authors had access to the study data and reviewed and approved the final manuscript.
Increased alpha-diversity and altered overall microbial composition in LC
A total of 69 subjects, including 30 control subjects (S-C) and 39 patients with cirrhosis (S-LC), with similar demographics were included in this analysis, and their information is shown in Tables 1 and 2. We compared the alpha diversity of the gastric microbiota between the S-LC and S-C groups; the Sobs (P < 0.05), Shannon (P < 0.001), Ace (P < 0.05), and Chao1 (P < 0.05) values were significantly higher in S-LC (Additional file 1: Table S2, Fig. 1A). Meanwhile, beta diversity analysis showed separate clusters for S-LC and S-C (P = 0.001, Fig. 1B). The gastric microbiota was dominated by eight phyla: Proteobacteria, Campilobacterota, Firmicutes, Bacteroidetes, Actinobacteria, unclassified_k__norank_d__Bacteria, Fusobacteria, and Cyanobacteria (Fig. 1C), although the two groups presented a different order of relative abundance at the phylum level. The gastric microbiota in liver cirrhosis had an over-representation of Firmicutes, Bacteroidetes, Actinobacteria, and Fusobacteriota (P < 0.001; Additional file 3: Figure S2). At the genus level, several genera, including Streptococcus, Prevotella, Neisseria, Fusobacterium, Haemophilus, Veillonella, Porphyromonas, Actinomyces, Gemella, Alloprevotella, Rothia, Granulicatella, and Peptostreptococcus, significantly increased in relative abundance in S-LC compared with S-C (Fig. 1D), whereas Helicobacter and Achromobacter decreased in LC (Fig. 1D).
Bacteria differentially abundant in LC versus controls
We performed linear discriminant analysis effect size at the genus level to further identify gastric-specific species signatures. Seven bacterial taxa showed distinct relative abundances between the two groups: increased abundances of Streptococcus, Neisseria, Prevotella, Veillonella, and Porphyromonas, and decreased abundances of Helicobacter and Achromobacter, were observed in S-LC (LDA score > 4, P < 0.05; Fig. 1E). Furthermore, we found that H. pylori infection was significantly lower in S-LC than in S-C (Fig. 1F).
To understand the potential interplay among differentially abundant bacteria in S-LC and controls, we performed network topology analysis (Additional file 4: Figure S3) at the genus level. In the S-C group, co-occurrence interactions were observed, reflecting the contribution of synergistic interkingdom interactions to gastric microbiota homeostasis (Additional file 4: Figure S3A). Far fewer co-occurrence interactions were observed in S-LC (Additional file 4: Figure S3B). In addition, H. pylori had a predominantly co-exclusive association with other gastric microbes in S-C, whereas these correlations were relatively infrequent in S-LC. Collectively, the above microbial analysis indicated a state of dysbiosis in the mucosal microbiome of LC patients.
Blood metabolism changes with LC
We recruited 22 cirrhotic patients (age 53.04 ± 11.08 years, 15 men) and 44 control patients (age 48.32 ± 14.37 years, 25 men) who agreed to give blood serum (Tables 1, 2). We used liquid chromatography-mass spectrometry to analyze the serum samples, and abundance profiles were obtained for 1540 annotated serum metabolites. It was found that 492 of the 1540 metabolites had significantly different abundances (Fig. 2A). We subsequently performed partial least squares discriminant analysis, and the results revealed visual separation between the groups without overfitting (Fig. 2B). In addition, the PLS-DA VIP table (Fig. 2C, Additional file 1: Table S3) showed the top 30 metabolites with VIP > 1.0 and P < 0.001. Of these, 9 were increased in the control group and 21 were increased in the LC group. Notably, the VIP values of taurochenodeoxycholate-3-sulfate (VIP = 3.72), taurodeoxycholic acid (VIP = 2.93), and cis-5-tetradecenoylcarnitine (VIP = 2.89) were greater than 2, indicating their significant contribution to the disease. Furthermore, we carried out an enrichment analysis of all differential metabolites using KEGG. The results indicated that the metabolites altered in LC were mainly associated with sphingolipid metabolism, glycerophospholipid metabolism, and cysteine and methionine metabolism (Additional file 5: Figure S4).
Association of microbes and metabolites with LC
To further determine the relationships between the gastric microbiota and metabolic changes, we subsequently performed Spearman's correlation analysis of the differential serum metabolites and the gastric microbiota. Interestingly, a large number of gastric taxa were strongly correlated with the altered metabolites (Fig. 3A). A correlation was considered statistically significant at |r| > 0.7 and p < 0.05 in this study, and the results were visualized as heatmaps.
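A hypothetical sketch of this correlation screen, using the |r| > 0.7 and p < 0.05 criteria stated above (the DataFrame layout and function name are assumptions):

```python
# Pairwise Spearman correlations between differential serum metabolites and gastric taxa,
# keeping only associations that pass the thresholds reported in the text.
import pandas as pd
from scipy.stats import spearmanr

def correlate_metabolites_taxa(metab: pd.DataFrame, taxa: pd.DataFrame,
                               r_cut=0.7, p_cut=0.05) -> pd.DataFrame:
    """metab and taxa are sample-aligned tables (rows = the same subjects)."""
    hits = []
    for m in metab.columns:
        for t in taxa.columns:
            r, p = spearmanr(metab[m], taxa[t])
            if abs(r) > r_cut and p < p_cut:
                hits.append({"metabolite": m, "taxon": t, "rho": r, "p": p})
    return pd.DataFrame(hits)
```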
Gastric mucosal transcriptome changes in LC
To obtain a comprehensive view of the gastric mucosa as influenced by microbial colonization and metabolic alterations, we further investigated the gastric mucosal transcriptome in two groups comprising 10 control and 10 LC patients. It was found that 181 and 124 genes were differentially down- and upregulated, respectively. Of these, endothelial cell specific molecule 1, serpin family E member 1, mucin 2, caudal type homeobox 2, and retinol binding protein 2 were significantly upregulated (Fig. 4A, B) and have been associated with intestinal metaplasia and gastric carcinogenesis [15-18]. We also found that defensin alpha 5 (DEFA5), a gene encoding an antimicrobial peptide, was significantly upregulated (P < 0.001). We then mapped all DEGs to KEGG pathways, and the top 11 specific pathways are represented in a bubble chart (Fig. 4C). Furthermore, genes associated with the most significantly enriched pathways (q < 0.05) are shown in Additional file 6: Figure S5. Among these 11 pathways, "neuroactive ligand-receptor interaction" was the most represented. Notably, the bile secretion signaling pathway was significantly upregulated in the LC group; within this pathway, solute carrier family 51 subunit alpha (SLC51A), solute carrier family 51 subunit beta (SLC51B), and cytochrome P450 3A4 (CYP3A4) were upregulated.
Discussion
The major conclusions derived from the current studies are that altered gastric flora and elevated bile acids might aggravate injury to the gastric mucosa and even exacerbate the Correa's cascade process, and that gastric mucosal cells might reduce bile acid toxicity through bile acid efflux and detoxification. We base these conclusions on three lines of evidence. First, we demonstrated that opportunistic pathogenic bacteria colonized the stomach mucosa of patients with cirrhosis. Next, we provided metabolic evidence that gastric mucosal cells had impaired energy metabolism. Finally, transcriptome analysis showed that the bile secretion pathway, including genes involved in bile acid efflux and detoxification, was upregulated in gastric mucosal cells.
The microbiota alterations in the skin, intestinal mucosa, ascites fluid, serum, and the oral cavity have been studied before, but the gastric mucosa has not. Our data demonstrate that higher bacterial diversity and increased relative abundance of multiple bacterial genera characterize S-LC microbial dysbiosis. In our study, only Helicobacter pylori (Hp) was detected within the genus Helicobacter; therefore, we refer to Helicobacter as Hp in the following. There was a greater relative abundance of Streptococcus sp., Prevotella_melaninogenica, Neisseria spp., and Fusobacterium_periodonticum, together with lower Hp. These significantly increased bacteria are pathogenic oral bacteria [19-21] that have the potential to elicit an inflammatory response in epithelial cells.
Fig. 2 The liver cirrhosis group is associated with altered serum metabolites. A Differential metabolites shown in a volcano diagram. B PLS-DA of the metabolites across the two groups. C VIP scores with the corresponding expression heatmap; the metabolite heatmap is on the left and the VIP bar graph on the right. The bar length indicates the contribution of the metabolite to the difference between the two groups (a longer bar indicates a larger difference). The bar color indicates the P value of the metabolite between the two groups, *p < 0.05, **p < 0.01, and ***p < 0.001
Interestingly, emerging findings suggest that specific oral and gastric microbiota, correlated with inflammation, play a substantial role in the development of early-stage gastric adenocarcinoma [13]. Furthermore, as the disease progresses to more severe stages, such as atrophic gastritis, intestinal metaplasia, and gastric adenocarcinoma, the dominance of Hp begins to be displaced by other bacteria, including Streptococcus and Prevotella [13, 22]. Taken together, we hypothesize that bacteria in the stomach of patients with cirrhosis may originate from the oral cavity and may induce gastric mucosal abnormalities similar to the Correa's cascade process.
Previous studies have demonstrated that BAs modulate intestinal immunity, inflammation, and tumorigenesis [23]. We found that serum BAs were predominantly conjugated and that primary BAs were elevated. BAs can alter membrane lipid composition, and increased BA concentrations can solubilize membranes and dissociate integral membrane proteins [24]. Of note, we also found that long-chain ACs were increased and positively correlated with most gastric mucosal flora, except Hp. In the process of β-oxidation, acylcarnitine transports acyl groups (organic acids and fatty acids) from the cytoplasm into the mitochondria so that they can be broken down to produce energy for cell activities [25]. According to the Human Metabolome Database, the primary function of most long-chain acylcarnitines is to ensure long-chain fatty acid transport into the mitochondria. Blood accumulation of long-chain ACs is a marker of incomplete fatty acid oxidation. Moreau et al. found that acute-on-chronic liver failure was characterized by extra-mitochondrial glucose metabolism through glycolysis and depressed mitochondrial ATP-producing fatty acid β-oxidation, which may contribute to the development of organ failure [10]. The evidence above indicates impaired energy utilization in the microcirculation of cirrhotic patients who did not have other organ complications at enrollment.
We performed a small-sample transcriptome analysis to further validate the alterations in the gastric mucosa; differential gene analysis identified upregulation of genes associated with gastric mucosal malignancy in cirrhotic patients. We also found that the gene encoding DEFA5 was upregulated; DEFA5 contributes to direct antimicrobial activity, mucosal host defense, and immunomodulation [26]. Furthermore, the over-expression of defensins in multiple types of cancer, such as colon cancer, lung cancer, and renal cell carcinoma, suggests a potential involvement of defensins in cancer development [26-28]. Another study indicates that DEFA5 produced by metaplastic Paneth cells may accelerate the initiation of Barrett's esophagus, which is thought to be a precancerous lesion of esophageal adenocarcinoma [29]. An ex vivo animal study shows down-regulation of the DEFA5 gene in gastric cancer cells and that DEFA5 inhibits the growth of gastric cancer cells [30]. The underlying mechanisms of DEFA5 in the initiation and progression of gastric cancer await further study.
Based on the pathway enrichment analysis of DEGs from the gastric mucosa transcriptome, we found that the bile secretion signaling pathway was significantly upregulated in the LC group. We also found that the genes encoding the organic solute transporter α-β (Ostα-Ostβ) were upregulated. Ostα-Ostβ is responsible for transporting bile acids across the enterocyte basolateral membrane into the portal circulation for subsequent renal excretion [31]. CYP3A4 is the major enzyme that catalyzes the hydroxylation of bile acids at various positions, converting them into more hydrophilic and less toxic molecules [32]. Upregulation of these genes is an adaptive change of gastric mucosal cells in response to bile acids.
This study has several limitations. First, it is a single-center study with a limited sample size. Second, the etiology of LC is complex and diverse, which may yield different flora microenvironments and serum metabolites; this heterogeneity may affect the generality of the study's conclusions. Other factors, such as drinking and smoking, were not adjusted for and may impact the gastric flora and damage the gastric mucosa, although relevant antibiotic exposures were excluded.
In conclusion, this study is the first to integrate metabolomic, transcriptomic, and microbial analyses to identify critical metabolites and provide insight into the molecular and metabolic mechanisms underlying the alterations in the gastric mucosa. Importantly, we showed that members of gastric pathogenic taxa accumulated and might originate from the oral cavity. The over-represented bacteria and serum bile acids jointly exacerbated the damage to the gastric mucosa and may even accelerate the Correa's cascade process. Our study highlights the enrichment of the bile secretion pathway in gastric mucosal cells in the context of LC, which may serve as a protective cellular mechanism for preventing gastric lesions.
Conclusions
The major conclusions are that altered gastric flora and elevated bile acids might aggravate injury to the gastric mucosa and even exacerbate the Correa's cascade process. We also provided transcriptomic evidence supporting a protective mechanism that shields gastric mucosal cells from bile acid toxicity. Understanding how the microbiome and metabolites operate in the gastric mucosa should guide the development of new therapeutics for gastric abnormalities in patients with liver cirrhosis.
Fig. 1 Gastric mucosal microbiome dysbiosis between S-C and S-LC. S-C represents gastric mucosa derived from the control group, and S-LC represents gastric mucosa derived from the liver cirrhosis group. In A, B, D-F, red is the S-C group and blue is the S-LC group; *p < 0.05, **p < 0.01, ***p < 0.001. A Increased microbial richness, estimated by the Shannon index. B Gastric mucosal microbiota showed relative clustering between control subjects compared with all patients with cirrhosis. C Relative abundance of microbial species at the phylum level. D The difference in species composition at the genus level. E The LDA value distribution histogram shows the species differing between S-C and S-LC at the genus level (LDA > 4). F The relative abundance of H. pylori was higher in S-C than in S-LC. Statistical significance was determined by the Mann-Whitney U test
Fig. 4 S-C represents gastric mucosa derived from the control group, and S-LC represents gastric mucosa derived from the treatment group. A Volcano map of DEGs. B Venn graph of DEGs. C DEGs enriched in the KEGG pathway. The X-axis represents the rich factor, indicating the ratio of enriched genes to total genes in this pathway. A more prominent rich factor indicates more significant enrichment. ESM1 endothelial cell specific molecule 1, SERPINE1 serpin family E member 1, MUC2 mucin 2, CDX2 caudal type homeobox 2, RBP2 retinol binding protein 2, DEFA5 defensin alpha 5, DEG differential expression gene
Table 1
Demographic characteristics of the groups. Comparisons between patients with cirrhosis and controls. BMI body mass index, MELD model for end-stage liver disease, SD standard deviation
Table 2
Etiology of cirrhotic patients in each group | 2023-09-26T14:19:23.068Z | 2023-09-26T00:00:00.000 | {
"year": 2023,
"sha1": "14307d86cd4189cebd74bb0ec8a84564bdd768b1",
"oa_license": "CCBY",
"oa_url": "https://gutpathogens.biomedcentral.com/counter/pdf/10.1186/s13099-023-00571-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3097a26e5b6732a7244dbe9cd5cfa59d8416e5fb",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237858416 | pes2o/s2orc | v3-fos-license | Fatigue Assessment of Selective Laser Melted Ti-6Al-4V: Influence of Speed Manufacturing and Porosity
Additive Manufacturing represents a promising technology as an alternative to the conventional manufacturing process, with rapid and economic product development, as well as a significant weight reduction and a freeform design. Although the mechanical properties of additively manufactured metals, such as the Ti-6Al-4V alloy, are well-established, a complete understanding of the fatigue performance is still a pending aspiration due to its inherent stochastic complexity and the influence of several manufacturing factors. This paper presents a study of the influence of manufacturing speed and porosity on the fatigue behaviour of a Ti-6Al-4V alloy. To this aim, a numerical simulation of the expected porosity at different laser velocities is performed, together with a simulation of the residual stresses. These numerical results are compared with experimental measurements of residual stresses and a qualitative analysis of the porosities. Then, fatigue strength is experimentally obtained for two different laser speeds and fitted by a probabilistic model. As a result, the probabilistic S–N fields for different laser velocities are found to be similar, with scatter bands nearly coincident, drawing the conclusion that this effect is negligible in comparison with other concurrent ones, such as roughness or surface defects from manufacturing conditions, promoting crack initiation and premature fatigue failure.
Introduction and Motivation
Additive Manufacturing (AM), formerly known as Rapid Prototyping (RP) (see ASTM F2792-12a [1]), has undoubtedly been increasing over the last two decades as a technology that is disrupting current manufacturing processes, and attracts interest from both industrial and academic perspectives [2][3][4][5]. Known also as 3-D printing, AM consists of a progressive consolidation of raw materials, such as powder or wire, in a layer-by-layer fashion, in an opposite approach to traditional manufacturing processes, which are typically based on machining block parts, that is, the subtraction or removal of material [6]. Moreover, this novel technology has several important advantages compared with traditional methods: an agile development product from Computer Aided Design (CAD) to fabrication; a significant reduction of weight in the final design (with potential reductions of up to nearly 50% [6,7]); and a geometric freedom that allows the production of parts otherwise not possible with conventional methods. Additionally, AM may lead to the reduction of carbon emissions compared with traditional manufacturing processes, due to the use of lighter weight parts [6,8].
Polymeric materials were originally preferred for producing additively manufactured parts [5], but nowadays both non-metallic (composites, ceramics) and metallic materials are usually employed. The titanium (Ti) alloys, in particular, are of great interest because of their increasing use in the aerospace industry due to their weight saving, operating temperature, corrosion resistance and compatibility with biological and composite materials. Unfortunately, their higher cost compared with other alternatives hitherto represents the most important limitation [6,[9][10][11][12].
Three different technologies were developed to produce metal additively manufactured parts: Laser Beam Melting (LBM), Laser Metal Deposition (LMD) and Selective Laser Melting (SLM), the latter being the most commonly used [13,14]. In this process, the energy of the laser source is applied to melt powder, as a raw material, within a powder-bed layer. Then, the 3D geometrical design is built up by recoating a new powder-layer and subsequent melting [5]. Nowadays, these manufacturing technologies for developing structural alloys are particularly useful, since the intrinsic heat can be directly used to trigger the chemical reactions, such as those implied in precipitation hardening alloys [15,16].
There are different works in the literature that have focused on researching the influence of different additive manufacturing parameters on the fatigue performance of Ti-6Al-4V, such as the microstructure, the build direction, the residual stresses and the porosity. In the first case, Nalla et al. [17] have investigated the influence of the microstructure on both bimodal and coarser lamellar types, concluding that the latter improved the fatigue behaviour in the HCF zone, whereas Thijs et al. [18] studied the influence of the scanning parameters and scanning strategy on the microstructure during the SLM process. In the second case, Edwards et al. [6,19] presented a study on the effect of the build direction, revealing that the cracks oriented perpendicular to the build layers provide enhanced fatigue crack growth behaviour. Regarding the residual stresses, several researchers [20,21] have found that a high temperature pre-heating during the additive manufacturing process may reduce thermal gradients. Lastly, [22] evidenced that the failure initiation in SLM or EBM manufactured titanium alloys is governed by porosity and lack of fusion. Nevertheless, previous works have not investigated the influence of speed manufacturing, which would be conducted with different porosities and could imply different fatigue behaviours.
The aim of this paper is to study the influence of different laser velocities on the porosity of additively manufactured specimens of the Ti-6Al-4V alloy and to evaluate the fatigue performance associated with those porosities. Firstly, numerical simulations were developed to study the expected porosity considering different laser velocities, together with residual stresses inherent to the manufacturing process. Secondly, these numerical results were compared with experimental measurements of residual stresses and a qualitative analysis of the porosities. Thirdly, a tensile test was conducted on specimens produced at two different laser velocities in order to evaluate its influence on the mechanical properties. After that, a fatigue experimental campaign was carried out on specimens at two different laser speeds, and the results were evaluated according to a probabilistic S-N model developed by [23], in contrast to the deterministic S-N models commonly used in the literature [24-26], in order to account for the inherent and non-negligible scatter of titanium fatigue tests [6, 27-31].
The paper is structured as follows: in Section 2, the material and methods employed in this study are detailed, including the material and geometry selected (Section 2.1), the manufacturing conditions (Section 2.2) and the experimental procedures followed in the testing (Section 2.3). Section 3 details the numerical study of the porosity and the residual stresses together with the experimental results obtained. Section 4 is focused on both the tensile (Section 4.1) and fatigue characterization of two different laser velocities (Section 4.2). Section 5 presents an interpretation of the experimental results and, finally, Section 6 summarises the main conclusions drawn from this work.
Materials and Methods
This section describes the material and geometry of the specimens to be used in the experimental campaign in this work. Then, the manufacturing conditions are also detailed, distinguishing different batches of samples fabricated for porosity, tensile and fatigue characterization. Finally, the experimental testing procedures are exposed.
Material and Geometry
The specimens were produced using the SLM technique employing a titanium-based alloy, Ti-6Al-4V. The dimensions of the specimen are indicated in Figure 1. Note that the build direction Z-axis is indicated, starting at the support.
Manufacturing Conditions
All the specimens were provided by the manufacturer Optimus3D (Vitoria-Gasteiz, Spain) in the same orientation. The SLM parameters selected are detailed in Table 1. Three different batches of additively manufactured Ti-6Al-4V samples were produced: Batch 1 for porosity characterization, Batch 2 for tensile characterization and Batch 3 for fatigue characterization. The samples from Batch 1 were sliced using a diamond disc cutter, removing a layer thickness of no less than 1 mm in order to avoid the effect of the cutting process on the microstructure. Silicon carbide papers with grades of 80, 240, 600, 1200 and 2500 (in this order) were used for grinding, with a continuous water stream flushing away the loose abrasive particles. Then, a manual polishing process was applied to the surfaces using a diamond suspension of 9, 3 and 1 microns particle size on a Remet LS1.
It is worth mentioning that neither heat treatment nor machining was applied to the specimens before the experimental campaign, since the aim of this work was to study only the laser velocity, without any additional concomitant effect that could mislead the interpretation of the experimental results. After that, the porosity was qualitatively analysed using both optical and scanning electron microscopy (SEM). The entire surface of the specimens was analysed with a resolution of 500 µm in order to identify the zones with larger pores. Different scanning zooms (10 and 50 µm) were then applied to focus on the zones where pores had been observed. In cases where the 500 µm resolution was not enough to identify pores in any part of the specimen, 50 µm was used to check the entire surface. It is important to remark that smaller pores could not be identified at the applied resolution; still, the authors assumed that the influence of those pores on the fatigue life could be disregarded, compared to the pores identified in this study.
Residual Stresses Measurement Procedure
The measurement of the residual stresses was performed by way of the hole drilling strain gage method according to ASTM E837-13a [32] using an MTS-300 RS measurement machine supplied by SINT Technology, as can be seen in Figure 2. The parameters selected included a drilling speed of 0.2 mm/min, a drill delay of 2-3 s and an acquisition delay set to 5-10 s. The RESTAN software (SINT Technology) was used to obtain the residual stresses.
Tensile and Fatigue Characterization Procedure
The tensile tests were performed according to EN 2002 [33]. A strain rate of 0.05 mm/min was used to determine the yield stress σ_ys at 0.2% offset, and 2%/min was used for the tensile strength σ_r. The displacement was measured using DIC equipment.
The fatigue tests were conducted for sinusoidal load at R = 0.1 and at ambient temperature, according to ASTM E466-07 [34], at a frequency of 6 Hz with stress ranging from 100 to 600 MPa, in a servohydraulic MTS Bionix. As previously mentioned, a total of 7 samples were tested for laser velocity v = 1200 mm/s and 8 samples for laser velocity v = 1900 mm/s.
Study of Porosity and Residual Stresses
This section presents the results obtained from the numerical and experimental studies related to the estimation and measurement of specimen porosity and residual stresses.
Numerical Study: Expected Porosity and Residual Stresses
Two different numerical studies were conducted: a porosity simulation for each of the different laser velocities considered in Batch 1, and a finite element simulation for the estimation of the residual stresses.
In the first case, the tool known as "Additive Science", from the Additive Suite developed by ANSYS [35], was used to simulate the expected porosity at different laser velocities. Constraints on the dimensions of the melt pool were introduced as an input (see Figure 3). Based on these constraints, the geometrical mean results provided by Additive Science for each velocity are summarised in Table 2. Once the dimensions of the melt pool had been estimated, the software provided the expected porosity for each of the laser velocities considered, as shown in Table 3, together with the corresponding energy density in the SLM process. The porosity was directly estimated by ANSYS as a function of the applied energy density, which depends on the power, the velocity and the dimensions of the melt pool. It is also worth mentioning that this software only considered porosity due to lack of fusion, which is predominant at high velocities; porosity due to spherical vaporization was discarded. Then, in order to identify the kind of porosity expected to be found experimentally, the relationship between energy density and pore type proposed by Dilip et al. [36] was used: spherical pores correspond to energy densities >60 J/mm³, a lack of pores (fully dense material) corresponds to energy densities in the interval 55-60 J/mm³, and sharper pores correspond to energy densities <55 J/mm³. According to these limits, only the velocity of 800 mm/s was expected to produce spherical pores, with a low percentage of porosity. On the contrary, velocities higher than 1300 mm/s were expected to give sharper pores, with the porosity percentage increasing as the velocity increases. According to the simulation performed, middle velocities of 1100 to 1200 mm/s were not expected to produce pores, but fully dense structures. Finally, it is worth mentioning that the reported porosity percentages are not experimental measurements, but results obtained with the Additive Suite tool developed by ANSYS in combination with the work of Dilip et al. [36], who studied the influence of processing parameters on the evolution of melt pool and porosity in Ti-6Al-4V alloy parts fabricated by selective laser melting. In the second case, the Additive Science tool allowed the residual stresses to be numerically simulated, as can be seen in Figure 4, where the equivalent von Mises stress is depicted along the sample.
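For context, porosity-versus-speed arguments of this kind are often summarised through the volumetric energy density E = P / (v · h · t), i.e. laser power divided by the product of scan speed, hatch spacing and layer thickness. The sketch below combines that standard expression with the Dilip et al. [36] bands quoted above; the power, hatch spacing and layer thickness values are placeholders rather than the parameters of Table 1, so the printed classifications are only illustrative:

```python
# Illustrative volumetric energy density and expected pore type for a few laser speeds.
def energy_density(power_w, speed_mm_s, hatch_mm=0.12, layer_mm=0.03):
    """Volumetric energy density in J/mm^3 (placeholder hatch spacing and layer thickness)."""
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

def expected_pore_type(e_j_mm3):
    """Classify the expected porosity using the Dilip et al. bands quoted in the text."""
    if e_j_mm3 > 60:
        return "spherical pores"
    if e_j_mm3 >= 55:
        return "fully dense (no pores)"
    return "sharp, lack-of-fusion pores"

for v in (800, 1200, 1900):                       # laser speeds discussed in the paper, mm/s
    e = energy_density(power_w=250, speed_mm_s=v)  # 250 W is a placeholder laser power
    print(f"{v} mm/s -> {e:.1f} J/mm^3 -> {expected_pore_type(e)}")
```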
Porosity
The porosity was qualitatively analysed using both optical and scanning electron microscopes (SEM) for each of the laser speeds considered in Batch 1, as can be seen in Figure 5. As is well known, the geometrical form of the pores is expected to be heterogeneous depending on the laser velocity, which occurred in this case, ranging from spherical (Figure 5a) at low velocities and high laser power to irregular and sharper (Figure 5d-f) at large velocities and low laser power, in accordance with the simulated results in Section 3.1. The former are known to be caused by improper settings or processing parameters [12,37,38], while the latter are usually related in the literature to argon gas entrapped during the manufacturing process [39-41]. Furthermore, the middle velocities were expected to produce negligible porosity in comparison with the other velocities (see Table 2), which is corroborated by the micrographs in Figure 5b,c.
Residual Stresses
Figure 6 illustrates the experimental results of the maximum and minimum residual stresses along the distance for both velocities considered in Batch 1. The maximum values of the residual stresses for the higher velocity evolve steadily along the distance, while for the lower velocity a peak is present at the 0.1 mm distance. The same behaviour is observed for the minimum residual stress, but at a lower order of magnitude. In general terms, there is an inverse trend between the development of residual stresses and the laser speed; that is, the values of both maximum and minimum residual stresses are higher for the lower velocity until a certain distance of almost 0.6 mm, where both trends tend to converge.
Study of Tensile and Fatigue Behaviour
Once the expected porosity and residual stresses had been studied and compared with the experimental results, the tensile and fatigue characterization was conducted, as described in this section.
Tensile Behaviour
Engineering stress-strain curves for both velocities considered in Batch 2 are illustrated in Figure 7. As can be seen, the linear-elastic regime is approximately the same in both cases, with the yield strength increasing as the laser speed decreases. In the plastic zone, the samples manufactured at the lower laser velocity also show better tensile performance for the same strain value. Table 4 summarises the mechanical constants of the additively manufactured specimens for both velocities.
Fatigue Behaviour
Finally, the fatigue assessment of the manufactured Ti-6Al-4V samples from Batch 3 was conducted according to the probabilistic model developed by [23]. In this model, the p-percentile curves in the S-N field are given by the following Weibull distribution: p = 1 − exp{−[((log N − B)(log Δσ − C) − λ)/δ]^β}, with B as the asymptote for the lifetime, that is, the threshold number of cycles below which no fatigue failure occurs, C as the asymptote for the stress, that is, the fatigue strength, and λ, δ and β as the location, scale and shape Weibull parameters, such that (log Δσ − C)(log N − B) > λ. Figure 8 depicts the estimated S-N fields for both velocities considered in this batch, which were estimated with the ProFatigue software [42]. As can be seen, the inherent scatter of the fatigue results is non-negligible; thus, a probabilistic model is more suitable than a deterministic one. The fatigue performance at both velocities exhibits the same behaviour, with no differences in the lifetime cycle for given stress ranges, and the scatter bands are approximately similar. For this reason, manufacturing speed does not seem to have a significant effect on the fatigue performance of additively manufactured Ti-6Al-4V. Finally, the experimental campaign retrieved from [6], corresponding to a lower laser velocity of v = 200 mm/s, was also fitted with the Castillo-Canteli model and is superposed in Figure 9 for comparison purposes. As a result, a wide range of laser speeds, from 200 to 1900 mm/s, was considered, providing robustness to the final conclusions drawn from this work. Indeed, despite the lower laser velocity, the resulting S-N field is approximately the same as with the previous velocities of 1200 and 1900 mm/s, reinforcing the conclusion that there is a negligible effect on the fatigue performance.
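A minimal sketch of how a p-percentile curve of the probabilistic S-N field written above can be evaluated once the five parameters are known; the parameter values below are placeholders rather than the ProFatigue estimates of this study, and natural logarithms are assumed:

```python
# Evaluate the p-percentile stress range of the Weibull S-N model
# p = 1 - exp{-[((log N - B)(log ds - C) - lambda) / delta]^beta}.
import numpy as np

def percentile_stress_range(n_cycles, p, B, C, lam, delta, beta):
    """Stress range on the p-percentile S-N curve (valid only for log N > B)."""
    log_n = np.log(n_cycles)
    # Weibull quantile of the normalized damage variable (log N - B)(log ds - C)
    v = lam + delta * (-np.log(1.0 - p)) ** (1.0 / beta)
    log_ds = C + v / (log_n - B)
    return np.exp(log_ds)

# Example: median (p = 0.5) curve at two lifetimes, with placeholder parameters
for n in (1e5, 1e6):
    print(n, percentile_stress_range(n, 0.5, B=2.0, C=4.5, lam=1.0, delta=3.0, beta=2.5))
```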
Discussion
In this work, different laser speeds were considered for additively manufactured Ti-6Al-4V alloy in order to evaluate their influence on the fatigue performance, while avoiding any other additional concurrent effect (heat treatment, machined surfaces, etc.). The experimental fatigue results were fitted with the probabilistic model developed by [23], and the resulting S-N curves were approximately the same, with nearly coincident scatter bands. In other words, no influence of the laser speed was found on the fatigue lifetime of the Ti-6Al-4V specimens. To reinforce this observation, an external experimental campaign retrieved from [6] (at a lower velocity than those previously considered, v = 200 mm/s) was also fitted and compared with the previous results. Despite the wide range of laser velocities, from 200 to 1900 mm/s, the resulting S-N fields are approximately the same and the scatter bands are nearly coincident. However, this evidence does not allow the conclusion that the effect of laser speed is negligible for the fatigue lifetime of the specimens, since other concurrent effects could mask the effects associated with the laser velocity and mislead the conclusions. The authors postulate that the high surface roughness obtained in the additive manufacturing process (Ra = 3.27, Rz = 15.83), together with the small differences in porosity between the two laser speeds, led to all cracks originating from the surfaces of the specimens, obscuring any potential effect of the porosity on the fatigue lifetime. Future work could consider prior polishing of the specimens in order to improve the surface roughness, thus inducing failure to occur from the pores.
The authors wish to remark that the tensile and fatigue properties presented in this paper relate to only one SLM building direction. Taking into account that the material properties of metals manufactured by SLM cannot be considered isotropic, that is, they depend on the testing direction, the conclusions of this paper hold only for the manufacturing direction in which the specimens were made, since different results could be obtained for other directions. Furthermore, the relationship between manufacturing speed and the resulting mechanical properties is not straightforward in additively manufactured samples, as other variables are involved, such as the melt pool depth and the temperature required for complete melting or evaporation; the effects of these variables were not considered in this study but are proposed as the basis for further work.
Conclusions
− The experimental values of the residual stresses increase for lower laser speeds, for both maximum and minimum values.
− The expected porosity was simulated for different laser velocities, establishing limiting energy densities to identify the kind of pores: spherical, sharper or absent.
− The porosity was qualitatively analysed for seven different velocities, corroborating that for lower speeds the pores are spherical, while for larger speeds they are sharper and more irregular. For middle velocities, no pores were detected.
− Tensile experimental results at two different laser speeds showed an influence on the mechanical properties of Ti-6Al-4V alloys, especially in the plastic regime.
− A probabilistic model was used to estimate the fatigue lifetime for two different velocities, concluding that this effect is negligible in comparison with other concurrent variables, such as surface defects or roughness.
− The influence of speed manufacturing will only be non-negligible when other concurrent effects, such as those caused by the machining process or heat treatments, are softened or relaxed. | 2021-09-01T15:09:34.381Z | 2021-06-25T00:00:00.000 | {
"year": 2021,
"sha1": "bf06d8725b0caff8f1dff17b83f9c80cccb1cb66",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4701/11/7/1022/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ba3d7b230b1087ea42e9e71edd02e9b1ab0d11d4",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
19053471 | pes2o/s2orc | v3-fos-license | The iterative nature of person construal: Evidence from event-related potentials
Abstract Recently, a dynamic-interactive model of person construal (DI model) has been proposed, whereby the social categories a person represents are determined on the basis of an iterative integration of bottom-up and top-down influences. The current study sought to test this model by leveraging the high temporal resolution of event-related brain potentials (ERPs) as 65 participants viewed male faces that varied by race (White vs Black), fixating either between the eyes or on the forehead. Within face presentations, the effect of fixation, meant to vary bottom-up visual input, initially was large but decreased across early latency neural responses identified by a principal components analysis (PCA). In contrast, the effect of race, reflecting a combination of top-down and bottom-up factors, initially was small but increased across early latency principal components. These patterns support the DI model prediction that bottom-up and top-down processes are iteratively integrated to arrive at a stable construal within 230 ms. Additionally, exploratory multilevel modeling of single trial ERP responses representing a component linked to outgroup categorization (the P2) suggests change in effects of the manipulations over the course of the experiment. Implications of the findings for the DI model are considered.
Introduction
Traditional models of person perception hold that, upon seeing a person, perceivers rely on visual information to place him or her into a relevant social category (e.g. male or female) (Fiske and Neuberg, 1990;Bodenhausen and Macrae, 1998). Activation of social categories is assumed to be automatic, supported by evidence from a variety of behavioral paradigms showing that the activation of category-related information occurs even when perceivers are under cognitive load (Macrae et al., 1994); when categories are irrelevant to the participant's task (Fazio et al., 1995); and when category-related primes are presented subliminally (Devine, 1989;Lepore and Brown, 1997). Activation of social categories subsequently impacts a number of downstream consequences, including stereotype activation (Hehman et al., 2013), evaluative associations (Livingston and Brewer, 2002), non-verbal behavior (Dovidio et al., 1997) and trust (Stanley et al., 2011).
Recently, research on person construal has focused on the antecedents of social categorization rather than its consequences (Kawakami et al., 2017). In particular, the dynamic interactive theory of person construal (DI Model) proposes a more complex process whereby categorization decisions are not solely dictated by the visual information being perceived (Fodor, 1983), but rather reflect integration of bottom-up and top-down processes . This idea incorporates knowledge about the organization of neural networks that allow for top-down inputs on primary visual cortical areas (Di Russo et al., 2003;Collins and Olson, 2014;Vetter and Newen, 2014;Teufel and Nanay, 2017) and the bidirectional interplay between cognition and perception (Gilbert and Li, 2013).
The DI model suggests that, when faces are the objects of perception, perceptual cues in target faces partially activate multiple competing social categories, which resolve over iterations that cycle information through higher-order and lowerorder systems to arrive at a stable representation Stolier and Freeman, 2016a,b). According to this model, the active representation of the face (for example, whether the person is White or Black) is initially informed largely by bottom-up processes operating on information in visual cortex and primarily reflecting objective sensory information, such as skin tone and hair texture. Subsequently, this initial, tentative representation activates higher-order neural systems that access learned information, such as stereotypes, expectations, motivations and goals of the perceiver, which then influence the active representation in a top-down manner. For example, seeing a racially ambiguous person in a business suit versus a janitor's uniform changes the likelihood that he or she will be categorized as Black or White because of learned associations between social status and race .
This integrative process provides a mechanism by which top-down variables can influence early social categorization processes, consistent with findings from a number of recent studies. For example, conditions of economic scarcity (Rodeheffer et al., 2012;Ho et al., 2013;Krosch and Amodio, 2014), political orientation (Krosch et al., 2013), semantic labels (Tskhay and Rule, 2015) and motivation to be unbiased (Chen et al., 2014) all have been shown to affect the categorization of racially ambiguous faces.
The current study expands on this prior work in several ways. Whereas many previous studies have used faces morphed along a racial continuum to vary the visual information they convey (Krosch and Amodio, 2014), the current study included a visual fixation manipulation to change the bottom-up influence of visual information without changing the stimuli themselves. Previous research has shown that varying fixation is effective in changing the extent to which racial category information is extracted from faces (Hills and Lewis, 2006, 2011). Here, fixation location varied between the eyes and the forehead. Fixating between the eyes is the default in spontaneous face processing (Kawakami et al., 2014; Peterson and Kanwisher, 2015), and therefore is thought to convey more category-relevant information (Hills and Lewis, 2006). In contrast, the forehead is an unusual fixation location that conveys little category-relevant information. In this way, initial attention to sensory information could be manipulated without altering the faces, thereby facilitating examination of the effect of bottom-up processes.
Social category information also was manipulated by presenting faces that varied by race. Perceiving race involves both bottom-up processes, including differences in brightness and contrast related to skin tone and spatial frequencies reflecting variability in facial physiognomy (Hayward et al., 2008;Zhao and Bentin, 2011), and top-down processes including accessing learned information that associates differences in facial features with distinct racial categories (Levin and Banaji, 2006). The influence of top-down processes in social categorization is analogous to the way learning and verbal labels encourage the perception of a continuous band of light frequencies as separable colors (Collins and Olson, 2014). Here, bottom-up differences were minimized by converting the images to gray scale and adjusting luminance. While not a pure distinction, incorporating experimental manipulations that differentially rely on bottom-up and top-down processes allowed us to examine the time course of their integration. In accordance with the DI model , we expected the effect of fixation (mainly representing differences in bottom-up processes) to be large upon initial perception of a face but to decrease as person construal continued, whereas the effect of race (representing differences in both top-down and bottom-up processes) was expected to be small initially but to increase as processing iterations unfold.
Event-related brain potentials (ERPs) were recorded to allow observation of this theorized integration over time [see Amodio et al., (2014), for background on the ERP approach]. Two methods of analyzing ERP data were used to test hypotheses derived from the DI model: (1) a traditional approach examining mean amplitude of a scalp-recorded component previously associated with social categorization (the P2 or P200; Ito and Bartholow, 2009), and (2) a principal components analysis (PCA) approach examining a sequence of underlying components contributing to early face processing.
The P2 generally peaks 150-250 ms post-stimulus along the scalp midline and has been associated with early orienting of attention to threatening or distinctive stimuli (Correll et al., 2006; Kubota and Ito, 2007). Outgroup faces consistently elicit larger P2s than ingroup faces (Willadsen-Jensen and Ito, 2008; Amodio, 2010; Dickter and Kittel, 2012). This occurs regardless of task relevance (Ito and Urland, 2003, 2005; Kubota and Ito, 2007; He et al., 2009) or context (Correll et al., 2006; Dickter and Bartholow, 2007; Willadsen-Jensen and Ito, 2008), consistent with the notion that ingroup-outgroup distinctions occur spontaneously. Importantly, prior research indicates the P2 is sensitive to category distinctions, not simply to low-level perceptual features of faces. Specifically, Dickter and Bartholow (2007) found that while Black faces elicited larger P2 amplitude than White faces among White participants, the opposite pattern emerged among Black participants. We expected to replicate this well-established effect, such that Black faces elicit larger P2s than White faces in a predominantly White sample.
A concern with the traditional measurement of the P2 as mean amplitude within a particular time window is that it effectively removes the inherently multivariate nature of the ERP, eliminating its main advantages-its millisecond-level temporal resolution and continuous measurement over time. Therefore, we also used PCA to investigate predicted changes in the effects of our manipulations over a sequence of quickly unfolding neural responses that both precede and comprise the P2. The scalp-recorded ERP waveform represents the summation of neural activity that overlaps in time and space (Luck, 2005). PCA allows decomposition of this waveform into unique clusters of variance that meaningfully reflect distinct, underlying psychological processes (Dien and Frishkoff, 2005). Based on the DI model, we hypothesized that fixation, primarily representing differences in the influence of bottom-up processing, would have a large effect on early components but then diminish in subsequent components. Conversely, we hypothesized that race, which operationalizes more top-down differences in categorical perception, would have a small effect initially but then increase as neurocognitive iterations progressed.
Here, a multilevel modeling (MLM) approach was used to statistically test the effect of race and fixation on early-latency neural responses to faces. MLM has been advocated as more appropriate than repeated-measures ANOVA for psychophysiological data (Kristjansson et al., 2007;Vossen et al., 2011;Tibon and Levy, 2015;Tremblay and Newman, 2015), because (1) MLMs have more relaxed assumptions regarding sphericity, which psychophysiological data often violate; (2) MLMs allow simultaneous parsing of variance associated with different grouping variables, including subjects, electrodes or stimulus items, thereby reducing error variance; (3) MLMs handle unbalanced or missing data, such that individuals with missing observations can be retained in the analysis; and (4) MLMs model effects of both categorical and continuous predictors simultaneously. These advantages make MLM a highly flexible and powerful analytic technique for ERP data (Page-Gould, 2017).
Traditionally, ERP responses are averaged over tens or hundreds of trials to extract the signal of interest (e.g. amplitude of a given component) from background EEG responses unrelated to stimulus processing (Luck, 2005). Given MLM's ability to handle unbalanced data and parse variance in a way that reduces error variance, data from individual trials can be modeled separately, thereby permitting examination of changes in the effects of interest over the course of many trials. While not directly pertinent to testing DI model predictions (which focus on events within trials), we present exploratory findings using this across-trials approach as a way of investigating stability and change in P2 amplitude in response to our manipulations over the course of the experiment.
Finally, the current study employed two different tasks to examine whether the task-relevance of person construal affects the applicability of the DI model. The first task was based on traditional evaluative priming paradigms (Fazio et al., 1995;Livingston and Brewer, 2002), in which faces are irrelevant to the task of categorizing words as positive or negative. In contrast, faces were directly task-relevant in the second task as participants were asked to simply categorize them by race.
Participants
Sixty-five individuals (34 women, 31 men) participated in exchange for credit towards a research requirement in an Introductory Psychology course, or for monetary compensation. Participants ranged from 18 to 48 years old (M = 20.4). Sixty self-identified as White, two identified as Asian and three identified as more than one race. None identified as African-American.
Measures and procedure
Two computer tasks were administered using E-Prime (Psychology Software Tools, Inc., USA). Participants were seated approximately 40 inches from a 20-inch CRT monitor refreshing at 60 Hz. EEG data were recorded while each participant first completed the evaluative priming task and then the race categorization task. 1
Evaluative priming task. The evaluative priming task was modified from tasks used previously (Fazio et al., 1995) and is designed to measure bias in evaluative associations with African-American and European-American men. During each trial, a fixation cross was presented in the center of the screen (jittered: either 500, 700 or 900 ms), followed by a face prime (310 ms), then a blank screen (50 ms) and then a target word (200 ms), followed by a visual mask (600 ms). Prime stimuli consisted of photographs of Black and White men's faces with neutral expressions (taken from Ma et al., 2015). In order to reduce differences in low-level perceptual features across faces, the photographs were converted to gray scale and the brightness and contrast of the images were adjusted to be roughly equivalent across stimuli; differences could not be completely eliminated, however. Additionally, the location of the face prime varied so that the fixation cross preceded either the middle of the forehead or between the eyes (each face stimulus was presented once in each fixation position). Target stimuli consisted of positive and negative words that were somewhat visually degraded (see Supplementary Material for a complete list). Participants identified the valence of the target word using two keys on a ms-accurate keyboard with the index fingers of each hand; response mapping varied randomly across participants. Failure to respond within 800 ms of target onset elicited a 'TOO SLOW' warning displayed for 1000 ms. The ITI was 600 ms.
Participants completed 16 practice trials, followed by 512 experimental trials. Trial type (e.g. Black-eyes-positive word, Black-eyes-negative word, etc.) varied randomly, with 64 trials of each type in total. The same eight positive and eight negative words were used in the practice and experimental trials. Thirtytwo faces of each race were used in the experimental trials; a different set of faces was used in the practice trials.
Race categorization task. In the race categorization task, participants viewed the same faces as in the experimental trials of the priming task, again presented in both fixation positions.
Participants were asked to simply categorize the faces by race using two buttons on a keyboard. During each trial, a fixation cross was presented (jittered: 500, 700 or 900 ms), followed by a face (270 ms) presented either in the eyes-fixation or forehead-fixation position, which was then masked (530 ms). Failure to respond within 800 ms following target face onset elicited a 'TOO SLOW' warning displayed for 1000 ms. The ITI was 600 ms. Participants completed eight practice trials followed by 256 experimental trials. Trial type varied randomly, with 64 trials of each type being presented in total.
Electrophysiological recording and processing
EEG data were collected using 20 tin electrodes embedded in a stretch-lycra cap (Electro-Cap International, Eaton, OH) and placed in standard 10-20 locations (American Encephalographic Society, 1994). 2 All scalp electrodes were referenced online to the right mastoid; an average mastoid reference was derived offline. Signals were amplified with a Neuroscan Synamps amplifier (Compumedics, Charlotte, NC), filtered on-line at 10-40 Hz at a sampling rate of 1000 Hz. Impedances were kept below 10 kΩ. Ocular artifacts (i.e. blinks) were corrected from the EEG signal using a regression-based procedure (Semlitsch et al., 1986). Trials containing voltage deflections exceeding ±75 microvolts (µV) were discarded, as were trials that contained large muscle artifacts as determined by visual inspection.
P2 quantification. Grand averages (ERP activity averaged across trials and participants) revealed a positive-going deflection peaking roughly 160 ms following the presentation of a face and maximal at the centro-parietal midline (CPz), consistent with previous characterizations of the P2 during face processing (Ito and Urland, 2005;Dickter and Bartholow, 2007). The P2 was quantified in both tasks as the mean amplitude from 130 to 190 ms post-face onset (30 ms before and after the peak at CPz) at seven central and centro-parietal locations (Cz, C3, C4, CPz, CP3, CP4 and Pz).
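As an illustration of this quantification step (not the authors' actual pipeline), single-trial P2 mean amplitudes can be extracted from an epoched data array as follows; the array layout and the assumption that the epoch starts at face onset are simplifications:

```python
# Mean amplitude 130-190 ms post-face onset, averaged over the listed electrodes,
# from a (trials x channels x samples) array sampled at 1000 Hz.
import numpy as np

P2_CHANNELS = ["Cz", "C3", "C4", "CPz", "CP3", "CP4", "Pz"]

def p2_mean_amplitude(epochs, channel_names, srate=1000, window_ms=(130, 190)):
    chan_idx = [channel_names.index(ch) for ch in P2_CHANNELS]
    start, stop = (int(t * srate / 1000) for t in window_ms)
    # Average over the time window and over electrodes; one value per trial (in microvolts)
    return epochs[:, chan_idx, start:stop].mean(axis=(1, 2))
```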
Statistical approach. The R package 'lme4' (Bates et al., 2015b) was used to fit multilevel models for data analysis. We allowed for covariances between random slopes and intercepts, using model-specification procedures described by Bates et al., (2015a) to determine the most appropriate random effects structure. This involved starting with a maximal model and then removing random slopes based on the magnitude of the correlations between random effects. Estimated random effect variances and correlations can be found in the Supplementary Material. Satterthwaite approximations were used to estimate degrees of freedom and to obtain two-tailed P values; in situations where the degrees of freedom were above 200, we report the results as z statistics. Data and code used for analysis can be found at https://github.com/hiv8r3/ERP-fix-analyses.
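A simplified Python analogue of these models (the paper itself used R/lme4): a mixed model of single-trial P2 amplitude with race and fixation as predictors and a random intercept and slopes by subject. Crossed random effects for stimuli and the electrode-within-subject structure described above are omitted for brevity, and the column names are assumptions:

```python
# Sketch of a multilevel model of P2 amplitude with statsmodels.
import statsmodels.formula.api as smf

def fit_p2_mlm(df):
    """df columns (assumed): p2, race, fixation, subject."""
    model = smf.mixedlm("p2 ~ race * fixation", data=df,
                        groups=df["subject"], re_formula="~race + fixation")
    return model.fit()
```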
Results
Only trials on which correct responses were given were used in analyses. Reaction time (RT) and ERP data from the priming task for two subjects were discarded because accuracy was > 3 SDs below the mean (65.6% and 50.2%, respectively). Data from the categorization task for one subject were similarly discarded (60.9% accurate). Mean RTs and accuracy rates are presented in Table 1.
Reaction time
Evaluative priming task. Race of the face prime, valence of the target word and fixation were included in the model as predictors (dummy-coded: Black = 0, White = 1; negative = 0, positive = 1; eyes = 0, forehead = 1). The most appropriate random effects structure was determined to be one in which the intercept and effect (slope) of word valence varied by subject, and the intercept varied by stimulus. The Race × Word Valence interaction was significant, b = 5.86, z = 2.37, P = 0.018. The pattern of means associated with this interaction indicated that responses were faster to positive than negative words following both Black and White faces (Figure 1), but this facilitation effect was slightly (but significantly) larger following Black faces (M = 15.5 ms) compared to White faces (M = 12 ms). A main effect of Fixation also emerged, b = −3.89, z = −2.23, P = 0.026, such that words were evaluated more quickly following a forehead fixation than an eyes fixation. No other effects were significant; additional analyses can be found in the Supplementary Material.
Race categorization task. Race of the face prime and fixation were included as predictors (dummy-coded as before). The most appropriate random effects structure was determined to be one in which the intercept and slopes of race and fixation (but not their interaction) varied by subject, and the intercept varied by face stimulus. A main effect of Fixation, b = 5.78, z = 2.91, P = 0.004, a marginal effect of Race, b = 5.68, t(124) = 1.80, P = 0.074, and no interaction, b = −0.38, z = −0.144, P = 0.886, emerged (Figure 1).
Primary ERP results: effects within trials
Traditional P2 amplitude analysis. We first tested the effects of race (Black = 0, White = 1) and fixation (eyes = 0, forehead = 1) on P2 amplitude using a traditional mean amplitude approach. Grand average ERP waveforms depicting the P2 are given in Figure 2. The random effects structure allowed the intercept, slopes of race, fixation and their interaction to vary by subject and the intercept to vary by electrode nested within subject. A significant main effect of Race was estimated in both the priming task, b = −0.79, t(61.62) = −5.83, P < 0.001, and the categorization task, b = −1.20, t(63.36) = −4.81, P < 0.001, such that Black faces elicited larger (more positive) P2s than White faces. A significant main effect of Fixation also emerged in both the priming task, b = −0.39, t(62.15) = −2.03, P = 0.047, and the categorization task, b = −0.69, t(63.48) = −2.95, P = 0.005; larger P2s were elicited in the eyes-fixation than the forehead-fixation condition. The Race × Fixation interaction was not significant in either task, Ps > 0.34. Additional analyses can be found in the Supplementary Material.
Principal components analysis. The primary hypothesis of the DI model (i.e. that the influence of variables representing bottom-up and top-down contributions changes as person construal progresses within individual trials) was tested by subjecting ERP responses to a sequential temporospatial PCA (Dien and Frishkoff, 2005), using the Matlab PCA ERP Toolbox (Dien, 2010). Separate PCAs were computed for the categorization and priming task data. Given that the presentation of the face was interrupted in the evaluative priming task after 360 ms, and because we were interested only in early person construal processes, we examined PCA components that emerged within 300 ms of face presentation. Details concerning extraction of components can be found in the Supplementary Materials. To facilitate interpretation of the PCA results, the portion of the original data set represented by each temporospatial factor combination was reconstructed (i.e. in microvolts) into factor waveforms by multiplying factor scores by their corresponding loadings and SDs. These reconstructed factor waveforms were then ordered temporally (henceforth referred to as Virtual Factors [VFs] 1 through 3 representing their temporal order) and viewed in comparison with the grand average ERPs (Figure 3).
To investigate the effects of race and fixation on each virtual factor, the mean amplitude of each factor was calculated separately for each condition and individual within the two tasks. In the evaluative priming task, VF-1, which peaked at 115 ms poststimulus onset and was maximal at Pz, was quantified as mean amplitude 80-140 ms post-stimulus. VF-2, which peaked at 148 ms and was maximal at FCz, was quantified as mean amplitude 115-180 ms post-stimulus. VF-3 peaked at 179 ms and was maximal at CPz, and was quantified as mean amplitude 145-230 ms post-stimulus. 3 Mean VF amplitudes were subjected to MLMs with Race and Fixation (but not their interaction) as predictors and a random effects structure where the intercept and slopes of both effects varied by subject and the intercept varied by electrodes nested within subject. Predictors were effect-coded. Results across the three models revealed an increase in the (absolute-value) effect of race across the three virtual factors, while the (absolute value) effect of fixation decreased across the three virtual factors (Table 2, Figure 4). Specifically, the 95% confidence intervals for each estimate indicate a similar magnitude of the effect of Race on VF-1 and VF-2 but a statistical increase in the magnitude of the effect of Race from VF-2 to VF-3. In contrast, the magnitude of the effect of Fixation decreases from VF-1 to VF-3, although the magnitude of the effect on VF-2 does not statistically differ from either VF-1 or VF-3.
Using data from the race categorization task, a temporospatial PCA revealed three components that matched VF-1, VF-2 and VF-3 from the priming task in timing and location: VF-1 peaked at 113 ms post-stimulus and was maximal at Pz; VF-2 peaked at 143 ms and was maximal at FCz; and VF-3 peaked at 172 ms and was maximal at Cz. Because of these similarities and the fact that they were elicited by the same face stimuli, these components were judged to represent similar processes across tasks. Quantification and analyses mirrored those for the priming task data, and a similar pattern was found: the effect of race increased as processing continued, while the effect of fixation decreased (Table 2, Figure 4). Examination of the 95% confidence intervals revealed the same pattern of results as in the priming task.
Figure 2. Displays grand average waveforms locked to face onset for each task, averaged across C3, Cz, C4, CP3, CPz, CP4 and Pz. Positive amplitude is plotted upward. P2 mean amplitude was calculated from 130 to 160 ms following face presentation (shaded area).
Exploratory ERP results: effects across trials
Mean P2 amplitudes (130-190 ms post-face onset) from individual trials over the course of each task as a function of the race and fixation manipulations are plotted in Figure 5. Across both tasks, the data suggest an overall sensitization of the P2 (increasing across trials) and differing effects of race and fixation. Specifically, whereas the effect of race was evident from the earliest trials in both tasks, an effect of fixation emerged only as each task progressed such that P2s became larger in the eyes-fixation condition than the forehead-fixation condition. Moreover, race and fixation appeared to interact as the task progressed. These trends were confirmed by MLMs conducted separately with data from each task (Trial was added as a continuous predictor and rescaled to range from 0 to 10 in each task), the results of which are given in Table 3. The presence of significant Race × Fixation × Trial interactions in both models confirms that the slopes related to each effect differed, i.e. that the increases in P2 amplitude over the course of the tasks were asymmetrical across the four conditions. To probe this interaction, slope estimates and 95% confidence intervals were calculated in accordance with Bauer and Curran (2005) (Table 4). All estimates are significantly different from zero, demonstrating positive change in P2 amplitude over the course of both tasks in all classes of stimuli. However, in both tasks, P2 amplitude in the Black-eyes condition increased more than in the other three conditions, as indicated by lack of overlap in the confidence intervals.
Discussion
The purpose of this study was to directly test elements of the DI model of person construal, using ERP data acquired while participants viewed faces of different races. The primary innovation of the DI model is its characterization of person construal as an iterative process in which bottom-up perceptual information is integrated with (top-down) stored representations related to social categories. A key assumption of this model is that bottom-up processes have a larger initial effect, while effects of top-down processes emerge later in processing. Here, this basic premise was tested using a fixation manipulation to control the visual information to which perceivers initially attended upon seeing faces of White and Black men.
We used multiple methods to investigate the ERP data from this study. A traditional mean amplitude approach to the P2 showed that, as in previous studies (Ito and Urland, 2003;Dickter and Bartholow, 2007), Black (outgroup) faces elicited larger P2s than White (ingroup) faces, regardless of fixation location. Additionally, fixating on the eyes elicited larger P2s than fixating on the forehead. Although not predicted, this effect is consistent with evidence that faces with direct gazes are arousing and capture attention (Gale et al., 1978;Senju and Hasegawa, 2005). Race and fixation location did not interact in this analysis, however.
Next, we examined the incorporation of bottom-up and top-down factors early in processing by testing the effects of race and fixation on a sequence of early components identified by a temporospatial PCA. In accordance with DI model predictions, we expected the effect of fixation (mainly representing differences in bottom-up processes) to be large upon initial perception of a face but to decrease over subsequent processing steps, whereas the effect of race (representing differences in a combination of top-down and bottom-up processes) was expected to be small initially but to increase over processing iterations.
Consistent with these predictions, the effect of fixation was evident in the earliest component (80-140 ms following face onset) and decreased over the next 100 ms. In contrast, the effect of race on the first two components was small but increased dramatically in the third component. The very early emergence of VF-1 and its largely posterior scalp distribution suggest this component reflects activity in visual cortical circuits that is responsive to low-level stimulus features, such as the more complex spatial frequencies around the eyes relative to the forehead (Keil, 2009), and that is responsible for amplification of sensory information flowing to other parts of the visual attention pathway (Hillyard and Anllo-Vento, 1998). The temporal and spatial overlap between VF-3 and the P2 evident in the grand averages suggests that VF-3 directly contributed to the P2. This possibility is bolstered by the fact that the P2 is known to be highly sensitive to distinguishing social categories (Ito and Urland, 2003;Dickter and Bartholow, 2007), and that social category information (in this case, race) had a pronounced effect on VF-3 but a smaller effect on the preceding components.
More importantly, the increasing effect of race across the PCA-derived factors suggests that learned racial categories accessed from higher-level memory percepts contribute to the active representation of the social category in a top-down manner over time (Collins and Olson, 2014). Of course, it is important to acknowledge that stimulus features eliciting bottom-up and top-down processing were somewhat confounded in the current study. Given that race-related differences reflect both low-level, stimulus-driven and higher-level learned features, categorization by race represents a combination of bottom-up and top-down processes (Levin and Banaji, 2006). Indeed, the significant effect of race on the amplitude of VF-1 is likely due to low-level visual differences between faces of different races, despite efforts to equate stimuli on those dimensions. However, the increasing effect of race suggests learned racial categories accessed from higher-level memory percepts contribute to the active representation in an integrative way over time (Collins and Olson, 2014). Future research could extend this finding by using faces that do not differ in their low-level stimulus properties, as in a minimal groups design (Ratner and Amodio, 2013), to avoid bottom-up and top-down confounds. Another concern with the current design is that task order and the task relevance of race categorization were confounded. Thus, inferences concerning the independence of the observed patterns from perceivers' goals should be tempered. Still, the fact that such similar patterns emerged in both tasks is encouraging.
The exploratory analyses examining change in P2 amplitude across trials suggested that P2 amplitude increased across both tasks, and that this increase was most pronounced for Black-eyes trials. This pattern may be the result of participants' increasing ability to extract category-related information as the task continues, especially when focusing on the eyes (a highly practiced fixation location; Kawakami et al., 2014) and especially for outgroup faces, which elicit increased attention (Dickter and Bartholow, 2007). However, because inferences about the meaning of the face-elicited P2 are based on methods that assume the signal associated with the P2 is constant, analyzing the P2 with this new approach impacts the (reverse) inferences we make about its psychological significance (e.g. Poldrack, 2006; but see Hutzler, 2014), and so we view this conclusion with caution. It is also important to emphasize that examination of change in P2 amplitude across trials is not relevant to testing DI model predictions, which focus on changes that should occur within trials (i.e. within construal events) as a function of various manipulations.
Interestingly, the P2's sensitivity to race was not reflected in priming task behavioral responses. The typical pattern of response facilitation for negative words following Black faces (Fazio et al., 1995) was not seen in either fixation condition. Instead, participants were quicker to respond to positive than negative words in both race conditions, and this effect was consistent over the course of the task (see Supplementary Material). The phenomenon of evaluative priming is sensitive to a number of parameters (Spruyt et al., 2011). Thus, it could be that the SOA used here was too long to produce the behavioral priming phenomenon (see Supplementary Material for more extensive discussion). However, differentiation by race in the P2 and early PCA components provides evidence that categorization occurred, despite lack of behavioral evidence that this categorization had downstream consequences related to prejudice. These data are consistent with the idea that behavioral priming phenomena rely on response output processes, such as response conflict (Klinger et al., 2000;Bartholow et al., 2009), which are more sensitive to SOA and other task parameters (Spruyt et al., 2007) than the initial categorization of faces.
Note. Numbers in brackets are the 95% confidence interval around the estimate. Trial has been rescaled to range between 0 and 10 for both tasks.
In conclusion, the current study provides a novel demonstration using PCA that bottom-up and top-down processes integrate information in an iterative way to arrive at a stable person construal. Remarkably similar neural responses to faces were observed regardless of the relevance of social categorization for perceivers' task goals, suggesting automaticity of relevant construal processes. The temporal sensitivity of EEG and the ability of PCA to separate closely occurring but unique sources of variation in brain activity allow relatively direct access to this integration, which occurs before any behavioral response can be made, and speak to the power of using covert measures of brain activity to investigate early and quickly unfolding processes of person construal.
Supplementary data
Supplementary data are available at SCAN online.
Funding
This research was supported in part by a Life Sciences Graduate Fellowship and a Graduate Research Award from the University of Missouri. Preparation of this manuscript was supported by grant R01 AA020970 from the National Institute on Alcohol Abuse and Alcoholism, and by grant 1460719 from the National Science Foundation. | 2018-04-03T03:41:25.667Z | 2017-04-11T00:00:00.000 | {
"year": 2017,
"sha1": "531798e5145f886aedaea056c495364bd93d53ff",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/scan/article-pdf/12/7/1097/27105116/nsx048.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "531798e5145f886aedaea056c495364bd93d53ff",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
253327866 | pes2o/s2orc | v3-fos-license | Calibrating networks of low-cost air quality sensors
Ambient fine particulate matter (PM 2.5 ) pollution is a major health risk. Networks of low-cost sensors (LCS) are increasingly being used to understand local-scale air pollution variation. However, measurements from LCS have uncertainties that can act as a potential barrier to effective decision making. LCS
ters in complex urban environments (Brantley et al., 2019;deSouza et al., 2020a). Therefore, dense monitoring networks are often needed to capture relevant spatial variations. Due to their costliness, Environmental Protection Agency (EPA) air quality reference monitoring networks are sparsely positioned across the US (Apte et al., 2017;Anderson and Peng, 2012).
Most low-cost PM sensors rely on optical measurement techniques. Optical instruments face inherent challenges that introduce potential differences in mass estimates compared to reference methods (Barkjohn et al., 2021;Crilley et al., 2018;Giordano et al., 2021;Malings et al., 2020): 1. Optical methods do not directly measure mass concentrations; rather, they estimate mass based on calibrations that convert light scattering data to particle number and mass. LCS come with factory-supplied calibrations but, in practice, must be re-calibrated in the field to ensure accuracy, due to variations in ambient particle characteristics and instrument drift.
2. High relative humidity (RH) can produce hygroscopic particle growth, leading to dry-mass overestimation, unless particle hydration can accurately be taken into account or the particles are desiccated by the instrument.
3. LCS are not able to detect particles with diameters below a specific size, which is determined by the wavelength of laser light within each device and is generally in the vicinity of 0.3 µm, whereas the peak in pollution particle number size distribution is typically smaller than 0.3 µm.
4. The physical and chemical parameters describing the aerosol (particle size distribution, shape, indices of refraction, hygroscopicity, volatility, etc.), which might vary significantly across different microenvironments with diverse sources, impact light scattering; this in turn affects the aerosol mass concentrations reported by these instruments.
The need for field calibration to correct LCS measurements is particularly important. This is typically done by colocating a small number of LCS with one or a few reference monitors at a representative monitoring location or locations. The co-location could be carried out for a brief period before and/or after the actual study or may continue at a small number of sites for the duration of the study. In either case, the co-location provides data from which a calibration model that relates the raw output of the LCS as closely as possible to the desired quantity as measured by the reference monitor is developed. Thereafter, the calibration model is transferred to other LCS in the network based upon the presumption that ongoing sampling conditions are within the same range as those at the collocation site(s) during the calibration period.
Calibration models typically correct for (1) systematic error in LCS by adjusting for bias using reference monitor measurements, and (2) the dependence of LCS measurements on environmental conditions affecting the ambient particle properties, such as relative humidity (RH), temperature (T ), and/or dew-point (D). Correcting for RH, T , and D is carried out through either (a) a physics-based approach that accounts for aerosol hygroscopic growth given particle composition using κ-Köhler's theory, or (b) empirical models, such as regression and machine learning techniques. In this paper, we focus on the latter, as it is currently the most widely used (Barkjohn et al., 2021). Previous work has also shown that the two approaches yield comparable improvements in the case of PM 2.5 LCS (Malings et al., 2020).
Prior studies have used multivariate regressions, piecewise linear regressions, or higher order polynomial models to account for RH, T , and D in these calibration models (Holstius et al., 2014;Magi et al., 2020;Zusman et al., 2020). More recently, machine learning techniques such as random forests, neural networks, and gradient-boosted decision trees have been used (Considine et al., 2021;Liang, 2021;Zimmerman et al., 2018). Researchers have also started including additional covariates in their models besides that which is directly measured by the LCS, such as time of day, seasonality, wind direction, and site type, which have been shown to yield significantly improved results (Considine et al., 2021).
Past research has shown that there are several important decisions in addition to the choice of calibration model that need to be made during calibration and that can impact the results (Bean, 2021;Giordano et al., 2021;Hagler et al., 2018). These include (a) the kind of reference air quality monitor used, (b) the time interval (e.g., hour or day) over which to average measurements used when developing the calibration model, (c) how cross-validation (e.g., leaving one site out or 10-fold cross-validation) is carried out, and (d) how long the co-location experiment takes place.
Calibration models are typically evaluated based on how well the corrected data agree with measurements from reference monitors at the corresponding co-location site. A commonly used metric is the Pearson correlation coefficient, R, which quantifies the strength of the association. However, it is a misleading indicator of sensor performance when measurements are observed close to the limit of detection of the instrument. Therefore, root mean square error (RMSE) is often included in practice. Unfortunately, neither of these metrics captures how well the calibration method developed at the co-located sites transfers to the rest of the network in both time and space.
If the conditions at the co-location sites (meteorological conditions, pollution source mix) for the period of colocation are the same as for the rest of the network during the total operational period, the calibration model developed at the co-location sites can be assumed to be transferable to the rest of the network. In order to ensure that the sampling conditions at the co-location site are representative of sampling conditions across the network, most researchers tend to deploy monitors in the same general sampling area as the network (Zusman et al., 2020). However, it is difficult to definitively test if the co-location site during the period of colocation is representative of conditions at all monitors in the network; ambient PM concentrations can vary on scales as small as a few meters. Furthermore, LCS are often deployed specifically in areas where the air pollution conditions are poorly understood, meaning that representativeness cannot be assessed in advance.
In order to evaluate whether calibration models are transferable in time, we test if models generated using typical short-term co-locations at specific co-location sites perform well during other time periods at all co-location sites. Where multiple co-location sites exist, one way to evaluate how transferable calibration models are in space is to leave out one or more co-location sites and to test if the calibration model is transferable to the left-out sites. This method was used in recent work evaluating the feasibility of developing a US-wide calibration model for the PurpleAir low-cost sensor network (Barkjohn et al., 2021;Nilson et al., 2022).
Although these approaches are useful, co-location sites are sparse relative to other sites in the network. Even in the PurpleAir network (which is one of the densest low-cost networks in the world), there were only 39 co-location sites in 16 US states, a small fraction of the several thousand PurpleAir sites overall (Barkjohn et al., 2021). It is thus important to develop metrics to test how sensitive the spatial and temporal trends of pollution derived from the entire network are to the calibration model applied. Finally, a key use case of LCS networks is to identify hotspots. It is important to also evaluate how sensitive the hotspot identified in an LCS network is to the calibration model applied.
Examining the reliability of calibration models is timely, because more researchers are opting to use machine learning models. Although in most cases, such models have yielded better results than traditional linear regressions, it is important to examine if these models are overfitted to conditions at the co-location sites, even after appropriate cross-validation, and how transferable they are to the rest of the network. Indeed, because of concerns of overfitting, some researchers have explicitly eschewed employing machine learning calibration models altogether (Nilson et al., 2022). It is important to test under what circumstances such concerns might be warranted. This paper uses a dense low-cost PM 2.5 monitoring network deployed in Denver, the "Love My Air" network deployed primarily outside the city's public schools, to evaluate the transferability of different calibration models in space and time across the network. To do so, new metrics are proposed to quantify the Love My Air network's spatial and temporal trend uncertainty due to the calibration model applied. Finally, for key LCS network use cases, such as hotspot detection, tracking high pollution events, and evaluating pollution trends at a high temporal resolution, the sensitivity of the results to the choice of calibration model is evaluated. The methodologies and metrics proposed in this paper can be applied to other low-cost sensor networks, with the understanding that the actual results will vary with study region.
2 Data and methods
Data sources
Between 1 January and 30 September 2021, Denver's Love My Air sensor network collected minute-level data from 24 low-cost sensors deployed across the city outside of public schools and at 5 federal equivalent method (FEM) reference monitor locations (Fig. 1). The Love My Air sensors are Canary-S models, equipped with a Plantower 5003, made by Lunar Outpost Inc. The Canary-S sensors detect PM 2.5 , T , and RH and upload minute-resolution measurements to an online platform via cellular data network.
We found that RH and T reported by the Love My Air sensors were well correlated with those reported by the reference monitoring stations. We used the Love My Air LCS T and RH measurements in our calibration models, as they most closely represent the conditions experienced by the sensors.
Data cleaning protocol for measurements from the Love My Air network
A summary of the data cleaning and data preparation steps carried out on the Love My Air data from the entire network is listed below:
Figure 1. Locations of all 24 Love My Air sensors. Sensors displayed with an orange triangle indicate that they were co-located with a reference monitor. The labels of the co-located sensors include the name of the reference monitor with which they were co-located after a hyphen.
5. From inspection, one of the monitors, CS13, worked intermittently in January and February before resuming continuous measurement in March (Fig. S1 in the Supplement). When CS13 worked intermittently, large spikes in the measurements were observed, likely due to power surges. We thus retained measurements taken after 1 March 2021 for this monitor. The total number of hourly measurements was thus reduced to 146 583.
Love My Air sensors (indicated by Sensor ID) were co-located with FEM reference monitors, from which we obtained high-quality hourly PM 2.5 measurements, at the following locations (Table 1).
2.1.2 Data preparation steps for preparing a training dataset used to develop the various calibration models
A summary of the data preparation steps for preparing a training dataset used to develop the various calibration models is described below:
1. We joined hourly averages from each of the seven co-located Love My Air monitors with the corresponding FEM monitor. We had a total of 35 593 co-located hourly measurements for which we had data for both the Love My Air sensor and the corresponding reference monitor. Figure S2 displays time-series plots of PM 2.5 from all co-located Love My Air sensors. Figure S3 displays time-series plots of PM 2.5 from the corresponding reference monitors.
2. The three Love My Air sensors co-located at the I25 Globeville site (CS2, CS3, CS4) agreed well with each other (Pearson correlation coefficient = 0.98) (Figs. S4 and S5). To ensure that our co-located dataset was well balanced across sites, we only retained measurements from CS2 at the I25 Globeville site. We were left with a total of 27 338 co-located hourly measurements that we used to develop a calibration model. Figure S6 displays the retained co-located measurements.
Reference monitors at La Casa, CAMP, I25 Globeville, and I25 Denver also reported minute-level PM 2.5 concentrations between 23 April, 11:16, and 30 September, 22:49 local time. We also joined minute-level Love My Air PM 2.5 concentrations with minute-level reference data at these sites. We had a total of 1 062 141 co-located minute-level measurements during this time period. As with the hourly averaged data, we only retained data from one of the Love My Air sensors at the I25 Globeville site and were thus left with 815 608 minute-level measurements from one LCS at each of the four co-location sites. Table S1 in the Supplement has information on the minute-level co-located measurements. The data at the minute level display more variation and peaks in PM 2.5 concentrations than the hourly averaged measurements (Fig. S7), likely due to the impact of passing sources. It is also important to mention that minute-level reference data may have some additional uncertainties introduced due to the finer time resolution. We will use the minute-level data in the Supplement analyses only. Thus, unless explicitly referenced, we will be reporting results from hourly averaged measurements.
Deriving additional covariates
We derived dew point (D) from T and RH reported by the Love My Air sensors using the weathermetrics package in the programming language R (Anderson and Peng, 2012), as D has been shown to be a good proxy of particle hygroscopic growth in previous research (Barkjohn et al., 2021;Clements et al., 2017;Malings et al., 2020). Some previous work has also used a nonlinear correction for RH in the form of RH 2 /(1 − RH), which we also calculated for this study (Barkjohn et al., 2021).
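For illustration, a minimal base-R sketch of these derived covariates is given below. The paper computes dew point with the weathermetrics package, so the Magnus-type approximation used here, the column names (temp in °C, rh in %), and the treatment of RH as a fraction in the nonlinear term are all assumptions for the sketch only.

```r
# Sketch: derive dew point (deg C) and the nonlinear RH term from sensor T and RH.
dew_point_c <- function(temp_c, rh_pct) {
  a <- 17.625; b <- 243.04                      # Magnus coefficients (assumed)
  gamma <- log(rh_pct / 100) + a * temp_c / (b + temp_c)
  b * gamma / (a - gamma)
}

add_met_covariates <- function(df) {
  rh_frac <- df$rh / 100                        # RH treated as a fraction here
  df$dewpoint  <- dew_point_c(df$temp, df$rh)
  df$rh_nonlin <- rh_frac^2 / (1 - rh_frac)     # RH^2 / (1 - RH) correction term
  df
}
```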
We extracted hour, weekend, and month variables from the Canary-S sensors and converted hour and month into cyclic values to capture periodicities in the data by taking the cosine and sine of hour × 2π/24 and month × 2π/12, which we designate as cos_time, sin_time, cos_month, and sin_month, respectively. Sinusoidal corrections for seasonality have been shown to improve the accuracy of PM 2.5 measurements in machine learning models (Considine et al., 2021).
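A short sketch of the cyclic encoding described above, assuming integer hour (0-23) and month (1-12) columns in the data frame:

```r
# Sketch: cyclic (sine/cosine) encodings of hour of day and month of year.
add_cyclic_time <- function(df) {
  df$cos_time  <- cos(df$hour  * 2 * pi / 24)
  df$sin_time  <- sin(df$hour  * 2 * pi / 24)
  df$cos_month <- cos(df$month * 2 * pi / 12)
  df$sin_month <- sin(df$month * 2 * pi / 12)
  df
}
```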
Defining the calibration models used
The goal of the calibration model is to predict, as accurately as possible, the "true" PM 2.5 concentrations given the concentrations reported by the Love My Air sensors. At the colocated sites, the FEM PM 2.5 measurements, which we take to be the "true" PM 2.5 concentrations, are the dependent variable in the models.
We evaluated 21 increasingly complex models that included T , RH, and D as well as metrics that captured the time-varying patterns of PM 2.5 to correct the Love My Air PM 2.5 measurements (Tables 2 and 3).
Sixteen models were multivariate regression models that were used in a recent paper (Barkjohn et al., 2021) to calibrate another network of low-cost sensors: the PurpleAir, which relies on the same PM 2.5 sensor (Plantower) as the Canary-S sensors in the current study. As T , RH, and D are not independent (Fig. S8), the 16 linear regression models include adding the meteorological conditions considered as interaction terms instead of additive terms. The remaining five calibration models relied on machine learning techniques.
Machine learning models can capture more complex nonlinear effects (for instance, unknown relationships between additional spatial and temporal variables). We opted to use the following machine learning techniques: random forest (RF), neural network (NN), gradient boosting (GB), Super-Learner (SL); these have been widely used in calibrating LCS. A detailed description of each technique can be found in Sect. S1 in the Supplement. All machine learning models were run using the caret package in R (Kuhn et al., 2020).
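As a rough illustration of the two families of model forms (not the exact Models 1-21 in Tables 2 and 3), the sketch below fits a multivariate regression with an RH x T interaction and a random forest via caret; the training data frame and column names are assumptions.

```r
# Sketch: fit one linear and one machine learning calibration model on the
# co-located training data (reference PM2.5 as the dependent variable).
library(caret)

fit_lm <- lm(ref_pm25 ~ lcs_pm25 + rh * temp, data = colocated)

set.seed(1)
fit_rf <- train(
  ref_pm25 ~ lcs_pm25 + rh + temp + dewpoint + cos_time + sin_time,
  data      = colocated,
  method    = "rf",                               # random forest backend
  trControl = trainControl(method = "cv", number = 10)
)
```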
We used both leave-one-site-out (LOSO) cross-validation (CV) (Table 2) and leave-out-by-date (LOBD) CV, where we left out a three-week period of data at a time at all sites (Table 3), to avoid overfitting in the machine learning models. For more details on the cross-validation methods used to avoid overfitting in the machine learning models, refer to Sect. S2.
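A bare-bones sketch of the LOSO evaluation loop for one model form (the column names `site`, `ref_pm25`, `lcs_pm25`, `rh`, and `temp` are assumptions):

```r
# Sketch: leave-one-site-out RMSE for a simple calibration model. Fit on all
# co-location sites but one, predict at the held-out site, record the RMSE.
rmse <- function(obs, pred) sqrt(mean((obs - pred)^2, na.rm = TRUE))

loso_rmse <- sapply(unique(colocated$site), function(s) {
  train_df <- subset(colocated, site != s)
  test_df  <- subset(colocated, site == s)
  fit <- lm(ref_pm25 ~ lcs_pm25 + rh * temp, data = train_df)
  rmse(test_df$ref_pm25, predict(fit, newdata = test_df))
})
```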
Corrections generated using different co-location time periods (long-term, on-the-fly, short-term)
As described earlier, co-location studies in the LCS literature have been conducted over different time periods. Some studies co-locate one or more LCS for brief periods of time before or after an experiment, whereas others co-locate a few LCS for the entire duration of the experiment. These studies apply calibration models generated using the co-located data to measurements made by the entire network over the entire duration of the experiment. We attempt to replicate these study designs in our experiment to evaluate the transferability of calibration models across time by generating four different corrections:
C1 Entire dataset correction. The 21 calibration models were developed using data at all co-location sites for the entire period of co-location.
C2 On-the-fly correction. The 21 calibration models to correct a measurement during a given week were developed using data across all co-located sites for the same week of the measurement.
C3 Two-week winter correction. The 21 calibration models were developed using co-located data collected for a brief period (two weeks) at the beginning of the study (1-14 January 2021). They were then applied to measurements from the network during the rest of the period of operation.
C4 Two-week winter + two-week spring. The 21 calibration models were developed using co-located data collected for two two-week periods in different seasons (1-14 January 2021 and 1-14 May 2021). They were then applied to measurements from the network during the rest of the period of operation.
Although models developed using co-located data over the entire time period (C1) tend to be more accurate over the entire spatiotemporal dataset, it is inefficient to re-run large models frequently (incorporating new data). On-the-fly corrections (such as C2) can help characterize short-term variation in air pollution and sensor characteristics. The duration of calibration is a key question that remains unanswered (Liang, 2021). We opted to test corrections C3 and C4, as many low-cost sensor networks rely on developing calibration models based on relatively short co-location periods (deSouza et al., 2020b;West et al., 2020;Singh et al., 2021). Each of the 21 calibration models considered was tested under four potential correction schemes (C1, C2, C3, and C4).
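As a rough illustration, the two short-term training windows (C3 and C4) could be assembled as below; the date boundaries follow the two-week periods described above, and the data frame and its POSIXct `datetime` column are assumptions.

```r
# Sketch: subset the co-located training data to the C3 and C4 windows.
jan_start <- as.POSIXct("2021-01-01"); jan_end <- as.POSIXct("2021-01-15")
may_start <- as.POSIXct("2021-05-01"); may_end <- as.POSIXct("2021-05-15")

c3_train <- subset(colocated, datetime >= jan_start & datetime < jan_end)
c4_train <- subset(colocated, (datetime >= jan_start & datetime < jan_end) |
                              (datetime >= may_start & datetime < may_end))
```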
Evaluating the calibration models developed under the four different correction schemes
We first qualitatively evaluate the transferability of the calibration models from the co-location sites to the rest of the network by comparing the distribution of T and RH at the co-location sites during the time periods used to construct the calibration models with that experienced over the entire course of network operation (Fig. 2). We then evaluate how well different calibration models perform when using the traditional methods of model evaluation (Tables 2, 3, S2). We attempt to quantify the degree of transferability of the calibration models in time by asking how well calibration models developed during short-term co-locations (corrections C3 and C4) perform when transferred to long-term network measurements. To answer this question, we evaluated calibration models using corrections C3 and C4 only for the time period over which the calibration models were developed, which was 1-14 January 2021 for C3, and 1-14 January 2021 and 1-14 May 2021 for C4 (Table S2). We compared the performance of the C3 and C4 corrections during this time period with that obtained from applying these models over the entire time period of the network (Table 2).
Table 2. Performance of the calibration models as captured using root mean square error (RMSE) and Pearson correlation (R) under the four correction schemes: C1, developed using data from the entire period of network operation; C2, the on-the-fly correction developed using measurements made in the same week as the measurement; C3, developed using measurements from the first two weeks of January; and C4, developed using data from the first two weeks of January and the first two weeks of May. LOSO CV was used to prevent overfitting in the machine learning models. All corrected values were evaluated over the entire time period (1 January-30 September 2021).
Table 3. Performance of the calibration models using the C1 correction as captured using root mean square error (RMSE) and Pearson correlation (R). LOBD CV was used to prevent overfitting in the machine learning models.
We next ask how well calibration models developed at a small number of co-location sites transfer in space to other sites using the methodology detailed in the next subsection.
Evaluating transferability of calibration models over space
To evaluate how transferable the calibration technique developed at the co-located sites was to the rest of the network, we left out each of the five co-located sites in turn and, using data from the remaining sites, ran the models proposed in Tables 2 and 3. We then applied the models generated to the left-out site. We report the distribution of RMSE from each calibration model considered at the left-out sites using box plots (Fig. 3). For correction C1, we also left out a three-week period of data at a time and generated the calibration models based on the data from the remaining time periods at each site. For the machine learning models (Models 17-21), we used CV = LOBD. We plotted the distribution of RMSE from each model considered for the left-out three-week period (Fig. 3).
We statistically compare the errors in predictions for each test dataset with errors in predictions from using all sites in our main analysis. Such an approach is useful for understanding how well the proposed correction can transfer to other areas in the Denver region. To compare statistical differences between errors, we used t tests if the distribution of errors were normally distributed (as determined by a Shapiro-Wilk test); if not, we used Wilcoxon signed rank tests, using a significance value of 0.05.
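One reasonable implementation of this comparison step is sketched below (base R); testing normality of the paired error differences, rather than of each error vector separately, is an interpretive assumption.

```r
# Sketch: compare paired prediction-error vectors from two corrections.
# Use Shapiro-Wilk on the paired differences to pick between a paired t test
# and a Wilcoxon signed rank test (significance level 0.05, as in the text).
compare_errors <- function(err_a, err_b, alpha = 0.05) {
  d <- err_a - err_b
  if (shapiro.test(d)$p.value > alpha) {
    t.test(err_a, err_b, paired = TRUE)
  } else {
    wilcox.test(err_a, err_b, paired = TRUE)
  }
}
```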
We have only five co-location sites in the network. Although evaluating the transferability among these sites is useful, as we know the true PM 2.5 concentrations at these sites, we also evaluated the transferability of these models in the larger network by predicting PM 2.5 concentrations using the models proposed in Tables 2 and 3 at each of the 24 sites in the Love My Air network. For each site, we display time series plots of corrected PM 2.5 measurements in order to visually compare the ensemble of corrected values at each site (Fig. 4).
We next propose different metrics to quantify the uncertainty in spatial and temporal trends in PM 2.5 reported by the LCS network as introduced by the choice of calibration model applied in the subsection below.
Evaluating sensitivity of the spatial and temporal trends of the low-cost sensor network to the method of calibration
We evaluate the spatial and temporal trends in the PM 2.5 concentrations corrected using the 89 different calibration models using the following metrics.
1. The spatial RMSD (Fig. 5) between corrections h and d,
$\mathrm{RMSD}^{\mathrm{spatial}}_{hd} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\mathrm{Conc}_{hi}-\mathrm{Conc}_{di}\right)^{2}}$,
where Conc hi and Conc di are 1 January-30 September 2021 averaged PM 2.5 concentrations estimated from corrections h and d for site i, and N is the total number of sites.
2. The temporal RMSD (Fig. 6) between corrections h and d, computed analogously from the hourly PM 2.5 time series at each site rather than from the period-averaged concentrations.
We characterized the uncertainty in the "corrected" PM 2.5 estimates at each site across the different models using two metrics: a normalized range (NR) (Fig. 7a) and an uncertainty calculated from the 95 % confidence interval (CI), assuming a t statistical distribution (Fig. 7b).
3. NR for a given site represents the spread of PM 2.5 across the different correction approaches,
$\mathrm{NR} = \frac{1}{M}\sum_{t=1}^{M}\frac{\max_{k}\left(C_{kt}\right)-\min_{k}\left(C_{kt}\right)}{\overline{C}_{t}}$,
where C kt is the PM 2.5 concentration at hour t from the kth model of the ensemble of K (which, in this case, is 89) correction approaches, the ensemble mean across the K different products at hour t is denoted $\overline{C}_{t}$, and M is the total number of hours in our sample for which we have PM 2.5 data for the site under consideration.
For our sample (K = 89), we assume that the variations in PM 2.5 across multiple models follow the Student t distribution, with the mean being the ensemble average. The confidence interval (CI) for the ensemble mean at a given hour t is $\mathrm{CI}_{t} = \overline{C}_{t} \pm t^{*}\,\mathrm{SD}_{t}/\sqrt{K}$, where t* is the upper (1 − CI)/2 critical value for the t distribution with K − 1 degrees of freedom (for K = 89, t* for the 95 % double-tailed confidence interval is 1.99) and SD t is the sample standard deviation of the K corrected concentrations at hour t.
4. We define an overall estimate of uncertainty for each site as the width of this 95 % CI relative to the ensemble mean, averaged over all M hours in the sample.
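A compact sketch of these spread metrics is given below. The normalization of NR by the ensemble mean and the aggregation of the CI-based uncertainty (half-width of the CI relative to the ensemble mean, averaged over hours) follow the definitions above but should be read as assumptions rather than the exact published formulas; variable names are illustrative.

```r
# Sketch of the ensemble-spread metrics across K calibration models.

# Spatial RMSD between two corrections h and d, from per-site mean PM2.5 over
# the study period (conc_h, conc_d: vectors of site averages, same site order).
spatial_rmsd <- function(conc_h, conc_d) sqrt(mean((conc_h - conc_d)^2))

# Per-site normalized range (NR) and CI-based uncertainty.
# conc_mat: hours x corrections (K = 89) matrix of calibrated PM2.5 at one site.
site_spread_metrics <- function(conc_mat, conf = 0.95) {
  ens_mean <- rowMeans(conc_mat)
  rng      <- apply(conc_mat, 1, max) - apply(conc_mat, 1, min)
  nr       <- mean(rng / ens_mean, na.rm = TRUE)

  K       <- ncol(conc_mat)
  t_star  <- qt(1 - (1 - conf) / 2, df = K - 1)  # ~1.99 for K = 89 at 95 %
  sd_t    <- apply(conc_mat, 1, sd)
  half_ci <- t_star * sd_t / sqrt(K)
  uncertainty <- mean(half_ci / ens_mean, na.rm = TRUE)

  c(NR = nr, uncertainty = uncertainty)
}
```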
Evaluating the sensitivity of hotspot detection across the network of sensors to the calibration method
One of the key use cases of low-cost sensors is hotspot detection. We report the labels of sites that are the most polluted using calibrated measurements from the 89 different models using hourly data. We repeat this process for daily, weekly, and monthly averaged calibrated measurements. We ignore missing measurements from the network when calculating time-averaged values for the different time periods considered. We report the mean number of sensors that are ranked "most polluted" across the different correction functions for the different averaging periods (Fig. 8). We do this to identify if the choice of the calibration model impacts the hotspot identified by the network (i.e., depending on the calibration model, different sites show up as the most polluted).
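A minimal sketch of this hotspot-sensitivity check, assuming the calibrated data are arranged as an hours x sites x corrections array (an illustrative data layout, not the authors' code):

```r
# Sketch: for each hour, identify which site each of the K corrections ranks
# as most polluted, then count how many distinct sites receive that rank.
hotspots_per_hour <- function(conc_array) {
  # conc_array: hours x sites x corrections
  apply(conc_array, 1, function(hr_mat) {       # hr_mat: sites x corrections
    length(unique(apply(hr_mat, 2, which.max)))
  })
}

# Average number of distinct "most polluted" sites across hours.
mean_hotspot_count <- function(conc_array) mean(hotspots_per_hour(conc_array))
```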
Supplementary analysis: evaluating transferability of calibration models developed in different pollution regimes
We evaluated model performance for true and reference PM 2.5 concentrations > 30 µg m −3 and ≤ 30 µg m −3 , as Nilson et al. (2022) have shown that calibration models can have different performances in different pollution regimes. We chose to use 30 µg m −3 as the threshold, as these concentrations account for the greatest differences in health and air pollution avoidance behavior impacts (Nilson et al., 2022). Lower concentrations (PM 2.5 ≤ 30 µg m −3 ) represent most measurements observed in our network; better performance at these levels will ensure better day-to-day functionality of the correction. High PM 2.5 (> 30 µg m −3 ) concentrations in Denver typically occur during fires. Better performance of the calibration models in this regime will ensure that the LCS network can accurately capture pollution concentrations under smoky conditions. In order to compare errors observed in the two different concentration ranges, in addition to reporting R and RMSE of the calibration approaches, we also report the normalized RMSE (normalized by the mean of the true concentrations) (Tables S3 and S4).
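A one-line sketch of the normalized RMSE used for the regime comparison (obs denotes the reference concentrations):

```r
# Sketch: RMSE normalized by the mean of the reference ("true") concentrations.
normalized_rmse <- function(obs, pred) {
  sqrt(mean((obs - pred)^2, na.rm = TRUE)) / mean(obs, na.rm = TRUE)
}
```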
Supplementary analysis: evaluating transferability of calibration models developed across different time aggregation intervals
One of the key advantages of LCS is that they report high frequency (timescales shorter than an hour) measurements of pollution. As reference monitoring stations provide hourly or daily average pollution values, most often, the calibration model is developed using hourly averaged data and is then applied to the unaggregated, high-frequency LCS measurements. We applied the calibration models described in Tables 2 and 3 developed using hourly averaged co-located measurements on minute-level measurements from the colocated LCS described in Table S1. We evaluated the performance of the corrected high-frequency measurements against the "true" measurements from the corresponding reference monitor using the metrics R and RMSE (Tables S5 and S6).
Results
We first report how representative meteorological conditions at the co-located sites were of the overall network. Temperatures at the co-located sites across the entire period of the experiment (from 1 January to 30 September 2021) were similar to those at the rest of the Love My Air network (Fig. 2a). The sensor CS19 is the only one that recorded lower temperatures than those at any of the other sites, likely due to it being in the shade. Relative humidity at the co-located sites (three of the four co-located sites have a median RH close to 50 % or higher) is higher than at the other sites in the network (7 of the 12 other sites have a median RH < 50 %) (Fig. 2b). The similarity in meteorological conditions at the co-located sites with those experienced by the rest of the network suggests that models developed using long-term data (C1) are likely to be transferable to the overall network.
We also compared meteorological conditions during the development of corrections C3 (1-14 January 2021) and C4 (1-14 January 2021 and 1-14 May 2021) to those measured during the duration of network operation (C3: Figs. S10 and S11; C4: Figs. S12 and S13). Unsurprisingly, temperatures at the co-located sites during the development of C4 were more representative of the network than C3, although they were, on average, lower (median temperatures ∼ 10-17 °C) than the average temperatures experienced by the network (median temperatures ∼ 5-23 °C). RH values at co-located sites during C3 and C4 tend to be higher than conditions experienced by Love My Air sensors CS8, CS10, CS15, CS16, CS17, CS18, and CS20, likely due to the different microenvironments experienced at each site. The differences in meteorological conditions at the co-located sites for the time period of calibration model development compared with those experienced by the rest of the network suggest that models developed using short-term data (C3, C4) are not likely to be transferable to the overall network.
When we evaluate the performance of applying each of the 89 calibration models on all co-located data, we find that, based on R and RMSE values, the on-the-fly C2 correction performed better overall than the C1, C3, and C4 corrections for most calibration model forms (Tables 2 and 3).
Within corrections C1 and C2, we found that an increase in complexity of the model form resulted in a decreased RMSE. Overall, Model 21 yielded the best performance (RMSE = 1.281 µg m −3 when using the C2 correction, 1.475 µg m −3 when using the C1 correction with a LOSO CV, and 1.480 µg m −3 when using a LOBD correction). In comparison, the simplest model yielded an RMSE of 3.421 µg m −3 for the C1 correction and 3.008 µg m −3 when using the C2 correction. For correction C1, using a LOBD CV (Table 3) with the machine learning models resulted in better performance than using a LOSO CV (Table 2), except for Model 21, which is an RF model with additional time-of-day and month covariates, for which performance using the LOSO CV was marginally better (RMSE: 1.475 µg m −3 versus 1.480 µg m −3 ).
We also found that, for corrections of short-term calibrations (C3 and C4), more complex models yielded a better performance (for example the RMSE for Model 16: 2.813 µg m −3 , RMSE for Model 2: 3.110 µg m −3 , generated using the C3 correction) when evaluated alone during the period of co-location (Table S2). However, when models generated using the C3 and C4 corrections were transferred to the entire time period of co-location, we found that more complex multivariate regression models (Models 13-16) and the machine learning model (Model 21) that include cos_time performed significantly worse than the simpler models (Table 2). In some cases, these models performed worse than the uncorrected measurements. For example, applying Model 16 generated using C3 on the entire dataset resulted in an RMSE of 32.951 µg m −3 compared to 6.469 µg m −3 for the uncorrected measurements.
Including data from another season, spring, in addition to winter in the training sample (C4) resulted in significantly improved performance of calibration models over the entire dataset compared to C3 (winter), although it did not result in an improvement in performance for all models compared to the uncorrected measurements. For example, Model 16 generated using C4 yielded an RMSE of 6.746 µg m −3 . Among the multivariate regression models, we found that models of the same form that corrected for RH instead of T or D did best. The best performance was observed for models that included the nonlinear correction for RH (Model 12) or included an RH ×T term (Model 5) ( Table 2).
Evaluating transferability of the calibration algorithms in space
Large reductions in RMSE are observed when applying simple linear corrections (Models 1-4) developed using a subset of the co-located data to the left-out sites (Fig. 3a, c, d, e) or time periods (Fig. 3b) across C1, C2, C3, and C4. Increasing the complexity of the model does not result in marked changes in correction performance on different test sets for C1 and C2. Although the performance of the corrected datasets did improve on average for some of the complex models considered (Models 17, 20, 21, for example, vis-a-vis simple linear regressions when using the C1 correction) (Fig. 3a, b), this was not the case for all test datasets considered, as evidenced by the overlapping distributions of RMSE performances (e.g., Model 11 using the C2 correction resulted in a worse fit for one of the test datasets). For C3 and C4, the performance of corrections was worse across all datasets for the more complex multivariate model formulations (Fig. 3d, e), indicating that using uncorrected data is better than using these corrections and calibration models. Wilcoxon tests and t tests (based on whether Shapiro-Wilk tests revealed that the distribution of RMSEs was normal) revealed significant improvements in the distribution of RMSEs for all corrected test sets vis-a-vis the uncorrected data. Across the different models, there was no significant difference in the distribution of RMSE values from applying C1 and C2 corrections to the test sets. For corrections C3 and C4, we found significant differences in the distribution of RMSEs obtained from running different models on the data, implying that the choice of model has a significant impact on transferability of the calibration models to other monitors.
The time series of corrected PM 2.5 values for Models 1, 2, 5, 16, and 21 (RF using additional variables) (using CV = LOSO for the machine learning Models 17 and 21) for corrections generated using C1, C2, C3, and C4 are displayed in Fig. 4 for Love My Air sensor CS1. These subsets of models were chosen, as they cover the range of model forms considered in this analysis.
From Fig. 4, we note that, although the different corrected values from C1 and C2 track each other well, there are small systematic differences between the different corrections. Peaks in corrected values using C2 tend to be higher than those using C1. Peaks in corrected values using machine learning methods using C1 are higher than those generated from multivariate regression models. Figure 4 also shows marked differences in the corrected values from C3 and C4. Specifically, Model 16 yields peaks in the data that corrections using the other models do not generate. This pattern was consistent when applying this suite of corrections to other Love My Air sensors.
Figure 3. Distribution of RMSE values for corrections previously proposed using (a) correction C1 when leaving out a co-location site in turn and then running the generated correction on the test site (note that for machine learning models (Models 17-21), we performed CV using a LOSO CV as well as a LOBD CV approach); (b) correction C1 when leaving out three-week periods of data at a time and generating corrections based on the data from the remaining time periods across each site, and evaluating the performance of the developed corrections on the held-out three weeks of data (note that for machine learning models (Models 17-21), we performed CV using a LOBD CV approach); (c) correction C2 when leaving out a co-location site in turn and then running the generated correction on the test site; (d) correction C3 when leaving out a co-location site in turn and then running the generated correction on the test site; (e) correction C4 when leaving out a co-location site in turn and then running the generated correction on the test site. Each point represents the RMSE for each test dataset permutation. The distribution of RMSEs is displayed using box plots and violin plots.
3.2 Evaluating sensitivity of the spatial and temporal trends of the low-cost sensor network to the method of calibration
The spatial and temporal RMSD values between corrected values generated from applying each of the 89 models using the four different correction approaches across all monitoring sites in the Love My Air network are displayed in Figs. 5 and 6, respectively. There is a larger temporal variation (max 32.79 µg m −3 ) in comparison to spatial variations displayed across corrections (max 11.95 µg m −3 ). Model 16 generated using the C3 correction has the greatest spatial and temporal RMSD in comparison with all other models. Models generated using the C3 and C4 corrections displayed the greatest spatial and temporal RMSD vis-a-vis C1 and C2. Figures S14-S17 display spatial RMSD values between all models corresponding to corrections C1-C4, respectively, to allow for a zoomed-in view of the impact of the different model forms for the four corrections. Similarly, Figs. S18-S21 display temporal RMSD values between all models corresponding to corrections C1-C4, respectively. Across all models, the temporal RMSD between models is greater than the spatial RMSD.
The distribution of uncertainty and the NR in hourly calibrated measurements over the 89 models by monitor are displayed in Fig. 7. Overall, there are small differences in uncertainties and NR of the calibrated measurements across sites. The average NR and uncertainty across all sites are 1.554 (median: 0.9768) and 0.044 (median: 0.033), respectively. We note that, although the uncertainties in the data are small, the average normalized range tends to be quite large.
Evaluating the sensitivity of hotspot detection across the network of sensors to the calibration method
Mean (95 % CI) PM 2.5 concentrations across the 89 different calibration models listed in Tables 2 and 3 at each Love My Air site for the duration of the experiment (1 January-30 September 2021) are displayed in Fig. S22. Due to overlap between the different calibrated measurements across sites, the ranking of sites based on pollutant concentrations is dependent on the calibration model used. Every hour, we ranked the different monitors for each of the 89 different calibration models in order to evaluate how sensitive pollution hotspots were to the calibration model used. We found that there were, on average, 4.4 (median = 5) sensors that were ranked most polluted. When this calculation was repeated using daily averaged calibrated data, there were, on average, 2.5 (median = 2) sensors that were ranked the most polluted. The corresponding value for weekly calibrated data was 2.4 (median = 1) and for monthly data was 3 (median = 3) (Fig. 8).
Supplementary analysis: evaluating transferability of calibration models developed in different pollution regimes
When we evaluated how well the models performed at high PM 2.5 concentrations (> 30 µg m −3 ) versus lower concentrations (≤ 30 µg m −3 ), we found that multivariate regression models generated using the C1 correction did not perform well in capturing peaks in PM 2.5 concentrations (normalized RMSE > 25 %) (Tables S3 and S4). Multivariate regression models generated using the C2 correction performed better than those generated using C1 (normalized RMSE ∼ 20 %-25 %). Machine learning models generated using both C1 and C2 corrections captured PM 2.5 peaks well (C1: normalized RMSE ∼ 10 %-25 %, C2: normalized RMSE ∼ 10 %-20 %). Specifically, the C2 RF model (Model 21) yielded the lowest RMSE values (4.180 µg m −3 , normalized RMSE: 9.8 %) of all models considered. The performance of models generated using C1 and C2 corrections in the low-concentration regime was the same as that over the entire dataset. This is because most measurements made were < 30 µg m −3 .
Models generated using C3 and C4 had the worst performance in both concentration regimes and yielded poorer agreement with reference measurements than even the uncorrected measurements. As in the case with the entire dataset, more complex multivariate regression models and machine learning models generated using C3 and C4 performed worse than more simple models in both PM 2.5 concentration intervals (Tables S3 and S4).
Supplementary analysis: evaluating transferability of calibration models developed across different time aggregation intervals
We then evaluated how well the models generated using C1, C2, C3, and C4 corrections performed when applied to minute-level LCS data at co-located sites (Tables S5 and S6). We found that the machine learning models generated using C1 and C2 improved the performance of the LCS. Model 21 (CV = LOSO) generated using C1 yielded an RMSE of 15.482 µg m −3 compared to 16.409 µg m −3 obtained from the uncorrected measurements. The more complex multivariate regression models yielded a significantly worse performance across all corrections. (Model 16 generated using C1 yielded an RMSE of 41.795 µg m −3 ). As in the case with the hourly averaged measurements, using correction C1, LOBD CV instead of LOSO for the machine learning models resulted in better model performance, except for Model 21. Few models generated using C3 and C4 resulted in improved performance when applied to the minute-level measurements (Tables S5 and S6).
Figure 7. Distribution of (a) uncertainty and (b) normalized range (NR) in hourly calibrated measurements across all 89 calibration models at each site using the methodology described in Sect. 2.3.5.
Figure 8. Variation in the number of sites that were ranked as "most polluted" across the 89 different calibration models for different time-averaging periods, displayed using box plots.
Discussion and conclusions
In our analysis of how transferable the correction models developed at the Love My Air co-location sites are to the rest of the network, we found that, for C1 (corrections developed on the entire co-location dataset) and C2 (on-the-fly corrections), more complex model forms yielded better predictions (higher R, lower RMSE) at the co-located sites. This is likely because the machine learning models were best able to capture complex, non-linear relationships between the LCS measurements, meteorological parameters, and reference data when conditions at the co-location sites were representative of those of the rest of the network. Model 21, which included additional covariates intended to capture periodicities in the data, such as seasonality, yielded the best performance, suggesting that, in this study, the relationship between LCS measurements and reference data varies over time. One possible reason for this could be the impact of changing aerosol composition in time, which has been shown to impact the LCS calibration function (Malings et al., 2020).
When examining the short-term corrections, C3 (corrections developed on two weeks of co-located data at the start of the experiment) and C4 (corrections developed on two weeks of co-located data in January and two weeks of co-located data in May), we found that, although these corrections appeared to significantly improve LCS measurements during the time period of model development (Table S2), they did not perform well when transferred to the entire time period of operation (Table 2). Many of the models, especially the more complex multivariate regression models, performed significantly worse than even the uncorrected measurements. This result indicates that calibration models generated during short time periods, even if the time periods correspond to different seasons, may not necessarily transfer well to other times, likely because conditions during co-location (aerosol type, meteorology) are not representative of network operating conditions. Our results suggest the need for statistical calibration models to be developed over longer time periods that better capture different LCS operating conditions. However, for C3 and C4, we did find that models relying on nonlinear formulations of RH, which serve as proxies for hygroscopic growth, yielded the best performance compared to more complex models (Table 2). This suggests that physics-based calibrations are a potentially useful alternative approach, especially when relying on short co-location periods, and need to be explored further.
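One widely used physics-based option of this kind, shown only as a sketch (the functional form and the value of κ are assumptions that would have to be fitted to local co-location data, and this is not necessarily the formulation evaluated here), rescales the raw LCS mass by a κ-Köhler-type hygroscopic growth factor estimated from RH:

# Sketch: κ-Köhler-type RH correction for raw LCS PM2.5 (illustrative only).
rh_correct <- function(pm_raw, rh, kappa = 0.3) {
  aw <- pmin(pmax(rh, 1), 99) / 100             # water activity, bounded away from 0 and 1
  growth <- 1 + (kappa / 1.65) / (-1 + 1 / aw)  # approximate hygroscopic growth factor
  pm_raw / growth                               # "dried" PM estimate
}
# sensor$pm_dry <- rh_correct(sensor$pm_cs, sensor$rh)  # `sensor` is a hypothetical data frame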
When evaluating how transferable different calibration models were to the rest of the network, we found that, for C1 and C2, more complex models that appeared to perform well at the co-location sites did not necessarily transfer best to the rest of the network. Specifically, when we tested these models on a co-located site that was left out when generating the calibration models, we found that some of the more complex models using the C2 correction yielded a significantly worse performance at some test sites (Fig. 3). If the corrected data were going to be used to make site-specific decisions, then such corrections would lead to important errors. For C3 and C4, we observed a large distribution of RMSE values across sites. For several of the more complex models developed using C3 and C4 corrections, the RMSE values at some left-out sites were larger than observed for the uncorrected data, suggesting that certain calibration models could result in even more error-prone data than using uncorrected measurements. As the meteorological parameters for the duration of the C3 and C4 co-locations are not representative of overall operating conditions of the network, it is likely that the more complex models were overfit to conditions during the co-location, leading to them not performing well over the network operations.
For C1 and C2, we found that there were no significant differences in the distribution of the performance metric RMSE of corrected measurements from simpler models in comparison to those derived from more complex corrections at test sites (Fig. 3). For C3 and C4, we found significant differences in the distribution of RMSE across test sites, which indicates that these models are likely site specific and not easily transferable to other sites in the network. This suggests that less complex models might be preferred when short-term co-locations are carried out for sensor calibration, especially when conditions during the short-term co-location are not representative of that of the network.
We found that the temporal RMSD (Fig. 6) was greater than the spatial RMSD (Fig. 5) for the ensemble of corrected measurements developed by applying the 89 different calibration models to the Love My Air network. One reason this may be the case is that PM 2.5 concentrations across the different Love My Air sites in Denver are highly correlated (Fig. S5), indicating that the contribution of local sources to PM 2.5 concentrations in the Denver neighborhoods in which Love My Air was deployed is small. Given the low variability in PM 2.5 concentrations across sites, it makes sense that variations in the corrected PM 2.5 concentrations appear in time rather than in space. The largest pairwise temporal RMSD values were all seen between corrections derived from complex models using the C3 correction.
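The basic building block of these comparisons is a pairwise RMSD between two corrected series; the exact spatial and temporal aggregations are defined in the methods section, so the snippet below is only a generic sketch with assumed variable names:

# Sketch: pairwise RMSD between two corrected PM2.5 series (generic building block).
pairwise_rmsd <- function(x, y) sqrt(mean((x - y)^2, na.rm = TRUE))

# All pairwise RMSDs across an ensemble matrix `ens` (rows = hours at one site,
# columns = the corrected series from each calibration model):
rmsd_matrix <- function(ens) {
  n <- ncol(ens)
  outer(seq_len(n), seq_len(n),
        Vectorize(function(i, j) pairwise_rmsd(ens[, i], ens[, j])))
}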
Finally, we observed that the uncertainty in PM 2.5 concentrations across the ensemble of 89 calibration models (Fig. 7) was consistently small for the Love My Air Denver network. The normalized range in the corrected measurements, on the other hand, was large; the uncertainty (95 % CI) in the corrected measurements nevertheless falls within a relatively small interval. The average normalized range tends to be quite large, likely due to outlier corrected values produced by some of the more complex models evaluated using the C3 and C4 corrections. Thus, deciding which calibration model to pick has important consequences for decision makers when using data from this network.
Our findings reinforce the idea that evaluating calibration models at all co-location sites using overall metrics like RMSE should not be seen as the only or the best way to determine how to calibrate a network of LCS. Instead, approaches like the ones we have demonstrated and metrics like the ones we have proposed should be used to evaluate calibration transferability.
We found that the detection of the "most polluted" site in the Love My Air network (an important use case of LCS networks) depended on the calibration model applied to the network. We also found that, for the Love My Air network, the detection of the most polluted site was sensitive to the duration of time averaging of the corrected measurements (Fig. 8). Hotspot detection was most robust using weekly averaged measurements. A possible reason for this is that PM 2.5 in Denver varied primarily on a weekly scale, and analyses conducted using weekly values therefore yielded the most robust results. Such an analysis thus provides guidance on the most useful temporal scale for decision making related to evaluating hotspots in the Denver network.
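A sketch of this kind of robustness check (illustrative only; the long-format data frame with columns `site`, `model`, `period`, and `pm_corrected` is an assumption, not the authors' data layout) could tally, for each averaging period, how often each site is ranked most polluted across the ensemble of models:

# Sketch: count how often each site is ranked "most polluted" across models and periods.
hotspot_votes <- function(df) {
  agg <- aggregate(pm_corrected ~ site + model + period, data = df, FUN = mean)
  winners <- by(agg, list(agg$model, agg$period),
                function(d) as.character(d$site[which.max(d$pm_corrected)]))
  table(unlist(winners))  # votes per site over all model-period combinations
}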
In supplementary analyses, when we evaluated the sensitivity of other LCS use cases to the calibration model applied, such as tracking high pollution concentrations during fire or smoke events, we found that different models yielded different performance results in different pollution regimes. Machine learning models developed using C1 and models developed using C2 were better than multivariate regression models generated using C1 at capturing peaks in pollution (> 30 µg m −3 ). All models using C3 and C4 yielded poor performance results in tracking high pollution events (Tables S3 and S4). This is likely because PM 2.5 concentrations during the C3 and C4 co-location tended to be low; the calibration models developed thus did not transfer well to other concentrations. When evaluating how well the calibration models developed using hourly aggregated measurements translated to high-resolution minute-level data (Tables S5 and S6), we observed that machine learning models generated using C1 and C2 improved the LCS measurements. More complex multivariate regression models performed poorly. All C3 and C4 models also performed poorly. This suggests that caution needs to be exercised when transferring models developed at one timescale to another. Note that, in this paper, because pollution concentrations did not show much spatial variation, we focus on evaluating transferability across timescales only. | 2022-08-23T03:21:57.959Z | 2022-11-02T00:00:00.000 | {
"year": 2022,
"sha1": "1c47ae40d10447823083916812f0a21857304731",
"oa_license": "CCBY",
"oa_url": "https://amt.copernicus.org/articles/15/6309/2022/amt-15-6309-2022.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9d37f56b6b80979c348b3aa20bd1b1c2d5c79a2c",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
258619652 | pes2o/s2orc | v3-fos-license | Prevalence of depression and its association with quality of life among guardians of hospitalized psychiatric patients during the COVID-19 pandemic: a network perspective
Background The COVID-19 pandemic has greatly affected treatment-seeking behaviors of psychiatric patients and their guardians. Barriers to access of mental health services may contribute to adverse mental health consequences, not only for psychiatric patients, but also for their guardians. This study explored the prevalence of depression and its association with quality of life among guardians of hospitalized psychiatric patients during the COVID-19 pandemic. Methods This multi-center, cross-sectional study was conducted in China. Symptoms of depression and anxiety, fatigue level and quality of life (QOL) of guardians were measured with validated Chinese versions of the Patient Health Questionnaire – 9 (PHQ-9), Generalized Anxiety Disorder Scale – 7 (GAD-7), fatigue numeric rating scale (FNRS), and the first two items of the World Health Organization Quality of Life Questionnaire - brief version (WHOQOL-BREF), respectively. Independent correlates of depression were evaluated using multiple logistic regression analysis. Analysis of covariance (ANCOVA) was used to compare global QOL of depressed versus non-depressed guardians. The network structure of depressive symptoms among guardians was constructed using an extended Bayesian Information Criterion (EBIC) model. Results The prevalence of depression among guardians of hospitalized psychiatric patients was 32.4% (95% CI: 29.7–35.2%). GAD-7 total scores (OR = 1.9, 95% CI: 1.8–2.1) and fatigue (OR = 1.2, 95% CI: 1.1–1.4) were positively correlated with depression among guardians. After controlling for significant correlates of depression, depressed guardians had lower QOL than non-depressed peers did [F(1, 1,101) = 29.24, p < 0.001]. “Loss of energy” (item 4 of the PHQ-9), “concentration difficulties” (item 7 of the PHQ-9) and “sad mood” (item 2 of the PHQ-9) were the most central symptoms in the network model of depression for guardians. Conclusion About one third of guardians of hospitalized psychiatric patients reported depression during the COVID-19 pandemic. Poorer QOL was related to having depression in this sample. In light of their emergence as key central symptoms, “loss of energy,” “concentration problems,” and “sad mood” are potentially useful targets for mental health services designed to support caregivers of psychiatric patients.
Introduction
The coronavirus disease 2019 (COVID-19) was first reported in Wuhan, Hubei province of China at the end of 2019 and subsequently emerged in other parts of the world (1,2). Notwithstanding its negative impact on the health and security of humanity, the COVID-19 pandemic has also had pronounced effects on mental health status and quality of life (QOL) in various populations (3)(4)(5).
In times of pandemics, people with mental disorders are more vulnerable to respiratory tract infections (6). Possible correlates of this risk include higher smoking rates, poor personal hygiene and negligence of infection risks due to cognitive impairment as well as crowded living conditions and lack of personal protective equipment (PPE) in psychiatric wards (6)(7)(8)(9). As a result of such factors, it is reasonable to hypothesize that hospitalized psychiatric patients are more susceptible to COVID-19. Indeed, this contention was supported early in the pandemic when at least 50 hospitalized psychiatric patients and 30 mental health professionals in a major psychiatric hospital in Wuhan, China were diagnosed with COVID-19 in early 2020 (7,10). Additionally, a study based on electronic health records in the United States found that patients with psychiatric disorders had a higher risk for COVID-19 infection than those without psychiatric disorders (adjusted OR = 7.64 for depression; adjusted OR = 7.34 for schizophrenia) (11). Two other studies in Spain found that 45% of COVID-19 inpatients had history of psychiatric disorders and 37% of COVID-19 inpatients had medical conditions (12,13), supporting the view that psychiatric patients are more prone to COVID-19.
In order to minimize infection risk, policies to prevent unnecessary visits and social contacts in hospitals and psychiatric wards were implemented and multiple clinical services were curtailed during early stages of the COVID-19 pandemic (8,(14)(15)(16). Psychiatric patients and their guardians have been confronted with numerous barriers in accessing mental health services during the COVID-19 pandemic including difficulties in visiting psychiatrists, reduced access to psychotropic medications and hospital admissions, and problems with evaluating degree of compliance with recommended treatment protocols (8,16,17). All of these barriers to optimal psychiatric care could increase risk for depression and reduced quality of life (QOL), not only among patients but also among their guardians. Previous studies have revealed that guardians of adolescents with Type 1 diabetes and isolated COVID-19 patients suffered from higher levels of depression, anxiety and pandemic-related worry compared to adults who did not have family members who were ill during the COVID-19 pandemic (18)(19)(20); these findings underscore the importance of considering the mental health status of guardians who must care for psychiatric patients and undertake relevant obligations during the COVID-19 pandemic.
To date, the impact of the COVID-19 pandemic on the mental health status of psychiatric patients has been widely investigated (21)(22)(23)(24). In contrast, there has been a paucity of research on the mental health status and QOL of guardians of the hospitalized psychiatric patients. Documenting the prevalence of depression as well as its correlates and association with QOL among guardians of psychiatric patients during the pandemic is important for ensuring close support systems of patients are maintained and distressed caregivers also have access to interventions that reduce their own suffering.
Traditionally, epidemiological research on depression has adopted a latent factor approach (25) in which depression is regarded as an unobservable, latent factor and depressive symptoms are observable manifestations or indicators of depression (26). Key assumptions underlying the latent factor approach are that all symptoms are present or dependent upon one another and equally important in their contributions to overall depression levels (25,26). However, symptoms such as anhedonia, hopelessness and reduced energy often have robust associations with each other even when diagnostic criteria for MDD are not fulfilled (27,28). Such data highlight how traditional latent factor approaches cannot elucidate inter-relationships between different depressive symptoms, although individual symptoms may play an important role in the onset and maintenance of depression (29,30). As an alternative to the traditional perspective, a network approach may provide more understanding of how depressive symptoms are interconnected and which symptoms are most influential for the syndrome within particular populations (31)(32)(33).
Based on the preceding overview, the initial aim of this study was to document the prevalence of depression, its correlates, and its association with QOL among guardians of hospitalized psychiatric patients during the COVID-19 pandemic. In addition, we used network analysis to generate a network model of interrelations between specific depressive symptoms within this understudied group.
Study setting and participants
This multi-center, cross-sectional study was conducted between May 24, 2020 and January 18, 2021 in seven tertiary psychiatric hospitals and psychiatric units of general hospitals in China. To avoid COVID-19 infection risk, data were collected using the WeChat-based QuestionnaireStar application as recommended in previous studies (34,35). Guardians needed to declare their health status using WeChat during the COVID-19 pandemic when they entered participating hospitals. Therefore, all guardians were presumed to be WeChat users. Guardians who visited hospitalized patients during the study period were consecutively invited to participate. Inclusion criteria were as follows: (1) age 18 years or older; (2) ability to read Chinese and understand the purpose and contents of the assessments; (3) status as a guardian (e.g., spouse, child, parent, other kin or friend) of a hospitalized psychiatric patient in participating hospitals; (4) provision of online electronic informed consent. Guardians with a psychiatric history or current psychiatric disorders were excluded from this study since this was a possible confounding factor to estimating depression prevalence for guardians as a population distinct from psychiatric patients. The study protocol was centrally approved by the research ethics committee of Beijing Anding Hospital, Capital Medical University and other participating hospitals.
The data collection form was designed using the QuestionnaireStar application. A Quick Response (QR) code linked to the informed consent and data collection form was scanned by the participants with their smart phones. Those who met eligibility criteria completed the assessment in participating hospitals on a voluntary, anonymous basis.
Data collection and assessment tools
Socio-demographic data assessed included age, gender, marital status, employment status, education level, urban versus rural residence, presence of chronic physical diseases, perceived financial status, frequency of social media use during the COVID-19 pandemic, and experience of difficulty in visiting mental health services during the pandemic.
Severity of depressive symptoms was assessed using the validated Chinese version of the Patient Health Questionnaire -9 (PHQ-9). The PHQ-9 consists of nine items, each rated on a frequency scale from 0 (not at all) to 3 (almost every day) (36,37). Higher PHQ-9 scores represent more severe depression (38). The reliability and validity of the PHQ-9 are satisfactory in Chinese populations (39,40). Participants were regarded as "having clinically relevant depression" (having depression hereafter) if their total PHQ-9 score was ≥5 (38).
Severity of anxiety symptoms was assessed using the Chinese version of the Generalized Anxiety Disorder Scale -7 (GAD-7). The GAD-7 consists of seven self-report items, each of which is rated on a frequency scale from 0 (not at all) to 3 (almost every day) (41); higher GAD-7 scores reflect more severe anxiety symptoms. The GAD-7 Chinese version has been validated in Chinese populations (42,43). Level of fatigue was evaluated using a single-item fatigue numeric rating scale (FNRS) (44). FNRS scores range from 0 (no fatigue) to 10 (extreme fatigue).
Global quality of life (QOL) was assessed with the first two items of the World Health Organization Quality of Life Questionnaire -brief version (WHOQOL-BREF) (45,46). These items queried overall quality of life and general health status from 1 (extremely unsatisfied) to 5 (extremely satisfied) (47). This two-item QOL index has been validated and used widely in Chinese samples (48).
Data analyses
All data analyses were conducted using Statistical Analysis System (SAS) OnDemand for Academics (SAS Institute Inc., Cary, NC, United States) and R version 4.2.1 (49). Sociodemographic and emotional status differences between depressed versus non-depressed guardian subgroups were assessed using independent two-sample t-tests, Wilcoxon rank sum tests, and chi-square tests, as appropriate. Analysis of covariance (ANCOVA) was used to compare global QOL score differences between depressed versus non-depressed guardians after first controlling for the impact of other measures on which there were subgroup differences in univariable analyses (i.e., covariates). Independent predictors of depression levels were evaluated using a multiple logistic regression analysis; depression was the dependent variable, and significant univariate correlates of depression subgroup status were predictors in the analysis. Age and sex are generally associated with mental health status and QOL in many populations (50); therefore, they were included as potential predictors in the multiple logistic regression model, even though neither had significant associations with depression in univariate analyses. In addition, independent predictors of depression were explored separately for first-degree relatives (spouse, children, and parents). Two-sided p-values lower than 0.05 were considered to be statistically significant.
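As a rough illustration of this regression step (a sketch, not the authors' SAS/R code; the data frame `guardians` and its column names are hypothetical placeholders), adjusted odds ratios with Wald 95% CIs could be obtained in R as follows:

# Sketch: multiple logistic regression for correlates of depression (PHQ-9 total >= 5).
fit <- glm(depressed ~ age + sex + gad7_total + fatigue +
             financial_status + visit_difficulty + medication_compliance,
           data = guardians, family = binomial)
round(exp(cbind(OR = coef(fit), confint.default(fit))), 2)  # odds ratios with Wald 95% CIs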
To capture the full spectrum of depression severity and increase external validity, the network structure of depressive symptoms was constructed for all guardians of hospitalized psychiatric patients rather than only the depressed guardians, as recommended by previous studies (51,52). An extended Bayesian Information Criterion (EBIC) model graphical least absolute shrinkage and selection operator (gLASSO) network model was adopted in this study. In the network structure, each individual symptom was a "node," and connections between symptoms were "edges." The centrality of each symptom was measured using strength, defined as the sum of the absolute weights of the edges connecting a certain node to all the other nodes. The size of a node represented the strength of a particular symptom. The thickness of each edge represented the strength of the association between two nodes. The color of an edge reflected the direction of the association with green edges indicating positive associations and red edges indicating negative associations between nodes.
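A minimal sketch of this estimation step (not the study's exact code; `phq_items`, standing for the nine PHQ-9 item scores, is an assumption) using the bootnet and qgraph packages could look like this:

library(bootnet)
library(qgraph)

# Sketch: regularized partial-correlation (EBIC gLASSO) network of the nine PHQ-9 items.
net <- estimateNetwork(phq_items, default = "EBICglasso", tuning = 0.5)
plot(net, layout = "spring")                      # green = positive, red = negative edges
centralityTable(net$graph, standardized = FALSE)  # strength centrality per symptom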
Network stability was examined via the correlation stability coefficient (CS-C) using a case-dropping 1,000-time bootstrap method (53, 54). Preferably, a CS-C exceeds 0.5, with a minimum value requirement of 0.25 (55).
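Continuing the sketch above (again illustrative rather than the authors' code), the case-dropping bootstrap and the CS-coefficient could be obtained with bootnet:

# Sketch: case-dropping bootstrap for the stability of strength centrality.
boot_cases <- bootnet(net, nBoots = 1000, type = "case", statistics = "strength")
corStability(boot_cases)  # CS-coefficient; >= 0.5 preferred, 0.25 minimum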
To examine the impact of anxiety symptoms and fatigue on the observed network structure of depressive symptoms, the network model of depression was re-estimated after adjusting for anxiety symptoms and fatigue. A flow network was applied to investigate relationships between individual depressive symptoms and QOL. R packages used in this study were networktools version 1
Sociodemographic and clinical characteristics of guardian sample
In total, 1,163 guardians of hospitalized psychiatric patients were invited to participate in this study; of these, 1,101 (94.7%) agreed to participate, fulfilled the eligibility criteria, and completed the assessment. Table 1 presents demographics and clinical characteristics of final guardian sample. The prevalence of depression among guardians of hospitalized psychiatric patients was 32.4% (95% CI: 29.7-35.2%).
Compared to their non-depressed peers, guardians with depression were more likely to report poorer financial status, difficulty visiting a mental health service during the pandemic, increased fatigue, and elevations in anxiety symptoms. Depressed guardians were also significantly less likely to report that their loved ones showed good compliance with medication during the pandemic and had a lower mean overall QOL level (all p-values<0.05; see Table 1). In contrast, depressed versus non-depressed guardian subgroups did not differ on any demographic measure.
Global QOL differences between depressed versus non-depressed guardians
After adjusting for other significant correlates of depression status, guardians with depression continued to have a significantly lower mean QOL level than non-depressed guardians had [F (1, 1,101) = 29.24, p < 0.001].
Predictors of depression among guardians of hospitalized psychiatric patients
The multiple logistic regression analysis indicated higher total GAD-7 scores (OR = 1.9, 95% CI: 1.8-2.1) and fatigue scores (OR = 1.2, 95% CI: 1.1-1.4) were the only unique, statistically significant predictors of elevated depression levels within the guardian sample (see Table 2). In a subgroup analysis of first-degree relative guardians, findings were similar to those for the whole sample (see Supplementary Table S1).
Network analysis
The network structure of depressive symptoms, as estimated with the EBIC glasso model, is shown in Figure 1. PHQ-9 items 4 (DEP-4, loss of energy), 7 (DEP-7, concentration difficulties), and 2 (DEP-2, sad mood) had the highest strengths in the network model of depressive symptoms. Exact centrality strength values are shown in Supplementary Table S2. The CS-C for network model strength was 0.75, indicating that centrality strength values in the network remained stable after dropping 75% of the sample (Figure 2).
The re-estimated network structure of depressive symptoms after adjusting for anxiety symptoms and fatigue is shown in Figure 3. Nodes with three highest strengths in the adjusted network ( Figure 3) were identical to those in the unadjusted network (Figure 1), suggesting that neither anxiety symptoms nor fatigue had a significant influence on the initial network model. Exact centrality strengths in the adjusted network model are shown in Supplementary Table S3. The flow network of depressive symptoms and QOL indicated PHQ-9 items 6 (DEP-6, guilt feelings), 7 (DEP-7, concentration difficulties) and 3 (DEP-3, sleep problems) were strongly connected with global QOL within the overall guardian sample (Figure 4). The weighted adjacency matrix of the network for global QOL and depressive symptoms was shown in Supplementary Table S4.
Supplementary subgroup network analyses showed that the network features in depressed guardians were similar to those found for the whole sample (Supplementary Figures S1, S2).
Discussion
To our knowledge, this is the first study to explore the prevalence of depression and its association with QOL among guardians of hospitalized psychiatric patients during the COVID-19 pandemic. The prevalence of depression among guardians was 32.4%. Since no prevalence data from previous studies of guardians of hospitalized psychiatric patients could be identified, it is not entirely clear whether the rate in this sample was elevated relative to related comparison groups. However, previous COVID-19 pandemic era studies (61, 62) on guardians to assisted living residents and guardians to persons with neurocognitive disorders reported rates of depression (38.8% and 36.3% respectively) similar to those of the present study. Given that approximately one third of guardians experienced depression across these three studies, depression among caregivers of vulnerable patient groups appears to be a noteworthy yet overlooked mental health problem during the COVID-19 pandemic.
The relatively high prevalence of depression among guardians in this study could be attributed to several reasons. First, the closure of clinical psychiatric services during the early COVID-19 pandemic phase could have contributed to acute patient crises (8,63), including difficulties in visiting psychiatrists, reduced access to psychotropic medications, and/or barriers in maintaining medication compliance, all of which could exacerbate distress in patients as well as concerned family members including guardians. Second, news reports of increased nosocomial infections of COVID-19 within psychiatric hospitals could have aggravated guardians' pandemic-related worries (10). Third, cancellations of routine family visits to hospitals during the COVID-19 pandemic increased uncertainty about care for both patients and guardians. Finally, the PHQ-9 cutoff we adopted to identify depressed status may have contributed to this rate and is not necessarily identical to prevalence estimates that might be garnered from structured diagnostic interviews.
With respect to unique predictors of depression among guardians in our sample, higher GAD-7 total scores were positively correlated with depression scores. This finding aligns with previous studies indicating anxiety and depression are frequently comorbid with each other (64,65). To elaborate, a worldwide survey reported that almost 46% of patients with a lifetime prevalence of major depressive disorder (MDD) also have a lifetime history of anxiety disorder (66). Data from the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study found 53% of patients with MDD had significant concurrent anxiety symptoms (67). Depression and anxiety are also intertwined with one another over time (68); the presence of one condition may predispose the vulnerable to the other condition (69). Supporting biological foundations of comorbidity, genetic epidemiological studies suggest that depression and anxiety have a shared genetic etiology (70-73).
High levels of fatigue also emerged as a unique correlate of elevated depression scores in our sample. Paralleling comorbidity evidence for anxiety, fatigue is often viewed as comorbid with depression and is highly prevalent in a cluster of depressive symptoms (74-76), particularly within East-Asian samples who may somatize depressive symptoms (77). Neural pathway studies have also found chronic fatigue and depression have shared neurobiological mechanisms (78,79). In contrast to comorbidity interpretations, associations between depression and fatigue may be attributed to construct overlaps. Specifically, the diagnosis of depression and the PHQ-9 include "loss of energy" as a criterion (36,37) that overlaps with fatigue. Network analysis indicated "loss of energy" (DEP-4) had the highest centrality strength in the structure of depressive symptoms in our guardian sample. This finding aligns with Hinz et al. (80) who reported "loss of energy" had the highest factor loading of any PHQ-9 item. In tandem, these results underscore the importance of loss of energy vis-à-vis other symptoms of depression. In community-based settings, "loss of energy" is frequently endorsed when people encounter depressing life events (81,82). Conversely, in psychiatric samples, the most central symptom is often "sad mood" (83,84). This discrepancy highlights potential differences in the expression of depression between psychiatric and non-psychiatric samples such as guardians in this study. "Loss of energy" may be more central to experiences of depression among guardians of hospitalized psychiatric patients, in part, due to adopting a less physically active lifestyle during lockdowns (85,86) and/or increased stress associated with potentially heavier caregiving burdens related to fulfilling the guardian role during a pandemic (87).
"Concentration difficulties" (DEP-7) had the second highest strength centrality in the network of depressive symptoms in our guardian sample. "Sad mood" and "anhedonia" are conventionally accepted as the two core symptoms of MDD, in contrast to our finding that "concentration difficulties" emerged as the second most influential depressive symptom in guardians of hospitalized psychiatric patients. This could be explained, in part, by the fact that the PHQ-9 is a screening measure of depressive symptoms based on continuous severity ratings, rather than an MDD diagnosis. Nonetheless, the more influential symptoms in the network model of depressive symptoms based on the PHQ-9 assessment align with symptoms of MDD based on DSM criteria as well as research based on samples with similar characteristics. Specifically, our centrality findings are consistent with a previous study in which individuals with an external locus of attribution were more vulnerable to concentration problems than those with an internal locus of attribution (88). A comparatively stronger external orientation may help to explain the centrality of "concentration difficulties" (DEP-7) in the network model of depressive symptoms among guardians, since the extra guardianship and caregiving responsibilities of this group may have increased the likelihood of emphasizing external influences as causes of stress experiences. Moreover, concentration problems may be more prominent when levels of depression severity are low (89); presumably, a majority of guardians in our study sample did not experience severe depression in light of the need for considerable competence in undertaking their role. Our data suggest that "concentration difficulties" could be an important yet easily overlooked indicator in populations that experience stress and undertake guardianship or caregiving responsibilities. Finally, after adjusting for significant correlates of depression including anxiety and fatigue, depressed guardians had significantly lower QOL levels than their non-depressed peers did. The negative depression-QOL association appears to be robust given that it has also been observed in other populations including community-dwellers, older persons, and patients with cancer (93)(94)(95)(96). From a symptom-level perspective, "guilt feelings" (DEP-6), "concentration difficulties" (DEP-7) and "sleep problems" (DEP-3) had the strongest associations with global QOL in our guardian sample. As such, these symptoms could be useful targets for interventions designed to alleviate depression and improve QOL in this population.
Figure 1. Network structure and strength of the depressive symptoms among guardians of hospitalized psychiatric patients (N = 1,101).
Figure 2. Network stability of depressive symptoms among guardians of hospitalized psychiatric patients (N = 1,101).
Figure 3. Network structure and strength of depressive symptoms among guardians of hospitalized psychiatric patients after adjusting for anxiety symptoms and fatigue (N = 1,101).
Figure 4. Flow network of QOL and depressive symptoms among guardians of hospitalized psychiatric patients (N = 1,101).
Strengths of this study included its relatively large sample size, multi-center study design, and adoption of both a broad epidemiological perspective and a symptom-level perspective to evaluate depressive symptoms within an understudied population involved in the care of patients with psychiatric disorders. However, the study also had several methodological limitations. First, because a cross-sectional design was used, the time course of depression and changes in the expression of individual depressive symptoms over different phases of the pandemic could not be elucidated. Second, and relatedly, pre- versus post-pandemic rates of depression and network models could not be assessed due to the cross-sectional design and initiation of this study only after the COVID-19 pandemic had begun. Third, the network structure of depression was limited to PHQ-9 items, so it is possible that the network structure might differ based on a different depression questionnaire or interview-based assessment. Fourth, although WeChat is widely used in China and all guardians were presumed to be WeChat users, recruitment based on consecutive (i.e., non-probability) rather than random sampling is more prone to selection biases. Finally, it is not clear how well our findings extend to guardian samples in other countries that have experienced high levels of morbidity and mortality from COVID-19 and have adopted different policies for managing the pandemic.
In conclusion, this study found approximately 1/3 of guardians of hospitalized psychiatric patients in China reported depression during the COVID-19 pandemic. Anxiety and fatigue emerged as unique correlates of depression in the sample. "Loss of energy" (DEP-4), "concentration difficulties" (DEP-7), and "sad mood" (DEP-2) were the most influential symptoms in the associated network model. These symptoms could be valuable targets in treatments for depression while strategies to reduce sleep problems and guilt may aid in improving QOL of guardians.
Data availability statement
The datasets presented in this article are not readily available because the Research Ethics Committee of Beijing Anding Hospital that approved the study prohibits the authors from making publicly available the research dataset of clinical studies. Requests to access the datasets should be directed to xyutly@gmail.com.
Ethics statement
The studies involving human participants were reviewed and approved by Research Ethics Committee of Beijing Anding Hospital, Capital Medical University. The patients/participants provided their electronic written informed consent to participate in this study. | 2023-05-12T13:38:41.773Z | 2023-05-12T00:00:00.000 | {
"year": 2023,
"sha1": "0a4bcd73185e33529e9f106133bd3f511e9f3479",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "0a4bcd73185e33529e9f106133bd3f511e9f3479",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17921481 | pes2o/s2orc | v3-fos-license | On the relationship between individual and population health
The relationship between individual and population health is partially built on the broad dichotomization of medicine into clinical medicine and public health. Potential drawbacks of current views include seeing both individual and population health as absolute and independent concepts. I will argue that the relationship between individual and population health is largely relative and dynamic. Their interrelated dynamism derives from a causally defined life course perspective on health determination starting from an individual’s conception through growth, development and participation in the collective till death, all seen within the context of an adaptive society. Indeed, it will become clear that neither individual nor population health is identifiable or even definable without informative contextualization within the other. For instance, a person’s health cannot be seen in isolation but must be placed in the rich contextual web such as the socioeconomic circumstances and other health determinants of where they were conceived, born, bred, and how they shaped and were shaped by their environment and communities, especially given the prevailing population health exposures over their lifetime. We cannot discuss the “what” and “how much” of individual and population health until we know the cumulative trajectories of both, using appropriate causal language.
individual health purview) and public health (with its celebrated public or collective, 3 and thus, population health purview) (Jamrozik and Hobbs 2002;Arah 2005;Arah et al. 2006). This binary approach to health and medicine has also played an important role in differentiating public health from personal medical care (Acheson 1988;Arah 2009;Arah et al. 2006;Verweij and Dawson 2007). Arguably, the birth of public health ethics as distinguishable from clinical ethics also rests on this dichotomization of medicine (see Beauchamp 1975, 1983; Dawson and Verweij 2007). This dichotomization could even be traced back to the polarizing approaches of individualism and collectivism in the social sciences (O'Neill 1973;Weale 1981;Ball 2001). This if-it's-not-individual-it's-collective approach begs the question whether that is all there is to a possible relationship between individual health and population health. Is it possible to study the relationship between individual and population health entirely in terms of the individual or the collective? And if at all, could the same concept of health be as easily mapped onto the population level as at the individual level?
This article will argue that neither individual nor population health is identifiable or even definable without informative contextualization within the other. For instance, a person's health cannot be seen only in isolation but must be placed in the rich contextual web such as the socioeconomic circumstances and other health determinants of where they were conceived, born, bred, and how they shaped and were shaped by their environment and communities, especially given the prevailing population health exposures over their lifetime. We cannot discuss the ''what'' and ''how much'' of individual and population health until we know the cumulative trajectories of both, using appropriate causal language. Indeed, the complementary relationship between individual and population health evokes important socially relevant causal inferences about both having the duality of being determinants and outcomes over time, and within and between places or societies. The causal interpretations accorded both types of health flow directly from and are foundational to their definitional and measurement concerns.
Lonely lives: from the concept and collective context of health to individual health
Health as a concept is the focus of heated debates in the philosophy and medical literature. 4 This literature is overwhelmingly concerned with the health of the individual and the medical or healthcare interpretations and interventions at the level of the diseased individual. In those instances, the term healthcare is often used to imply both personal medical care and public health. 5 Currently, there are at least two major schools of thought on the concept of health, namely, the naturalist and the normativist theories of health (Boorse 1975, 1977, 1997; Schramme 2007; Nordenfelt 1986, 1995, 2007). Within the normativist theory, there are weak and strong normativist views (Khushf 2007).
The naturalist theory of health, which claims to be descriptive, value-free and consistent with evolutionary theory, states that an individual is completely healthy if and only if all her organs function normally, that is, given a statistically normal environment, her organs make at least their statistically normal contribution to her survival or to the human species survival (Boorse 1977(Boorse , 1997Schramme 2007). Thus, a healthy person is easily identified through objective medical investigation. According to normativist criticisms (Nordenfelt 2007), the naturalist theory of health lays too much emphasis on internal processes, biology and the absence of disease, effectively excluding extrabiological considerations such as ''person,'' ''intentional action'' and ''cultural standards.'' On the other hand, the normativist account, which espouses a value-laden evaluative approach holds that an individual is completely healthy if and only if she has the ability, given standard circumstances, to reach all her vital or essential goals in life (Nordenfelt 2007). This latter theory depicts a continuum where health accommodates disease, takes a holistic contextual approach, and instrumentalizes health in the larger scheme of vital life goals.
Interestingly both theories of health, to some extent, see disease in terms of relevant organ dysfunction. For instance, according to the naturalist account, a person has a disease if and only if at least one of her organs functions subnormally, given a statistically normal environment, 3 I use ''collective'' to refer to a definable group of people who share or are motivated by at least one common interest or work together to achieve a common objective. ''Collective'' may give an objectionable sense of an aggregation, yet it has a powerful way of reminding us that every society or collective is made up of individuals who are bound in a rich tapestry (Arah 2009). 4 See the March 2007 issue of the journal Medicine, Health Care and Philosophy for illuminating discussions of the concept of health. 5 As argued elsewhere Arah 2005Arah , 2009Arah et al. 2006), (personal) medical care is best used to connote the more individually oriented healthcare services, usually involving oneon-one patient-physician interactions, whereas public health-in addition to its health of the public or population health meaning-is perhaps best described as the organized efforts aimed at collective mechanisms of ensuring the health of the collective or the healthful context for the interacting individuals within the collective. while the normativist theory asserts that a disease is a state or process in which the individual has at least one organ involved in any state that tends to reduce the individual's health. Although engaging in a debate on the merits of each theory of health is beyond the scope of this article, I want to point out that both theories appear to take the context or circumstances or environment of any health-disease continuum as merely observed or passive, not active, interventional or causal. 6 Yet, we all know that many diseases 7 arise from the complex interplay of the person and her context, be it social, psychological, physical, economic or not (Lalonde 1974;Evans and Stoddart 1990;van Oers 2002;Arah et al. 2006). My argument is that any concept of individual health must emphasize the role of the person's circumstances in health maintenance or even in disease causation, fleshing out the imbalance between the internal and external functionings. This imbalance is reflected in a recent attempt to characterize the origins of human disease (Mackenbach 2006): In all its manifestations, human disease is a reaction of organisms to, and/or a failure to cope with, one or more unbalancing changes in their internal environments. These are caused by one or more unfavourable exchanges with their external environments and/or failures in the structural and functional design of organisms. In the final analysis, human disease is attributable to the dependence of organisms on a fundamentally hostile external environment and to unfortunate evolutionary legacies.
To be sure, there is more to health than mere absence of disease. An emphasis on the notion of the context-or what naturalists call the ''statistically normal environment'' or the normativists call ''standard circumstances''-is needed to understand how health is promoted in a positive sense, maintained or disrupted, and to give meaning to the theory of health as a continuum rather than as a binary concept of health versus disease. Three important properties characterize the context of health and disease. Firstly, this context has to be seen in terms of internal-external balance between the individual and her context or environment. Secondly, the contextual balance must be causal in nature, at least in the counterfactual sense of being capable of leading to a different individual health if the balance were altered (Lewis 1973;Greenland 2000;Maldonado and Greenland 2002;Pearl 2000). Hume (1748, p. 115) defines a cause to be ''an object, followed by another …where, if the first object had not been, the second had never existed.'' An important aspect of this view of causation is its counterfactual concept: a certain outcome event (the ''second object,'' or effect) would not have occurred if, contrary to fact, an earlier event (the ''first object,'' or cause) had not occurred (Maldonado and Greenland 2002;Greenland 2005). Thirdly, context is cumulative. Early life insults can and have been known to persist into adult life (Kuh and Ben-Shlomo 2004), and to curb the ability to pursue life's vital goals (Nordenfelt 1995) or what one may have reason to value (Sen 1985(Sen , 1992. The foregoing properties redefine the context of health as being not merely observed but actually causative or determinant of the level, dynamics and distribution of health. This is in line with the popular use of the phrase ''determinants of health'' in the health literature (Arah and Westert 2005). 8 As we will see later on, the revitalization of the context part of the health concept allows us to evaluate the health relationship between individuals and across populations, in essence, linking individual and population health.
Populations without individuals: from the concept of health and context of interacting individuals to population health
Health is a very individual affair. Or is it? When Tolu broke her leg in a motor accident on a precariously narrow road in her home town in south-west Nigeria, it seemed fair to say it was Tolu's health, not that of her community or any such 6 To my understanding, both the naturalist and normativist define probability of health, Pr(H), in terms of ''given the biostatistically normal environment'' (Boorse 1997) or ''given standard circumstances'' (Nordenfelt 2007), what I will call the context C: thus, health probability is, simply put, Pr(H = h | C = c). However, this Pr(H = h | C = c) is not the same thing as Pr(H = h | do{C = c}), that is, what health would be if the context were seen as an external intervention or a causal one influenced by, say, active change of environment, lifestyle, interactions, and policies. Thus, C is not merely observed in the definition for it to be relevant to health, it must be causally relevant (hence, the ''do{C = c}'' calculus). The probability expression Pr(H = h | do{C = c}) is isomorphic to the potential outcomes or counterfactual framework of causality envisaged by Hume (1748) and Lewis (1973). For instance, allowing context or ''given standard circumstances'' to take on a causalinterventionist meaning is important for appreciating what Tolu's health would be if she moved from her deprived circumstances in the developing Nigeria to the safer affluence of England: Tolu's context is thus not only observed but was done by her ''changing'' her context. This topic of causality as interventionist even in so-called observed context versus mere description of observations as a substitute for causal inference using non-experimental data is the subject of recently renewed technical and philosophical interests (Spohn 1980;Pearl 1995Pearl , 2000Greenland 2000;Maldonado and Greenland 2002;Spirtes et al. 1993). 7 To the determinist, this might well include all diseases. 8 Unfortunately, the term ''determinants of health'' may leave an unsavory feeling that the relationship between individual (or even population) health and its context is rigidly deterministic. Although I personally see a role for determinism, I temper this to mean no more than probabilistic determinism, within a counterfactual framework (Hume 1739(Hume , 1748Lewis 1973;Pearl 2000).
collective to which she belonged, that was primarily compromised. It turned out that Tolu, who was a publicly employed physician, in her deprived town with few doctors, was on her way to the hospital, to respond to an emergency call from the local hospital to help out on a particularly busy day. She was supposed to be enjoying her off-duty rest on that day. Typically, she would attend to a lot of patients, many of whom suffered from infectious diseases, were malnourished, and had been victims of road traffic accidents, and so on. Being incapacitated by her injury, she was unable to attend to her patients, who must now increase the workload of other already over-stretched doctors. The infants among the patients suffered disproportionately; they were more vulnerable and had illnesses that rapidly consumed them without prompt care. Unknown to most, the hospital was unable to save a number of such vulnerable patients who would have been seen by Tolu had she not been reduced to a patient herself by a complex web of social and personal circumstances. Her health was intricately linked to the health of her fellow townspeople. Not only did they suffer as a result of her inability to be a physician to them, but they were also subject to the same conditions (dangerous roads, deprivation, and other ''standard circumstances'') that shaped Tolu's health and her pursuit of her vital goals (which included being able to cycle, being an attending physician to the needy, and so on). Actually, she chose to become a physician as a result of the telling experiences of growing up in the town's squalor. So, their lives, well-being and health were co-dependent, at least on some level. In a sense, it was difficult for Tolu to remain healthy in a town full of so many suffering people. Indeed, it would be difficult to conclude that this town's population health was ideal, full or complete. The interacting individuals who made up the collective were often at risk of less-than-full health, largely due to the collective ''standard circumstances'' they lived in, a context they sculptured, or that was sculptured for them, in some way, and which also sculptured who and what they became. Admittedly, the foregoing illustration is a little dramatized. It serves its purpose nonetheless: health is not entirely individual; it is relative to the individual's context, which in turn is fashioned out of the interactions that exist between members of any defined collective whose health (read: population health) is defined by the health and context of its members. The circularity of this concept and argument is not lost on us. Many diseases such as allergic, cardiovascular, and even genetic 9 disorders seem to have contextual antecedents (Mackenbach 2006). And these contextual causes, determinants or facilitators tend to accumulate from, probably, before conception and birth through adult life (Kuh et al. 2003;Kuh and Ben-Shlomo 2004). We will return to this issue of life course and causal context of health in a population shortly.
First, I want to broach two implicit views of population health: the simply-the-sum-of-the-parts and the greaterthan-the-sum-of-the-parts views. The former-hopefully with a dwindling proponents base-sees population health as no more than a summary of health, aggregated across individuals within a population (see for instance, the debate and work on designing summary measures of population: (World Bank 1993;Murray et al. , 2001Murray et al. , 2002Murray and Evans 2003;Murray 1994;Anand andHanson 1997, 1998;Institute of Medicine 1998;World Health Organization 2000;Williams 2000;Mathers et al. 2003Mathers et al. , 2004). Under this view, summary measures of population health (SMPH) represent aggregated, singular indices of the quantity and sometimes distribution of health in a given population. These measures combine data on mortality and morbidity, including disability, obtained from the population in question or extrapolated from ''similar contemporary'' populations. The idea is that both the quantity and quality of life that an individual born into such a population could expect to enjoy can be captured by measures such as healthy life expectancy (HALE) and disability-adjusted life years (DALY). These measures are commonly used in global health and national health policy circles. Critics have pointed that some of these metrics are not necessarily equitable or particularly suitable for the health policies they are purported to support: [Disability-adjusted life years or] DALYs are an inequitable measure of aggregate ill-health and an inequitable criterion for resource allocation. Through age-weighting and discounting, they place a different value on years lived at different ages and at different points in time. They value a year saved from illness more for the able-bodied than the disabled, more for those in middle age-groups than the young or the elderly, and more for individuals who are ill today compared with those who will be ill in the future. We regard such valuations to be inequitable both for the 9 Take the example of the autosomal recessive hereditary/genetic condition known as phenylketonuria (PKU), diagnosable in newborns. It results from a gene mutation on chromosome 12, leading to absent or reduced activity of the enzyme needed to process one of the essential amino acids, phenylalanine (present in many cereals, cocoa products, egg, fish). Theoretically, if a child with PKU were to be born in a context where phenylalanine did not exist in staple foods-Footnote 9 continued instead a related amino acid, tyrosine, which replaces phenylalanine in the metabolic pathway in the human body, were present-then it is difficult for the disorder to be suspected in the absence of mandatory testing. Therefore, this child could easily grow up without the PKU disease label. Thanks to the child's new extraordinary context, she could remain healthy although her bodily functions are easily engaged in a process that tends to reduce her health. Notice that her new context is far from being standard, even relative to her human species. exercises of measuring the quantity of ill-health and for resource allocation. For resource allocation equity requires giving priority to the claims of the disadvantaged, which cannot be achieved by using the restricted information set of the DALY (Anand and Hanson 1998).
The second implicit view of population health, the greater-than-the-sum-of-the-parts account as pursued in this article, would see population health as the indivisible health experience of a collective of individuals, where this collective is taken to be distinguishable from a mere collection or summation of individuals. 10 The context would be seen as so defining and powerful that simple aggregations of health into singular measures would miss the richer information present in the context that shapes current and future health of the collective and of its individual members. At a minimum, population health should be measured in multidimensional terms, rich in information for different purposes and interpretations. Greenland recently underscored this requirement as follows: My intention in raising these issues is not to offer a solution to a specific summarization problem. Rather, it is to remind those facing a choice among measures that candidates need not (and, for policy purposes, should not) be limited to unidimensional summaries. While our ability to think in several dimensions is limited, it can be improved with practice. That practice has proven crucial in attacking problems in physics and engineering, and there is no reason to suppose it is less important in tackling more complex social policy issues. In instances in which many different people must make informed choices based on the same scientific data, but with different values, multidimensional measures are essential if we are to provide each person and each executive body with sufficient information for rational choice (Greenland 2005).
It is clear that how population health is measured is dependent on how it is conceptualized. If population health were seen only as aggregate health of a group, then unidimensional metrics such as HALE and DALYs might suffice. If, however, population health were conceived as a deeply contextual and causally charged notion, then metrics that went beyond the descriptive and dealt with the predictive, explanatory and evaluative would be needed (McDowell et al. 2004). Is this how population health is conceived in the public health literature? Population health as a concept of health has been defined as ''the health outcomes of a group of individuals, including the distribution of such outcomes within the group'' (Kindig and Stoddart 2003). Additionally, as a field, population health is said to address how and why some groups of people are healthy and others are not (McDowell et al. 2004;Evans and Stoddart 2003). The late Geoffrey Rose once described the population [health] strategy as ''… the attempt to control the determinants of incidence, to lower the mean level of risk factors, to shift the whole distribution of exposure in a favourable direction. In its traditional 'public health' form it has involved mass environmental control methods; in its modern form it is attempting (less successfully) to alter some of society's norms of behaviour'' (Rose 1985).
Although the term population health could mean health outcomes, or health determinants in relation to public health outcomes, or both, public health specialists mostly spend their time trying to influence the determinants, or the so-called root causes, of population health. This population health approach is quite old, although there is no definitive history of it; recent historic applications are seen in the works of Jerry Morris and Richard Titmuss (Szreter 2003) and in the seminal Lalonde model (Lalonde 1974; Evans and Stoddart 1990).
At its simplest level, the health determinants or Lalonde model states that health has four classes of determinants: lifestyle, environment, human biology, and healthcare. This rather simple model was well received, with no one seriously challenging the view that how we lived, where we lived, who we were (born), and the care we used all shaped our health. As Evans and Stoddart (1990) noted, the policy response was not entirely clear, given that one possible policy interpretation could have been that health was a personal choice. This is something that could be heard echoing in the corridors of many North American and European ministries, given the rise of consumerism, performance disclosure, market mechanisms and the information age in nearly all public policy areas. If anything, public policy on health missed the point about the health of populations being contextual, a reflection of the complex interplay of lifestyle, environment, human biology and even healthcare. Recently, a global Commission on Social Determinants of Health was launched by the World Health Organization to focus health policies on the social context of health and inequalities (Lee 2005; Marmot 2006; Irwin et al. 2006). I can only hope the renewed interest will see the context of population health as both a means and an end, not just another series of inputs for attaining and subsequently aggregating health across members of a group. The context of population health comprises so much diversity, meaning and information, which must be factored into any health evaluation exercise or intervention, that to see context as merely given circumstances is to render the very concept of the health of a person and of a group impotent.

Footnote 10: This indivisibility and inseparability of individuals and their context must be seen in such a way that the same collective of individuals could not be moved from their current context to a new one without changing the identity, health, interactions, and well-being of the collective.
The life course
A crucial prerequisite for defining individual health and population health in terms of their context is that context must be dynamic and causal. Dynamic implies that context is not stationary. Even habitual lifestyles are rarely stationary; they are subject to the enabling environment and resources that feed such habits. Human biology is subject to numerous factors like micro-organisms, radiation, accidents, and so on. Individuals are born; they develop from childhood through adolescence to adulthood, learning the language and ways of life of their parents, imbibing their tastes, experiences, music, dance and art, and interacting with other people. They fall ill, survive, marry, have their own children, live with the marks of their experiential journey through life, and are continuously molded by their context as they search for and define who and what they become. Social epidemiologists only recently discovered this life course interpretation of the health, well-being and overall context of human beings, something that had been known for many years to psychologists, sociologists, anthropologists, biologists, and demographers (Kuh and Ben-Shlomo 2004). Life course epidemiology ''studies long term effects on later health or disease risk of physical or social exposures during gestation, childhood, adolescence, young adulthood and later adult life. It aims to elucidate biological, behavioural, and psychosocial processes that operate across an individual's life course, or across generations, to influence the development of disease risk'' (Kuh and Ben-Shlomo 2004).
Parents' social class, behaviors, wealth, education, and other childhood factors like cognitive and psychosocial developments have all been shown to determine who stays healthy, falls ill or dies prematurely in adult life (Kuh and Ben-Shlomo 2004;Case et al. 2005). If lifetime circumstances so evidently mold health and well-being and also subsequent social and other life circumstances in such cumulative ways, why must the health of persons and groups be seen as individual or concerted organ functioning given normal environment or circumstances? What is normal? Which environment? The currently observed one? Or the one that has accumulated over the life course and may remain a harbinger of well-being in years to come?
Neither individuals nor collectives can be understood in only cross-sectional, one-time views. All through their lifetimes, individuals become the collective just as the collective becomes them. And collectives age across generations of their members, evolving, defining and being defined through cumulative and adaptive experiences, events, and history. In all these, an individual still retains her individual, distinctive identities that evolve over time. This individualism within a collective should not be mistaken for the ordinary usage of individualism, which seems to suggest a whiff of unsociability, but should be taken as the sort that forms the basis for an extensive concern for others (Appiah 2005). This concern is the type needed throughout life to build a context worthy of individuality, freedom and collective well-being and health.
Healthy individuals, healthy populations
So far, I have argued that neither individual nor population health is easily separable from the other. Even when they are considered separable as approaches to health, rather than as health concepts, Geoffrey Rose would seem to choose the population approach because he was a strong believer in the context and distribution of health and its causes (not that he would sacrifice individuals to achieve his objectives) (Rose 1985, 1992). One might ask whether the link between individual and population health could then be construed to imply that unhealthy individuals could not be found in healthy populations and vice versa. Instances of incongruity between individual and population health may be best understood by considering a possible categorization of the individual-versus-population health relationship. Therefore, borrowing terminology from epidemiologic methodology (Copas 1973; Greenland and Robins 1986), I can classify the individual-versus-population health relationship into four categories:

1. Immune: individual health remains good irrespective of the population health or context
2. Causative: individual health is boosted in favorable population health or context
3. Preventive: individual health is compromised when population health or context is unfavorable
4. Doomed: individual health is compromised irrespective of the population health or context.
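A minimal sketch may help fix ideas. Suppose we idealize each person by two counterfactual health states, one under a favorable population context and one under an unfavorable one, and read the category off that pair. The binary good/poor simplification, the function name, and the reading of categories 2 and 3 as two directions of the same context-dependent pattern are my own illustrative assumptions, not part of the cited methodology (Copas 1973; Greenland and Robins 1986).

def classify(health_if_favorable: str, health_if_unfavorable: str) -> str:
    # Idealize a person by two counterfactual health states, one per context.
    pair = (health_if_favorable, health_if_unfavorable)
    if pair == ("good", "good"):
        return "1. immune: healthy irrespective of context"
    if pair == ("good", "poor"):
        # Categories 2 and 3 describe this context-dependent pattern from opposite
        # directions: health boosted by a favorable context (causative) or
        # compromised by an unfavorable one (preventive).
        return "2/3. causative or preventive: health tracks the context"
    if pair == ("poor", "poor"):
        return "4. doomed: unhealthy irrespective of context"
    return "unclassified"  # ("poor", "good") is not posited in the article

print(classify("good", "poor"))   # the common, context-dependent case
print(classify("poor", "poor"))   # e.g. a disease that progresses regardless of context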
Categories 1 and 4 would be rare under our considerations and in real life. They would include genetic diseases (category 4) which progress irrespective of what is done or experienced in the collective or by medicine. The two middle categories would be far more realistic and common. A category 2 illustration: if, along with the growing physician emigration (Arah et al. 2008), Tolu were to move from her impoverished circumstances in Nigeria to a safer suburb somewhere in England, her health would no longer be what it was or would have been back in her hometown. She and her family might not only enjoy the healthful experiences of their new context; they might also acquire other non-health experiences and tastes which might subsequently redefine their immediate and long-term well-being. She, and in particular her children, would have escaped from a context where their life expectancies might have been the odd forty-something years to a place where they could live well into their seventh decade or longer. This would contrast with a category 3 scenario in which Jane, a Brit who might have lived to be an octogenarian in England, would end up cutting her life short in her thirties by moving to a mosquito-infested Nigeria without proper anti-malarial prophylaxis or by being involved in a rather common road traffic accident there. Similarly, it is very difficult to imagine populations that could be called healthy if the context for health is heavily compromised and individual members of the collective are at constant risk of dangerous exposures and events. It then seems to me that the relationship between individual and population health is a matter of ubi mel ibi apes: where there is honey, there are bees.
To be sure, health is not entirely relative. It would be self-defeating to assume a rigidly relativist view. Such a view would excuse the unfortunate morbidity and mortality suffered by millions of children living in deprivation in Africa; after all, their ''fate'' could be dismissed as their context. However, this would imply denying a partially absolutist notion of health (for instance, that these children should not be malnourished, should enjoy good health, and should not be stricken dead before age five). In a purely relativist view, we could easily miss a widespread compromise of health in a context where health was already poor, because we might erroneously infer that the relative distribution of health remained unchanged. The absolutist core of health implies that whenever health is compromised to the extent that functioning is obstructed, there is ill-health, no matter what the relative picture looks like. It is on this absolutist core of health that a relativist layer of the enabling context of health should be built. The relationship between individual and population health resides mostly in this relativist layer, although it requires the absolutist notion of health to exist in the first instance.
Without the informative contextual characterization of health at the individual or population level, little insight is gained by saying that a person or a community is healthy. A possible criticism here is that this contextual reinterpretation of individual and population health includes almost every well-being-oriented activity under the rubric of health. True, but this fear of all-inclusiveness, which has already been leveled against the normativist school, is not embarrassing. If anything, it is refreshingly bold to attempt to elevate the concept of health to the level of human well-being. If health is so integral to the notion of well-being and to the ability to conduct the life one may have reason to value (including achieving one's vital goals) (Nussbaum and Sen 1993; Nordenfelt 1995, 2007), then it is not surprising that the boundaries of health can easily encroach on the boundaries of well-being and of life as a whole. After all, health represents both functioning (the achieved) and capability (the achievable): a means to life's other vital goals or capabilities as well as an end in itself (Sen 1985; Nussbaum and Sen 1993).
I suspect that when some philosophers reject such ambitious notions of health, they are merely concerned with the overuse or abuse of possible responses or interventions to deal with not being in ''full health'': a fear of medicalization. However, I think such criticisms miss the subtle but important distinctions between the boundaries of health (and thus, health need) and the boundaries of healthcare (and thus, healthcare need; see footnote 11). Health need depicts the shortfall in ideal health (in some sense, a gradual progression from the completely healthy end of the health spectrum to the disease end), whereby the shortfall and context combine to hinder the ability to flourish to a degree important to the individual. Healthcare need, on the other hand, alludes to a shortfall in health which inhibits a person's ability to flourish and which is only amenable to healthcare or organized medicine. Not every health need would become a healthcare need. In this sense, health need subsumes healthcare need, not the other way around. Suffice it to say that while it is necessary to avoid medicalization, there is little reason for a concept of health to be bounded mainly by this medicalization avoidance or by any narrowly defined interpretation of what medicine is. Medicine is largely a socially constructed response and therefore secondary, whereas health is more fundamental and therefore prior. Nordenfelt has discussed some of the notions of medicine as health enhancement in both narrow and broad senses (Nordenfelt 1998, 2001). For now, I will submit that the prevailing dichotomization of medicine, and its associated fields including (bio)ethics, into clinical medicine and public health aspects is erroneous, inefficient, and outdated, if not unethical. This criticism can also be leveled against the duality I have been discussing, namely, individual versus population health. Such binary views, which seem to pervade almost all of public policy on health, fail to use the rich information and interpretations that stem from a more comprehensive approach to health over the life course (i) of the individual within the collective and (ii) of the collective of interacting individuals.

Footnote 11: From an economic societal perspective, healthcare need has been defined ''as the minimum amount of resources required to exhaust a person's capacity to benefit'' (Culyer 1995). Culyer proposed the following conditions for recognizing healthcare need: (i) that its value-content be up-front and easily interpretable; (ii) that it be directly derived from the objective(s) of the health care system; (iii) that it be capable of empirical application in issues of horizontal and vertical distribution; (iv) that it should be service and person specific; (v) that it should enable a straightforward link to be made to resources; (vi) that it should not, if acted upon as a distributional principle, produce manifestly inequitable results. Culyer's definition has all the good elements of the capacity-to-benefit notion, an observation that should please those who object to ''medicalization'' on safety and effectiveness grounds. It also quantifies the resources that are needed, a feature that ought to please those who fear ''medicalization'' on inefficiency grounds. Tolu and Jane, say, might have equal health needs and yet different healthcare needs, or different health needs but the same healthcare needs. If, at equal health needs, Tolu required more resource-intensive healthcare, Culyer would say that Tolu had a higher healthcare need than Jane.
Many questions remain unanswered; I invite the reader to consider them. If the concepts of individual and population health are so intimately interwoven, why do bioethicists see the need to separate public health ethics from mainstream bioethics? Is it to give ethical consideration to, say, distributional issues that would otherwise be difficult to address at the individual or clinical level? Or are we young public health ethicists just busy building a parallel dichotomy, similar to that seen between clinical medicine and public health, by way of argumentum ad verecundiam? Further, given the mounting evidence that health is compromised early in life and that the insults are borne forward into adult life and beyond, ultimately leading to expensive healthcare, why do health policies still concentrate overwhelmingly on healthcare in adulthood? While we are at it, must healthcare represent the standard policy response to health problems, in effect being what Norman Daniels once called the ambulance at the bottom of the cliff after the free fall through life?
Conclusions
This article has argued that the relationship between individual and population health is one that is entrenched in the contextual definition of health and its life course causes. I have made an attempt to derive this relationship based on the concept of health (if we were to continue pursuing such a concept anyway) by including a population perspective on health. I emphasized the role of the ''context'' component of any notion of health, that is, the role of the ''standard circumstances'' or the so-called ''statistically normal environment.'' I then argued that this context is both individual and collective in nature, in largely inseparable ways, and that context must be causally seen across the life of an individual and the life of the collective. The meanings of both individual and population health lie in this revitalized life course and causally defined context, and have implications for how we measure and analyze health at all levels. Armed with the reasoned scrutiny and the unresolved complexity of the concepts, I invite philosophers and other scientists to revisit the definitions of individual health and population health if the notions are to carry any more weight in ongoing discourses in public health, healthcare, and bioethics. I can only hope that this article will stimulate further debates on individual and population health concepts and on their associated policy-relevant fields. One conclusion of this article, for now, is that health, be it individual or population health, can be very context-dependent. After all, prior to the accident, Tolu may have been absolutely healthy from her personal experiential point of view, but she was still contextually unhealthy, relatively speaking. | 2014-10-01T00:00:00.000Z | 2008-12-24T00:00:00.000 | {
"year": 2008,
"sha1": "0443ae99c000a94961742caa5e3cf7fad703dc83",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11019-008-9173-8.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "0443ae99c000a94961742caa5e3cf7fad703dc83",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
4739220 | pes2o/s2orc | v3-fos-license | Haematological malignancies in Qatar: from childhood to adulthood
Not available.
Foreword
Haematological malignancies are among the most common cancers affecting children, adolescents and young adults. The commonest types are lymphoma, leukaemia and myeloma, but they also include myelodysplastic syndromes and myeloproliferative diseases. The majority of patients diagnosed with haematological malignancies can now be cured. These advances have resulted predominantly from the introduction of a wide range of cytotoxic chemotherapeutic regimens, radiotherapy, or a combination of both. Although overall survival has increased, adverse effects of treatment, in both the short and the long term, may affect the overall quality of life of survivors.
This supplement of Acta Biomedica is subdivided into three sections: a review paper on "Testicular damage in children and adolescents treated for malignancy", prepared by clinicians from Italy, Qatar and Greece; an original article; and four case reports in adulthood.
The impact of treatment on future fertility is of significant concern, both to parents and patients. Fertility preservation options for the pediatric cancer patient differ from those in adults due to differential toxicities of treatment regimens and relative immaturity of pediatric germ cells. Thus, clinicians providing care to childhood cancer survivors need to incorporate these discussions into routine clinical care.
Given that up to half of childhood cancer survivors will suffer a therapy-related endocrinopathy, pediatric endocrinologists are frequently involved in their care. Close collaboration between oncologists and endocrinologists is therefore needed.
The Pediatric Endocrinology and the Hematology and Oncology Sections of Hamad Medical Corporation (HMC) in Doha are a good example of this close collaboration in the treatment of therapy-related endocrinopathies in children, adolescents and young adults with oncologic conditions and hemoglobinopathies. This joint collaborative group has published several papers in international journals over the last decade.
Here, they present an original article on "The Impact of Iron Overload in Patients with Acute Leukemia and Myelodysplastic Syndrome on Hepatic and Endocrine functions". Iron overload is common in patients with hematologic malignancies who require repeated blood transfusions, and it may have a deleterious effect on their outcomes. Oral iron chelation therapy has therefore been recommended to reduce these morbidities.
The heterogeneous nature of haemopoietic and lymphoid cells, their individual kinetic characteristics and the disseminated nature of haemopoietic and lymphoid tissue explain the complexity of haematological malignancy. There are three major groups: leukaemia, lymphoma, and plasma cell neoplasms. Haematologic neoplasms are markedly heterogeneous, with more than 35 subtypes of acute leukaemia, 35 subtypes of non-Hodgkin lymphoma and six subtypes of Hodgkin lymphoma currently recognised. The classification of haematological malignancy has changed markedly over recent decades and will continue to do so as innovative diagnostic methods and techniques are developed. Moreover, the diagnostic complexity of haematological malignancies is mirrored by the wide diversity of treatment pathways.
Every practicing hematologist/oncologist or primary care physician encounters, during their professional life, patients with uncommon, rare or complex hematologic malignancies. Four case reports, drawn from the large series of patients followed at the Department of Hematology, NCCCR, Hamad Medical Corporation (HMC), Doha, Qatar, are presented and discussed.
We hope that oncologists, hematologists, pediatricians and physicians will find this supplement of Acta Biomedica a valuable resource for their current medical practice.
Vincenzo de Sanctis, MD | 2018-04-26T22:59:23.299Z | 2018-04-01T00:00:00.000 | {
"year": 2018,
"sha1": "28cb1732950d02c74f3bdcba346714aafe35824d",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "9cb1411c8fa55ecad6d37349702170558c71a44a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |