213113744
pes2o/s2orc
v3-fos-license
ALTERNATIVE APPROACHES FOR LONG-TERM DEFENCE PLANNING The problems requiring managerial decisions are common for all countries in the Euro-Atlantic community regardless of the status of each individual country, whether the state involved is a member of the North Atlantic Treaty Organization (NATO) or a partner. Some of these problems have existed for decades but have become particularly acute over the last twenty years. They include the reduction of troops and spending as a post-Cold War peace dividend, as well as the need for an increase in the number of mobile armed forces capable of operating far beyond the borders of their countries and whose actions have to be sustained for very long periods of time. INTRODUCTION From a historical standpoint, defense management has emerged as a topic of interest to the defense sector fairly recently. Western countries have introduced the defense management concept in the process of solving problems such as allocating financial and human resources, addressing strategic and operational tasks as part of an integrated approach, and applying business-derived management tools to defense. Such an approach requires highly professional and dedicated efforts at all levels and in all divisions of the state military organization. One of the proven ways of achieving this goal is the use of the management functions of planning, organization, leadership and control in all areas of the defense organization, which can contribute to maximizing the effectiveness of the armed forces' operational activities. The problems requiring managerial decisions are the same for all countries in the Euro-Atlantic community, regardless of the status of each country, whether it is a member of the North Atlantic Treaty Organization (NATO) or its partner. Some of these problems have existed for decades, but have become increasingly acute over the last twenty years. These problems include the reduction of troops and spending as a post-Cold War peace dividend as well as the need for an increase in the number of mobile armed forces capable of operating far beyond the borders of their countries and whose actions have to be supported for very long periods of time. For managerial approaches to be worth considering, they must address these and other similar problems while also being placed in the general context of the public interest in the state of defense and, in particular, of the demands placed on the actions, and the results of those actions, of the defense sector as a whole, especially the defense forces and resources. The fulfillment of this condition is of particular importance given the fact that, in the absence of incentives or pressure from above, any publicly funded organization, including the defense organization, is unlikely to undertake initiatives of its own to increase the effectiveness of its activities. Thus every theoretical approach towards defense management must be closely linked to the sphere of democratic control over the defense sector and the armed forces (Antonov, Tsonev, 2016; Stoev, Zaharieva, Mutkov, 2019). There is no single commonly accepted definition of the term "defense management". The term simply conveys the idea that defense organizations need to put defense policies into practice and that they must create reliable and effective planning mechanisms, security systems and infrastructure. 
The modernization of the defense sector is a critical issue that has confronted the governments of the Euro-Atlantic community for at least the last fifteen years. Some states are focused on transforming their armed forces so that they can better respond to 21st century security threats. Other states undertake a more ambitious restructuring of the entire defense sphere in order to create new defense institutions. This is especially true for post-communist states that are on the path of democratic transformation as well as for states that are currently in the last stage of these transformations. All of these states expect strategic results from the reform of their defense and security sectors, correctly considering the success of these transformations as a factor contributing to their integration into the Euro-Atlantic community as well as a factor in strengthening their own security and the prosperity of their people. Achieving these strategic goals requires a more rational allocation of scarce public resources, a more efficient use of such resources and a more visible and controlled outcome of government programs, including the defense oriented ones (Antonov, Hristozov, 2017; Terziev, Bankov, Georgiev, 2018). In many countries, public administration is replacing its rather rigid and bureaucratic way of acting on behalf of society with more flexible and distinct public sector governance, and the number of these countries is steadily increasing. But in this case a question arises: how can the government "build defense" more effectively? Part of the answer lies in how far the defense sector implements the optimal management practices accepted in the business sector, where the achievement of the intended results is of paramount importance for the survival of each organization in a competitive environment. The NATO/EAPC cooperation initiative called "Partnership Action Plan on Defense Institution Building" (PAP-DIB) provides several examples of how domestic incentives to reform a country's defense sector through its more effective institutionalization are matched by international interest in supporting relevant programs. Part of this initiative is directly related to the concept of defense management. Management planning is different from military operation planning, but it has a direct impact on the development of the armed forces structure or on the purchase of basic military equipment. What defense management can provide is to connect people in defense organizations who are prepared to accomplish their assigned tasks with equipment, weaponry and comprehensive support so that defense goals and objectives are achieved more effectively. DEFENSE PLANNING AND DEFENSE MANAGEMENT Parliaments and defense organizations of many partner countries as well as some of the new NATO members still face certain problems related to the concept of defense policy, the link between policy and planning, the concept of defense potential, the relationship between plans and budgets, the link between structural changes and technical modernization and other important and costly activities. This is not surprising given that, unlike in NATO, the decision-making and planning processes in the former Warsaw Pact countries were fully centralized. The countries of the former Warsaw Pact, with the exception of Russia, had little to no knowledge of and experience with defense policy and planning. 
Moreover, in the last decade of the 20th century the defense organizations of the former Warsaw Pact and post-Soviet republics constituted only a small fraction of immature and generally weak democratic institutions. As a result, very few of the new members were able to make a significant contribution to the Alliance's potential at the time of their NATO accession. This study examines the importance of defense policy and of the transparency of long-term and structural reform plans for democratic governance in the defense sector. It also discusses the characteristics of short-, medium- and long-term planning as well as the relationships between the respective processes, which indicate why defense planning is one of the key processes in defense management (Antonov, 2017а; Terziev, Petkova-Georgieva, 2019b). A framework model for ensuring coherence between military policy objectives and structural change is presented, and the importance and role of risk planning is explained. A brief description of the context of the defense planning process at the national level is given, and the importance of transparent decision-making processes for democratic accountability and for the effectiveness of the actions of the defense department is highlighted. I will try to help the civilian and military experts of each country involved in the creation of democratic governance in the field of defense to better understand the link between, on the one hand, security challenges and the political goals of defense planning and, on the other hand, the mechanisms for defense planning and resource management. Regardless of how "perfect" the accounting system implemented in the Ministry of Defense is and how transparent its financial procedures are, they must ensure the development of an organizational structure appropriate to the situation, political goals and strategy of the country (Stoev, Zaharieva, Borodzhieva, 2019а; Terziev, Bankov, Georgiev, 2018а). ALTERNATIVE APPROACHES FOR LONG-TERM DEFENSE PLANNING The two most credible defense planning sources provide similar descriptions of the defense planning approaches. In the 2004 edition, Bartlett, Holman and Soames offered nine options (Terziev, Petkova-Georgieva, 2019c). In a top-down approach, interests, goals and strategies determine the decisions on the structure of the armed forces. In the bottom-up version the focus is on enhancing existing defense capabilities and improving their respective weapon systems: first, the ability to meet the requirements of ongoing operations and operational plans is improved. In scenario-based approaches, planners model several typical situations, with each situation providing specific conditions for the use of the armed forces. These scenarios are then used to identify tasks that are aimed at achieving the goal and at providing suitable capabilities. In two closely related and complementary approaches, based respectively on threat and vulnerability assessment, plan developers look for ways to address the problems associated with identified threats and the potential weaknesses of the potential adversary. The requirements for military capabilities are then determined in comparison to the capabilities of the potential adversary (Terziev, Arabska, Dzhumalieva, 2016а; Terziev, Dzhumalieva, 2015; Nichev, 2009). The "key responsibilities and tasks" approach is functional. 
In this approach the capability requirements for the Armed Forces and Allied Forces are determined independently of the scenarios, threats or identified weaknesses of the potential adversary. Instead they are defined as key responsibilities, for example, to ensure air supremacy at all costs. Then, depending on these key responsibilities, requirements are formulated for the required set of capabilities, as well as separate groups of requirements for the event of peace, crisis or conflict. The capability-based approach also provides for functional analysis. The functions and tasks that must be performed during the envisaged operations are transformed into requirements for capabilities. On this basis, planners develop force grouping options to provide these capabilities as efficiently and cost-effectively as possible. Through reinsurance (hedging), planners try to minimize the risks associated with the preparation of troops for every current objective as well as objectives that may arise after thirty or more years. At the same time, the requirements approved will be sufficient to provide the necessary balance and flexibility to deal with a variety of challenges and threats, although the cost of these measures will be extremely high (Terziev, Dzhumalieva, 2016b-е). Using the next approach, planners try to achieve strategic and operational superiority based on technology. This approach is based on the belief that knowledge, creativity and innovation will provide the best systems and therefore a significant military advantage. Finally, in the fiscal approach to defense planning, decisions on the structure of the armed forces are determined by budget constraints. Another authoritative source is the Handbook on Long Term Defence Planning issued by the NATO Research and Technology Organization. It offers a slightly different set of possible approaches to long-term planning, presented in the form of a three-component structure depending on the main purpose of the defense analysis. When the planning process is paramount, analysts distinguish top-down planning and limited resource planning (Stoykov, 2011-а; Stoykov, 2002; Stoykov, 2003; Stoykov, 2005). Depending on the level of optimism about the capabilities of new technologies or, on the other hand, the desire to adhere to historically proven facts, experienced planners apply four possible approaches: technological optimism; risk avoidance; planning through gradual construction; and accounting for historical experience. The last three approaches are based on proven concepts, the existing organizational structure and the capabilities of the Armed Forces, and they adhere to the method of gradually improving efficiency and profitability. Under certain conditions they may be similar to the bottom-up variants described above. The following three approaches differ in that the focus is on the features or specific scenarios that determine the level of effectiveness of the future structure of the armed forces. These approaches include capability-based planning, scenario-based planning and threat-based planning. Each of these approaches has its advantages and disadvantages, and they are rarely implemented in their 'pure' form. In practice, a defense planning approach may have the characteristics of two or more different options. According to the Handbook, mature defense planning systems are today dominated by two approaches: resource-oriented planning (a softer form of resource-limited planning) and scenario-based planning. 
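Taken together, the capability-based and fiscal approaches amount to selecting force options that cover the required capabilities as cost-effectively as possible within a budget ceiling. The toy Python sketch below illustrates only that framing; the package names, costs, capability labels and the greedy selection rule are all hypothetical and are not drawn from Bartlett, Holman and Soames or from the NATO Handbook.

```python
# Illustrative only: toy capability-based planning under a budget constraint.
# Package names, costs and capability labels are hypothetical.

REQUIRED = {"air defence", "maritime patrol", "cyber defence", "strategic lift"}
BUDGET = 900  # notional monetary units

# Candidate force packages: (name, cost, capabilities provided)
PACKAGES = [
    ("fighter squadron upgrade", 450, {"air defence"}),
    ("offshore patrol vessels", 300, {"maritime patrol"}),
    ("joint cyber command", 150, {"cyber defence"}),
    ("transport aircraft lease", 200, {"strategic lift"}),
    ("multi-role frigate", 600, {"maritime patrol", "air defence"}),
]

def plan(required, packages, budget):
    """Greedy sketch: repeatedly buy the package covering the most
    still-missing capabilities per unit of cost, while funds remain."""
    missing, spent, chosen = set(required), 0, []
    while missing:
        affordable = [p for p in packages
                      if p not in chosen and spent + p[1] <= budget and p[2] & missing]
        if not affordable:
            break  # budget exhausted before all capabilities are covered
        best = max(affordable, key=lambda p: len(p[2] & missing) / p[1])
        chosen.append(best)
        spent += best[1]
        missing -= best[2]
    return chosen, spent, missing

if __name__ == "__main__":
    chosen, spent, gap = plan(REQUIRED, PACKAGES, BUDGET)
    print("selected:", [name for name, _, _ in chosen])
    print("spent:", spent, "uncovered capabilities:", gap or "none")
```

Run as a script, the sketch reports which notional packages fit under the ceiling and which required capabilities remain uncovered, mirroring the trade-off between ambition and affordability discussed above.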
Since the publication of the Handbook in 2003, major defense planning efforts have concentrated on strengthening the focus on capabilities and introducing the latest operational concepts, in particular an "impact-oriented" approach to operations. The goal is to increase the flexibility of the mechanisms for strategy development and planning and the speed of response to changes in the security situation (Terziev, Bogdanova, Kanev, Georgiev, 2019b-d; Petrov, Georgiev, 2019e; Terziev, Georgiev, 2017b). CONCLUSION The comparative analysis of the EU defense planning methodology against US and NATO defense planning shows that this methodology is largely similar. This applies in particular to the structure and logic of planning, its geographical coverage and the significant publicity component of public documents. Like the US and NATO, the EU makes little use of strategic forecasting in its security and defense planning and places the principle of 'strategic uncertainty' at the forefront. Dynamic forecasting elements are widely used in EU planning, especially in the short term. The EU has gone even further in this field, applying a technique that might loosely be called "dynamic planning". The EU methodology for planning in the field of defense has some major characteristics related to the fact that the EU is mainly a civilian organization in which military matters occupy only a small, albeit rather important, place. Therefore, priority in planning is given to supporting civilian efforts to ensure security, with the use of armed force coming last but not least. This predetermines the fact that the main focus of EU policy is on instruments such as crisis management, political stability, peacekeeping operations and the participation of other countries in various forms of partnership and cooperation (Terziev, Nichev, 2017c-i; Terziev, Nichev, Bogdanov, 2017j-k; Terziev, Madanski, Georgiev, 2017l-m). Terziev, V., Bankov, S., Georgiev, M. (2018а). The Stability and Growth Pact: pursuing sound public finances and coordinating fiscal policies in the EU member states. // Journal of Innovations and Sustainability, Plovdiv, Bulgaria, 4, 2018, 3, pp. 53-68, ISSN 2367-8127 (CD-ROM), ISSN 2367-8151 (on-line). Terziev, V., Petkova-Georgieva, S. (2019b). The performance measurement system key indicators and the determinants impact on the level of decentralization using as an example a subdivisional unit from the Bulgarian social health and care experience. // Proceedings of SOCIOINT
2020-03-19T19:42:52.074Z
2020-01-20T00:00:00.000
{ "year": 2020, "sha1": "62a86c00cf46e2d1445582973638803c6ce63a90", "oa_license": "CCBY", "oa_url": "http://ijasos.ocerintjournals.org/tr/download/article-file/1256977", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "a612df38ba3ec263e07c1997d54a88cfb296291e", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Business" ] }
16742844
pes2o/s2orc
v3-fos-license
Functioning and health in patients with cancer on home-parenteral nutrition: a qualitative study Background Malnutrition is a common problem in patients with cancer. One possible strategy to prevent malnutrition and further deterioration is to administer home-parenteral nutrition (HPN). While the effect on survival is still not clear, HPN presumably improves functioning and quality of life. Thus, patients' experiences concerning functioning and quality of life need to be considered when deciding on the provision of HPN. Currently used quality of life measures hardly reflect patients' perspectives and experiences. The objective of our study was to investigate the perspectives of patients with cancer on their experience of functioning and health in relation to HPN in order to get an item pool to develop a comprehensive measure to assess the impact of HPN in this population. Methods We conducted a series of qualitative semi-structured interviews. The interviews were analysed to identify categories of the International Classification of Functioning, Disability and Health (ICF) addressed by patients' statements. Patients were consecutively included in the study until an additional patient did not yield any new information. Results We extracted 94 different ICF categories from 16 interviews representing patient-relevant aspects of functioning and health (32 categories from the ICF component 'Body Functions', 10 from 'Body Structures', 32 from 'Activities & Participation', 18 from 'Environmental Factors'). About 8% of the concepts derived from the interviews could not be linked to specific ICF categories because they were either too general, disease-specific or pertained to 'Personal Factors'. Patients referred to 22 different aspects of functioning improving due to HPN, mainly activities of daily living, mobility, sleep and emotional functions. Conclusions The ICF proved to be a satisfactory framework to standardize the response of patients with cancer on HPN. For most aspects reported by the patients, a matching concept and ICF category could be found. The development of categories of the component 'Personal Factors' should be promoted to close the existing gap when analyzing interviews using the ICF. The identification and standardization of concepts derived from individual interviews was the first step towards creating new measures based on patients' preferences and experiences which both catch the most relevant aspects of functioning and are sensitive enough to monitor change associated with an intervention such as HPN in a vulnerable population with cancer. Background Weight loss is a common and serious problem in patients with cancer [1][2][3]. In patients with cancer in the abdominal cavity, weight loss is often caused by symptoms preventing sufficient food intake or digestion, e.g. bowel obstruction, fistulas or short bowel syndrome [4]. More prominently, weight loss in advanced cancer is frequently related to the anorexia-cachexia syndrome. This includes various metabolic changes leading to wasting of adipose tissue and skeletal muscle mass related to tumour progression [5,6]. In addition, side effects of antineoplastic therapy result in diminished food intake and progressive deterioration of patients' condition [7]. Malnutrition leads to physical weakness, psychological imbalances and fatigue. It not only compromises patients' functioning and hence quality of life but also has negative effects on prognosis [8]. 
One possible strategy to prevent malnutrition and further deterioration of functioning is to maintain sufficient caloric intake by parenteral nutrition. This can even be administered at home. Although there are some studies showing the benefits of home-parenteral nutrition (HPN) in cancer-associated malnutrition, its use is discussed controversially from both an economic and an ethical position [4,[9][10][11]. The effects of HPN on survival are still not well established [4]. Health-related quality of life is another relevant outcome of HPN for patients with advanced cancer [4]. Studies on quality of life, however, are inconclusive [11][12][13]. Although HPN potentially improves patients' functional status, performance, and participation, established quality of life measures do not capture the salient aspects relevant in this population [14,15]. This is why an instrument more specific to the effects of HPN therapy in patients with cancer is required [16]. Moreover, it is not known which issues are most relevant to those patients, and which of these issues are prone to change by the administration of HPN. Concepts used so far in the assessment of quality of life in patients on HPN lack a comprehensive theoretical framework that justifies the choice of specifically addressed items. The International Classification of Functioning, Disability and Health (ICF) potentially is a comprehensive and commonly accepted framework that covers the experience of human functioning as a whole [17]. The ICF is part of the WHO family of international classifications. It is both a model and a classification. The ICF model consists of two parts: part one, referred to as 'Functioning and Disability', covers the components 'Body Functions', 'Body Structures' and 'Activities and Participation'; part two, referred to as 'Contextual Factors', covers the components 'Environmental Factors' and 'Personal Factors' (see Figure 1). Each component consists of several 'chapters'; the components 'Body Functions' and 'Activities and Participation' are additionally grouped into 'blocks'. The ICF model describes the individual's functioning as a complex interaction between a health condition and contextual factors. The ICF classification contains more than 1400 hierarchically organized categories which describe the components of the ICF model in detail on up to four levels (see also Figure 1). The intention of the ICF is to record and organize a wide range of information about health and health-related states for individuals and populations. For the purpose of defining the contents of a comprehensive assessment, the ICF provides a universal language intended to be equally used and understood by health professionals and patients. Thus, it can be used to organize and standardize the issues most relevant for patients with cancer on HPN while respecting patients' perspectives and experiences. The objective of our study was to investigate the perspectives of patients with cancer on their experience of functioning and health in relation to HPN in order to get an item pool to develop a comprehensive measure to assess the impact of HPN in this population. Specific aims were (1) to identify relevant aspects of functioning and health expressed by ICF categories in those patients, (2) to explore their experiences of improvements in functioning and health due to HPN and (3) to explore and to compare the experiences of patients shortly after the beginning of HPN in contrast to those with longer established HPN. 
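The hierarchical code structure just described (a component letter plus up to four levels of digits) is what makes it possible to link free-text interview concepts to standardized categories and, as the Methods explain later, to truncate that linking at the second classification level. A minimal Python sketch of that idea is given below; the concept texts and the small category dictionary are illustrative placeholders rather than study data, although the cited codes (b134, b152, d450, e110) are genuine second-level ICF categories.

```python
# Illustrative sketch of linking interview concepts to ICF categories and
# collapsing them to the second classification level. The example concepts
# and the toy dictionary are placeholders, not data from the study.

# Second-level ICF categories: a one-letter component prefix (b/s/d/e)
# followed by three digits.
ICF_SECOND_LEVEL = {
    "b134": "Sleep functions",
    "b152": "Emotional functions",
    "d450": "Walking",
    "e110": "Products or substances for personal consumption",
}

def truncate_to_second_level(code: str) -> str:
    """Collapse a third- or fourth-level code (e.g. 'b1343') to the
    second level ('b134'): component letter plus the first three digits."""
    return code[0] + code[1:4]

# Hypothetical linking result: each extracted concept mapped to one or
# more ICF codes by consensus between two raters.
linked_concepts = {
    "sleeps through the night again": ["b1343"],
    "can walk to the bakery": ["d450"],
    "worries about the infusion pump": ["b152", "e110"],
}

if __name__ == "__main__":
    for concept, codes in linked_concepts.items():
        second_level = sorted({truncate_to_second_level(c) for c in codes})
        labels = [f"{c} {ICF_SECOND_LEVEL.get(c, '(not in toy dictionary)')}"
                  for c in second_level]
        print(f"{concept!r} -> {labels}")
```

The point of the sketch is only the mechanics: every deeper code collapses onto its parent second-level category, which is what allows counts of additional second-level categories per interview to be compared later.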
Study design We conducted a multi-stage series of qualitative, semi-structured, face-to-face interviews using a descriptive approach [18]. The interviews were audio-recorded and transcribed verbatim. Two different stages were chosen to address the presumably different experiences of patients in different situations: in the first stage, we included patients shortly after the beginning of HPN, who are confronted with the challenge of a new therapy, to cover their specific experiences with and expectations of HPN. In the second stage we included patients with established HPN, who are familiar with this therapy and faced with the effects of longer HPN, to validate the first-stage findings and to specifically explore the consequences and experiences in the situation of prolonged HPN. Interview guide The interview guideline was adopted from earlier focus group and individual interview studies with the focus on exploring relevant aspects of functioning and health in different populations [19,20] (see additional file 1). It was designed to address the components of the International Classification of Functioning, Disability and Health (ICF). The interview questions tackled each of the three functioning and disability components, 'Body Functions', 'Body Structures', 'Activities and Participation', and the contextual factors 'Environmental Factors' and 'Personal Factors'. Additionally collected data We collected sociodemographic and disease-specific data (age, sex, living situation, site of primary tumor and duration of HPN). Additionally, to describe an overall view of functioning, the patients were asked to appraise their personal limitations in overall functioning using a horizontal visual analogue scale, ranging from zero, for complete limitation in all aspects of functioning, to ten, for no limitation in functioning. Participants Patients with malignant tumors undergoing HPN were recruited from a customer database of a cooperating home care provider. Potential participants were consecutively contacted and asked for their willingness to contribute to a study by their nutrition nurse. In case of preliminary consent, the patients were provided with detailed information about the study. Informed written consent had to be signed prior to the beginning of the interview. Inclusion criteria for both stages were an age over 18 years and adequate command of the German language. An additional inclusion criterion for stage 1 was that HPN had been administered for at least seven and up to 20 days. An additional inclusion criterion for stage 2 was that HPN had been administered for at least 6 weeks or was currently suspended due to stable general condition. A positive vote of the ethics committee of the Medical Faculty of Ludwig-Maximilians-University Munich was obtained prior to the start of the study. Data analysis Qualitative Data Analysis The Meaning Condensation Procedure [21] was used for the analysis of data content. In the first step, the verbatim transcripts of the interviews were read through to get an overview of the collected data. In the second step, the text was divided into units of meaning and the theme that dominated a meaning unit was determined. A meaning unit was defined as a specific unit of text, either a few words or a few sentences, with a common theme. Therefore, the meaning unit division did not follow linguistic grammatical rules. Rather, the text was divided where the researcher discerned a shift in meaning. In the third step, the concepts contained in the meaning units were identified. 
A meaning unit could contain more than one concept. For quality assurance reasons, the qualitative data analysis was conducted independently by two health professionals trained in the methodology (MM, SL). The results were compared and discussed prior to further analysis. Linking to the ICF The identified concepts were linked to the categories of the ICF by two health professionals (MM, SL) based on established linking rules which enable linking concepts to ICF categories in a systematic and standardized way [22]. According to these linking rules, health professionals trained in the ICF are advised to attribute each concept to the ICF category representing this concept most precisely. One concept can be linked to one or more ICF categories, depending on the number of themes contained in the concept. Consensus between the two health professionals was required to decide which ICF category should be linked to each identified concept. In case of a disagreement, a third person trained in the linking rules was consulted. In a discussion led by the third person, the two health professionals that linked the concepts stated their pros and cons for the linking of the concept under question to a specific ICF category. Based on these statements, the third person made an informed decision. For feasibility reasons, the linking procedure was restricted to the second level of the ICF. See Table 1 for a scheme of qualitative data analysis and linking. Sample size The sample size was determined by saturation. Saturation refers to the point at which an investigator has obtained sufficient information from the field [23]. In this study, we defined saturation as the point during data collection and analysis when an interview revealed less than 5% additional second-level ICF categories. This strategy aims to assure maximum sensitivity in order to gather a maximum variety of experiences and expectations from the participants. Results Patients in stage 1 specified expected improvements in functioning and health which corresponded to 17 different ICF categories. Patients in stage 2 specified experienced improvements in 11 different ICF categories (see Tables 2, 3, 4, 5). There were 39 concepts (8% of all extracted concepts) which could not be linked to specific ICF categories. Most of them (28 concepts, 6%) could not be linked to the ICF because they were too general to be linked to specific ICF categories (aspects related to mental or general health, or quality of life) or were disease-specific and thus not covered by the ICF. A smaller proportion (11 concepts, 2%) pertained to personal factors. Specifically, those concepts were "impatience or patience", "remaining/loss of sense of humor", "faith in god", "coping with illness", "personal attitude towards disease" and "struggling with anticipated death". Discussion To our knowledge, this is the first study to investigate patients' perspectives on functioning and health in patients undergoing home-parenteral nutrition with the help of a comprehensive classification, the International Classification of Functioning, Disability and Health. Patients reported various aspects of functioning as relevant. Reported issues differed between patients with short-term HPN and long-term HPN. A part of those aspects of functioning was expected and experienced to improve during HPN. Functioning is increasingly perceived as an important outcome when examining patients undergoing HPN. 
To give an example, the Karnofsky Performance Status Scale [24] is one of the most frequently used outcome measures [4], assessing different performance levels. Nevertheless, it does not discriminate among specific aspects of functioning. In our study, patients were able to give a very conclusive and comprehensive picture of their specific impairments and limitations when confronted with the framework of the ICF. Relevant concepts could easily be extracted from the interviews. Perceived limitations in Functioning and Health Categories from all chapters of the ICF component 'Body Functions' were represented. Patients reported impairments in mental and sensory functions referring to general symptoms of malignant disease such as pain, disturbed sleep, changes in temperament and emotional functions or diminished attention [25][26][27]. Other impairments associated with antineoplastic therapy, e.g. impairment of sensory functions or problems with functions of the skin and hair [28][29][30], were mentioned. Patients reported consequences of malnutrition such as decreased muscle power and muscle endurance, and impaired exercise tolerance. Problems with fluid and caloric intake were also reported, resulting in disturbed metabolic, endocrine and urinary functions. This is in line with literature describing functional consequences of malignancy and subsequent therapy [31,32]. Persoon et al. [14] reported similar symptoms in a population of patients with long-term HPN including patients with non-malignant disease. Limitations in functions related to the cardiovascular and respiratory system are also well known as general symptoms of malignant disease [33,34]. Of the ICF component 'Body Structures', most of the specified categories corresponded to the sites of malignancy. Also, patients at stage 2 of the interviews reported impaired structures of hair and nails, corresponding to side effects of radiation or chemotherapy [28,29]. Since the sites of malignancy differ from patient to patient, no univocal picture of the typically involved body structures could be drawn. As for the ICF component 'Activities and Participation', categories from all chapters were represented. Patients reported limitations in mobility, self-care and domestic life, aspects of transfer and moving around, and aspects of family life and social relationships. This is in line with the findings of Helbostad and colleagues, who identified mobility and self-care as most relevant for patients with advanced cancer [35]. Carrying out household tasks and mobility are other frequently limited activities [13]. Family and social life is burdened by malignancy [36]. Although studies show that awareness of diagnosis and its consequences is not associated with time since diagnosis [37], our findings indicate that patients at stage 1 were more concerned with the immediate impacts of disease whereas patients at stage 2 were also aware of the consequences on work and employment. Another notable finding within the 'Activities and Participation' component is that patients in stage 1 did not consider eating and drinking as relevant, whereas patients in stage 2 did. Of the ICF component 'Environmental Factors', products and technology, as well as personal relationships and attitudes, were reported to have an impact on functioning and health. The ICF category 'Products and technology for personal consumption' covers food and drugs as well as their adverse effects. 
The influence of social support, whether from family, colleagues or friends, is a main factor in the perception of malignant disease and can either worsen or ameliorate patients' situation [38]. Equally, social security and the health care system influence patients' functioning. Expected and experienced improvements in functioning and health We could show differences between stage 1 and stage 2 in terms of experienced impairment and limitation. Patients at stage 2 but not at stage 1 reported limitations in specific mental functions, such as memory, emotional and perceptual functions. These limitations might have been there even in stage 1 but were probably veiled by more acute needs. Expected and experienced improvements within the component 'Body Functions' were congruent. A benefit in weight maintenance is one of the primary goals in HPN [13,39]. Although some studies report HPN to disturb sleep [40], the patients in our study expected and experienced improved quality, duration and effectiveness of sleep. Of the component 'Activities and Participation', walking was the only category to be expected and to be experienced to improve. Arguably, this is to be seen in the context of increased energy and muscle power. As described before, patients in stage 1 did not report eating and drinking as impaired, whereas patients in stage 2 did. In addition, only the patients in stage 2 experienced improvements in eating and drinking due to HPN. Eating and drinking can still be heavily limited in patients shortly after the start of HPN, as described frequently in relation to oral mucositis as a side effect of antineoplastic therapy [41]. Relevant aspects that could not be expressed in ICF categories Only a few of the concepts extracted from the interviews could not be linked to specific ICF categories. Most relevant were aspects related to the ICF component 'Personal Factors', specifically aspects associated with coping strategies or the spiritual meaningfulness of the situation. This is in line with the literature stating that cancer patients describe making sense of their situation and the development of coping skills as the most relevant issues [42,43]. Methodological considerations We have to point out that it was not the intention of our study (and of qualitative studies in general) to draw generalizing conclusions on the expectations and experiences towards functioning and health of cancer patients under HPN, or to report outcomes of HPN in various subgroups. Rather, the results of our study should provide a pool of patient-relevant items to be investigated with respect to prevalence and change over time in future studies. Our study has a potential limitation. Selection of patients for the interviews could have been biased towards individuals with milder disease who would be ready to undergo an interview procedure. However, our findings have high face validity and are in line with the few studies conducted in this field. Thus, our study can contribute a first impression from the patients' perspective regardless of potential selection bias. Conclusions The ICF proved to be a satisfactory framework to standardize the response of patients with cancer on HPN. For most aspects reported by the patients, a matching concept and ICF category could be found. 
However, the development of categories of the component 'Personal Factors' should be promoted to close the existing gap when analyzing interviews with the aim of exploring individuals' perspectives on functioning and health in specific situations. The identification and standardization of concepts derived from individual interviews was the first step towards creating new measures based on patients' preferences and experiences which both catch the most relevant aspects of functioning and are sensitive enough to monitor change associated with an intervention such as HPN in a vulnerable population with cancer. Additional material Competing interests MM received a research grant from TravaCare GmbH, Hallbergmoos, Germany. The sponsor contributed to the discussion regarding optimal study design and participant recruitment. The sponsor was not involved in collecting, analyzing and interpreting the data, in the writing of the manuscript, or in the decision to submit the manuscript for publication. Additional file 1 Interview guideline.
2014-10-01T00:00:00.000Z
2010-04-16T00:00:00.000
{ "year": 2010, "sha1": "7a35da73dddc246a265bcafdcf3d2873a48ce4ff", "oa_license": "CCBY", "oa_url": "https://hqlo.biomedcentral.com/track/pdf/10.1186/1477-7525-8-41", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "803bc94ce853284c887a95593980f484844c7b4a", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
12943094
pes2o/s2orc
v3-fos-license
T4-Locally Advanced Nasopharyngeal Carcinoma: Prognostic Influence of Cranial Nerve Involvement in Different Radiotherapy Techniques Background. Cranial nerve involvement at disease presentation of nasopharyngeal carcinoma is not uncommon. We investigated the prognosis of patients with T4-locally advanced NPC, with or without cranial nerve involvement, and compared the outcome of patients treated using different radiotherapy techniques. Methods. In this retrospective study, 83 T4-locally advanced NPC patients were diagnosed according to the seventh edition of the American Joint Committee on Cancer staging system. All patients were treated using three-dimensional conformal radiotherapy (3D-CRT) or intensity-modulated radiation therapy (IMRT). The survival rate was analyzed using the Kaplan-Meier method. Results. The 5-year overall, locoregional-free, and disease-free survival rates of patients treated using IMRT were 88.9%, 75.2%, and 69.2%, respectively. The outcome in these patients was significantly better than that in patients treated using 3D-CRT, with survival rates of 58.2%, 54.4%, and 47.2%, respectively. There was no significant difference in the 5-year overall, locoregional-free, and disease-free survival rates of the patients with (64.2%, 60.5%, and 53.5%, resp.) and without (76.9%, 63.6%, and 57.6%, resp.) cranial nerve involvement. Conclusion. Locally advanced NPC patients treated using IMRT had significantly better outcomes than patients treated using 3D-CRT. Our results showed that the outcome of T4 NPC patients with or without cranial nerve involvement was not different. Introduction Nasopharyngeal carcinoma (NPC), a tumor arising from the epithelial cells of the nasopharynx, is one of the most commonly diagnosed head and neck malignancies in Taiwan, with an annual incidence rate of 6.88 per 100,000 in 2007 [1,2]. Because of the anatomic location of the nasopharynx and its tumor biology, radiotherapy-based treatment for nasopharyngeal carcinoma is the standard treatment modality [3,4]. For early-stage nasopharyngeal carcinoma, the mainstay treatment is radiotherapy alone, and for advanced nasopharyngeal carcinoma, concomitant and neoadjuvant chemotherapy are suggested [3,5]. Radiotherapy treatment remains challenging due to the proximity of the tumor to the surrounding vital organs, especially in tumors with intracranial extension [5][6][7]. Over the past decades, the development of three-dimensional conformal radiotherapy (3D-CRT) has permitted more selective delivery than conventional radiotherapy. More recently, intensity-modulated radiation therapy (IMRT) has produced more accurate dose distributions around targets [8][9][10]. Because of a rich submucosal lymphatic drainage system, early development of cervical lymph node metastasis occurs frequently, and locoregional invasion and metastatic spread have prognostic value. The local failure rate correlates with advanced T stage [4]. Other important factors include the presence of cranial nerve palsy, skull base erosion, and oropharyngeal and parapharyngeal extensions [4,11]. Approximately 70% of patients with NPC present with locally advanced disease such as nonmetastatic stage III or IV disease [6]. 
According to the seventh edition of the American Joint Committee on Cancer (AJCC) staging system in 2010, nasopharyngeal tumors with intracranial extension and/or involvement of cranial nerves, hypopharynx, or orbit or those with extension to the infratemporal fossa or masticator space are defined as stage T4 [12]. Destruction of the skull base resulting in intracranial extension with cranial nerve involvement is not unusual because the cranial nerves are located adjacent to the skull base and the tumor is infiltrating in nature. It has been shown in previous studies that 11-29% of patients had cranial nerve involvement at disease presentation [7,13,14]. The majority of cases with cranial nerve involvement are caused by superior invasion through the skull base into the cavernous sinus. The most commonly affected cranial nerve is the abducens nerve, followed by the trigeminal nerve. Many investigators have reported that cranial nerve deficit is a poor prognostic factor in T4 tumors [4,7,15]. However, most of these studies used the American Joint Committee on Cancer (AJCC) staging system prior to 1997, which also classified cases with skull base erosion as stage T4. The purpose of this study was to analyze the outcome of nonmetastatic T4 NPC patients treated at our department between January 1997 and January 2007. We compared the outcomes of patients treated using 3D-CRT and IMRT and the effect of cranial nerve involvement. Methods Between January 1997 and January 2007, 879 new NPC patients were diagnosed in the Department of Otolaryngology, Taipei Veterans General Hospital, Taiwan. Patients who had distant metastasis or disrupted treatment were excluded from this study. Eighty-three patients (9.4%) were diagnosed with T4-locally advanced disease and were enrolled in our study. The study was approved by the hospital's Institutional Review Board (IRB 2012-02-020AC). All patients underwent pretreatment evaluation, including complete medical history, physical and neurological examination; hematology and biochemistry profiles; and chest radiography, abdominal sonography, whole body bone scan, and magnetic resonance imaging (MRI) of the head and neck. They were restaged according to the seventh edition of the AJCC classification system. All patients were treated using external radiotherapy with or without concomitant or neoadjuvant cisplatin-based chemotherapy. Follow-up data were collected at periodic visits to our clinic until January 2012. The follow-up period was considered as the duration from the day of the first treatment to the day of death or the last clinic visit before analysis. Statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS) 19.0 software. The between-group analyses were performed using the chi-square test. The survival rate was calculated using the Kaplan-Meier method. A P value of less than 0.05 was considered statistically significant. Patient Distribution. Data from 83 patients were collected and analyzed retrospectively in our study. There were 69 (83.1%) men and 14 (16.9%) women, with a mean age of 50.8 ± 14.0 years (range, 18-78 years). The mean follow-up period was 66.5 months (range, 1-174 months). Fifty-three patients (63.9%) received 3D-CRT and 30 (36.1%) received IMRT. Neoadjuvant or concomitant cisplatin-based chemotherapy was administered to 32 patients (60.4%) in the 3D-CRT group and 25 patients (83.3%) in the IMRT group. 
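The survival figures that follow were obtained with the Kaplan-Meier method named in the statistical analysis above. The sketch below shows how such a between-group comparison could be reproduced with the open-source lifelines package; the follow-up times and event indicators are invented placeholders rather than the study data, the authors actually used SPSS 19.0, and the log-rank test is included only because it is the customary way to compare two Kaplan-Meier curves (the paper does not state which test produced its P values).

```python
# Illustrative sketch: Kaplan-Meier curves and a log-rank comparison for two
# treatment groups. All durations (months) and event flags are placeholders.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# durations = months from first treatment to death or last visit
# events    = 1 if the patient died, 0 if censored at the last visit
imrt_durations, imrt_events = [12, 60, 66, 80, 120], [0, 0, 1, 0, 0]
crt_durations, crt_events = [6, 18, 40, 55, 90], [1, 1, 1, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(imrt_durations, event_observed=imrt_events, label="IMRT")
print(kmf.survival_function_)  # stepwise survival estimate for the IMRT group

kmf.fit(crt_durations, event_observed=crt_events, label="3D-CRT")
print(kmf.survival_function_)  # stepwise survival estimate for the 3D-CRT group

# Log-rank test for the difference between the two survival curves
result = logrank_test(imrt_durations, crt_durations,
                      event_observed_A=imrt_events,
                      event_observed_B=crt_events)
print("log-rank p-value:", result.p_value)
```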
Age, sex, and N status showed no significant differences between the 3D-CRT and IMRT groups. Cranial nerve involvement was present in 42 patients (77.4%) in the 3D-CRT group and 13 patients (43.3%) in the IMRT group; it was significantly more frequent in the 3D-CRT group, as shown in Table 1. The most common symptoms of T4-staged nasopharyngeal carcinoma were diplopia (22.9%), followed by headache (15.7%), aural symptoms (14.5%), and neck mass (12.0%). Cranial nerve involvement was seen in 65.1% (54/83) of the cases. Seventeen of these patients showed involvement of multiple cranial nerves. The most commonly involved cranial nerve was cranial nerve VI (63.0%), followed by cranial nerves V (44.4%), II (13.0%), III (9.3%), and X (5.6%). Based on the level of cranial nerve involvement, the patients were divided into 2 groups: the anterior group, which includes cranial nerves I to VIII, and the posterior group, which includes cranial nerves IX to XII. Of the 54 patients with cranial nerve paralysis, 50 showed involvement of the anterior cranial nerves, 3 showed involvement of the posterior cranial nerves, and only 1 showed involvement of both the anterior and the posterior cranial nerves. Discussion In our experience, compared to 3D-CRT, IMRT in locally advanced NPC patients showed significantly better results. We also found that cranial nerve involvement did not influence the overall five-year survival in patients with T4-locally advanced NPC. Radiation-based therapy has been considered the standard modality for treating NPC patients [3,4]. Locoregional control is a fundamental goal of NPC treatment, and locoregional recurrence has been associated with poor outcome and a high risk of distant metastasis [4,11]. However, approximately 70% of patients present with locally advanced nonmetastatic disease [6]. Various radiotherapy techniques have been introduced in an attempt to improve the locoregional control of NPC using primary radiotherapy while reducing toxicity to normal organs. The development of IMRT has gained popularity for the treatment of head and neck cancer, including nasopharyngeal carcinoma [8][9][10]. With this technique, the intensity of the radiation beams can be modulated such that a high dose can be delivered more accurately to the target tumor while significantly reducing the dose to the surrounding vital organs and normal tissues [16]. The IMRT technique has gradually replaced conventional radiotherapy as a standard treatment modality for NPC, because it delivers a higher radiation dose to the primary disease and neck metastases while sparing the organs at risk, thereby enhancing the therapeutic ratio [8,9,16,17]. Radiotherapy for patients with NPC is challenging because it requires delivery of an adequate dose to the target tumor without causing potentially serious complications to adjacent critical organs, especially in patients with cranial nerve involvement and intracranial extension [16,17]. We found that the 5-year overall, locoregional-free, and disease-free survival rates of T4-locally advanced NPC patients treated using IMRT were 88.9%, 75.2%, and 69.2%, respectively, which were significantly better than the corresponding values (58.2%, 54.4%, and 47.2%, resp.) in patients treated using 3D-CRT (P = 0.004, 0.018, and 0.046, resp.). Most studies documented N status as a prognostic factor for survival. Liu et al. 
reported that T stage of disease was a significant predictor of disease-free survival, favoring those with early-stage (T1-2) disease, and that N status was also a significant prognostic factor for the overall survival [4]. Lee et al. found that patients with more aggressive N statuses have poorer clinical outcomes, but the influence was smaller in T4-staged patients [18]. However, we found that N status does not affect the survival rates. All of our patients were diagnosed with T4 disease. Although our N3 group was small, we found that N status had less influence in patients with an advanced primary tumor. It has been shown that, compared to conventional radiotherapy, IMRT better improves the outcome of nasopharyngeal carcinoma [8,9,16]. Özyar et al. reported 3-year overall survival rates of 71 and 60% and disease-free survival rates of 74 and 46% for IVA- and IVB-staged patients, respectively. Their results also showed that advanced N status was an unfavorable prognostic factor for overall (P = 0.03), disease-free (P = 0.0004), and distant metastasis-free (P = 0.0003) survival [11]. Lai et al. observed a trend of improvement in disease-free survival in the IMRT group compared to the two-dimensional radiotherapy (2DRT) group [16]. A study at the Memorial Sloan-Kettering Cancer Center also found a trend for improved local control with IMRT compared to local control of 79% in 35 patients treated using 3D-CRT (P = 0.11) [10]. Excellent locoregional control for NPC was also achieved using IMRT in a University of California, San Francisco, study. The estimated 4-year local progression-free, locoregional progression-free, and distant metastases-free rates were 97%, 98%, and 66%, respectively, and the 4-year overall survival was estimated to be 88% [9]. Our data confirmed that local and distant disease control in locally advanced NPC was better with IMRT than with 3D-CRT. The diagnosis of nasopharyngeal carcinoma can be a challenge to physicians. This is because a nasopharyngeal neoplasm may be hidden, without nasal and aural symptoms, and present nonspecific signs such as diplopia, facial numbness, or headache as the initial manifestation [2,19]. Eleven to 29% of patients showed cranial nerve involvement at disease presentation [7,13,14]. Cranial nerve involvement was observed in 65.1% of our nonmetastatic T4 NPC patients. The abducens and the trigeminal nerves were the most frequently affected. Based on cranial nerve involvement, our patients were classified into 2 subgroups. The 5-year overall, locoregional-free, and disease-free survival rates of patients with cranial nerve involvement were 64.2%, 60.5%, and 53.5%, respectively, and in those without cranial nerve involvement were 76.9%, 63.6%, and 57.6%, respectively. There were no significant differences in these values between the 2 groups. We also divided the level of cranial nerves into anterior and posterior groups, and the survival rates did not differ between these groups. Furthermore, we found that there was no significant difference in the 5-year overall, locoregional-free, and disease-free survival rates between patients with T4 disease with or without cranial nerve involvement in the 3D-CRT group (58%, 53.1%, and 46.3%, resp., versus 58.3%, 58.3%, and 50%, resp.; P = 0.35, 0.523, and 0.594, resp.) or the IMRT group (83.9%, 83.9%, and 76.2%, resp., versus 93.8%, 68.1%, and 63%, resp.; P = 0.94, 0.323, and 0.586, resp.). Roh et al. 
investigated prognostic factors in nasopharyngeal carcinoma and found that patients with involvement of both anterior and posterior cranial nerves had a worse prognosis than those with involvement of either anterior or posterior cranial nerves (P = 0.0219) [15]. Chang et al. also presented similar results. They found that patients with extensive cranial nerve involvement have worse survival than patients with limited involvement of anterior or posterior cranial nerves (P < 0.001) [20]. Our data showed similar survival in the anterior group, the posterior group, and the group with involvement of both anterior and posterior nerves. But a majority of patients in this study had only anterior cranial nerve involvement (92.6%). Further studies are needed to establish the role of different groups of cranial nerve involvement in NPC survival. Cooper et al. found that the outcome in subgroups of T4-locally advanced NPC disease was not significantly different based on cranial nerve involvement alone, skull base erosion alone, or both. In most studies, cranial nerve involvement was recognized as a poor prognostic factor [21]. Altun et al. reported that the overall 5-year survival rate in patients with cranial nerve deficit was 25% compared to 58% in patients without cranial nerve deficit (P = 0.01). They documented that patients with cranial nerve palsy had a worse prognosis than patients with skull base erosion alone [13]. We restaged all patients according to the seventh edition of the AJCC staging manual in 2010. Our data showed that cranial nerve involvement did not affect the prognosis of T4-locally advanced NPC patients. It is important to note that previous reports used an older staging system, which classified skull base invasion as T4 stage. According to the current staging system, skull base destruction is defined as T3 stage, which has a better prognosis than T4 stage. Limitations of this study include a small sample size in the IMRT group because our medical center started using IMRT for the treatment of NPC patients only in late 2003. A larger population of patients and a longer follow-up period to evaluate the long-term outcomes and complications are needed. Due to its retrospective nature, chemotherapy in our study was not uniform. In addition, cranial nerve involvement may be asymptomatic, and sometimes the symptoms may be subtle. Evaluation of cranial nerve palsy using clinical symptoms and physical examination also has certain limitations; therefore, a more accurate and careful neurological examination is required. Conclusion In conclusion, we found that cranial nerve involvement, which was proposed to be a poor prognostic factor in the past, had no significant effect on the survival of T4-locally advanced NPC patients. Patients with locally advanced NPC should be encouraged to complete the entire course of treatment. IMRT delivers a higher radiation dose and better coverage of the tumor region, thereby enhancing the therapeutic ratio. Improvements in treatment modality and better radiotherapy techniques combined with chemotherapy have increased the survival rate of locally advanced NPC patients.
2018-04-03T00:35:58.150Z
2013-12-09T00:00:00.000
{ "year": 2013, "sha1": "871a4e62fa8235659b27c3e5b6bf31feea43a7d3", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/tswj/2013/439073.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "be9738b39debfde1d42aff41fd4a8a1722a63212", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
249058622
pes2o/s2orc
v3-fos-license
Evaluation of serum levels of cathepsin S among colorectal cancer patients Objective Colorectal cancer is the third most common cancer worldwide. Cathepsins are proteases that are known to be involved in cancer progression and metastasis. The aim of this study is to evaluate the levels of serum cathepsin S in patients and control subjects and its effect on the prognosis of the cancer. Methods In this case-control study, colorectal cancer patients referred to our gastroenterology clinic were included. The control group consisted of healthy individuals. Cathepsin S levels were analyzed in these patients, and a checklist consisting of demographic data, cancer stage, colonoscopy findings, the CEA marker and cathepsin S levels was recorded. Results Among the 80 patients and 80 healthy controls included in the study, age, gender and BMI were not significantly different between the two groups (p = 0.265, p = 0.752 and p = 0.2, respectively). Cathepsin S levels were significantly greater in the patient group (p < 0.001) and were significantly correlated with the stage of the tumor. The CEA marker was also linearly related to increased levels of cathepsin S, p < 0.001. Conclusion Our study concluded that cathepsin S is elevated in cancer patients and can be a significant marker for the prognosis of colorectal cancer. Introduction Colorectal cancer is one of the leading causes of cancer death worldwide. It accounts for 6% of all types of cancer and is the third most common cancer worldwide. Australia, New Zealand, Canada, the United States and parts of Europe are reported to have the highest incidence of the cancer, whereas countries with the lowest rates include China, India, parts of Africa and South America. Colorectal cancer accounts for 6.1% of all cancers in men and 13.1% in women [1]. According to the Health Ministry, colorectal cancer is the third leading cause of death in Iran after cardiovascular disease and accidents. Colorectal cancer is one of the nine most common cancers reported in Iran, and the prevalence is rising among young people [2,3]. The risk factors known to be associated with colorectal cancer include smoking, alcohol intake, obesity and intake of red meat, whereas age, family history of the cancer and inflammatory bowel disease are non-modifiable risk factors [4][5][6]. Cathepsins are lysosomal peptidases belonging to the classes of cysteine, serine, and aspartic proteases. Cathepsins were initially described as intracellular peptide hydrolases, although several cathepsins also have extracellular functions. The cysteine cathepsins B, C, F, H, L, K, O, S, V, W and X belong to the papain family and are the largest class of cathepsins. Cathepsins are produced in the form of inactive enzymes and are converted into active and mature enzymes during processing [7]. Cathepsin S is distinguished from other cysteine proteases by its limited tissue distribution. While most members of the cathepsin family are expressed in a wide variety of tissues and organs, cathepsin S is found mainly in the spleen, lymph nodes, monocytes, macrophages, and several antigen-presenting cells (APCs). This unique distribution pattern suggests that cathepsin S is highly involved in the immune response. Cathepsins are known to play an important role in cancer metastasis and progression. Increased expression of cathepsins is associated with poor tumor prognosis and has therefore been suggested as a marker for the prognosis of cancer. Cathepsin S causes the degradation of the extracellular matrix and promotes cell metastasis, such as breast-to-brain metastasis [8]. 
A number of studies have indicated that cathepsins contribute to the tumor microenvironment and can be detected at high levels in colon, ovarian, lung, breast, liver, head and neck, and brain cancers [9]. The aim of this study is to evaluate the serum levels of cathepsin S in colorectal cancer patients in comparison to healthy controls and its role in the prognosis of the cancer.

Methods In this case-control study, patients with colorectal cancer referred to the gastroenterology clinic in 2019 for colonoscopy were included. The control group was composed of healthy individuals referred to the center for colonoscopy. Written consent was obtained from all participants. Exclusion criteria were: patients presenting with multiple primary malignancies, patients in whom colonoscopy could not be performed, and those with hematological disorders. Nine cc of venous blood was obtained from each patient and sent to the laboratory. Blood samples were centrifuged and the serum was frozen for the tests. The level of cathepsin S was measured with a Human Cathepsin S ELISA kit. Information on demographic characteristics, disease stage (based on additional staging tests performed on the patient), pathology type and anatomical tumor location, as well as CEA level, was obtained from patients' files and colonoscopy reports. The data were computerized and analyzed using SPSS v22. The mean and standard deviation were used to describe the variables. The t-test and chi-square test were used to evaluate the relationships between the variables and to test the hypotheses. The study was approved by the ethical committee of (XXX). The unique identifying number is: researchregistry7621. The methods are reported in accordance with STROCSS 2021 [10].

Results This study included 80 colorectal cancer patients and 80 healthy controls. The mean age of the patients in the case and control groups was 58.9 ± 11.7 and 56.9 ± 11.7 years, respectively. The highest prevalence of colorectal cancer was in the age group 50-69 years (47.5%), followed by 30-49 years (27.5%), and most control participants were also aged 50-69 years. The difference between the mean age of the patient and control groups, based on the independent t-test, was not statistically significant (p = 0.265). The difference in the frequency distribution of age groups between patients and controls, based on the chi-square test, was not statistically significant (p = 0.433). The patient group included 43 males (53.8%) and 37 females (46.2%); the control group included 41 males (51.2%) and 39 females (48.8%). The gender difference between the two groups was not statistically significant (p = 0.752). In terms of education level, in the case group 63.7% had primary education, 23.8% did not finish primary education, 8.8% held an undergraduate certificate, and 3.8% had studied at foreign universities. In both the patient and control groups, the majority of participants lived in suburban areas (71.3% and 80%, respectively); this was not statistically different between the two groups (p = 0.197). In terms of occupation, in the patient group the largest share of participants were housewives (40%); 27.5% were jobless, 17.5% were self-employed, 5% were farmers and 6.3% were employees. In the control group, the majority were housewives (36.3%) and self-employed (26.3%). In terms of employment status, the two groups differed significantly (p = 0.02).
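As a rough illustration of the statistical comparisons just described (the study itself used SPSS v22), the sketch below runs an independent two-sample t-test and a chi-square test in Python. The simulated cathepsin values are drawn from the reported group means and standard deviations, and the gender table uses the reported counts, but the code, seed and generated data are our own assumptions, not part of the study.

```python
# Illustrative re-implementation of the reported comparisons; hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated serum cathepsin S (ug/L) using the reported means/SDs (Table 3).
cathepsin_cases = rng.normal(loc=21.55, scale=6.30, size=80)
cathepsin_controls = rng.normal(loc=12.35, scale=1.87, size=80)

# Independent two-sample t-test, as used for the case/control comparison.
t_stat, p_val = stats.ttest_ind(cathepsin_cases, cathepsin_controls, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.2e}")  # expected: highly significant

# Chi-square test on the reported 2x2 gender-by-group table.
table = np.array([[43, 37],   # cases: male, female
                  [41, 39]])  # controls: male, female
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p_chi:.3f}")
```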
In the case and control groups, 59% and 65.8% of individuals had a normal BMI, respectively, and 41% and 31.6% had a BMI greater than normal. The two groups were not significantly different in terms of BMI, p = 0.2 (Table 1). Seven patients (8.8%) reported a history of intestinal polyps and 6 patients (7.5%) a history of inflammatory bowel disease (IBD). A history of colorectal cancer was seen in first-degree relatives of 24 patients (30%). In terms of staging, most tumors were stage II (35%) or stage III (31.3%). Post hoc testing showed that the difference in serum cathepsin S level between patients with stage I and stage III tumors was significant (p < 0.001), as was the difference between stage I and stage IV (p < 0.001). Cathepsin S levels were not significantly different between stage I and stage II patients (p = 0.348). Cathepsin S levels differed significantly between stages II and III (p < 0.001) and between stages II and IV (p = 0.001), but not between stages III and IV (p = 0.408). Overall, as the stage of the cancer increases, serum cathepsin levels increase significantly as well, p < 0.001 (Table 2). The mean cathepsin S level in the patient and control groups was 21.55 ± 6.3 and 12.35 ± 1.87 μg/L, respectively. The t-test showed that the difference in cathepsin S level between the two groups is statistically significant, p < 0.001 (Table 3). Histopathological analysis showed that the majority of tumors were in the descending colon (33.8%) and the ascending colon (25%), with the lowest frequency in the rectosigmoid area (6.3%). All tumors (100%) were adenocarcinomas. Mean cathepsin S levels did not differ significantly by tumor location, p = 0.984. The distribution of cathepsin S levels across age groups was also examined: serum cathepsin did not differ significantly among age groups in the patient group (p = 0.399). Similarly, gender, BMI and blood group type were not associated with cathepsin levels in this group (p = 0.342, p = 0.251 and p = 0.743, respectively). In the correlation matrix, no direct or inverse correlation was found between age and serum cathepsin S levels in the studied patients (p = 0.164); however, there was a direct linear relationship between serum CEA and cathepsin S levels (p < 0.001): an increase in CEA levels was associated with an increase in serum cathepsin S levels (Table 4).
Discussion In the current study, we evaluated the serum levels of cathepsin S in colorectal cancer patients in comparison with healthy controls. Our findings demonstrate that cathepsin S levels are significantly elevated in colorectal cancer patients irrespective of age, gender, BMI and blood group. Furthermore, the increase in cathepsin S was directly correlated with the advancement of the cancer, and it was significantly correlated with carcinoembryonic antigen (CEA), a colon cancer marker. Cathepsin S is a secretory protein and its expression has been reported in a number of cancers, such as hepatocellular carcinoma [11] and lung and prostate cancer [12,13]. Antibodies against cathepsin and RNA silencing have been reported to be effective in inhibiting tumor growth and progression and in inducing apoptosis [14][15][16]. A retrospective study conducted by Gormley, Hegarty [17] on 560 colorectal cancer patients reported that the expression of cathepsin S was 1.3-fold greater in patients than in controls. The study also showed that more than 95% of the patients presented with increased expression of cathepsin S. In a cross-sectional study, Liu, Liu [18] evaluated the levels of cathepsin S in gastric cancer along with esophageal, nasopharyngeal, liver and colorectal cancer patients in comparison with healthy controls (n = 496) and reported that cathepsin S levels were significantly greater in these patients. The study also reported that the levels were lower in stage I and II patients than in stage III and IV. Our findings likewise indicate that progression of the cancer to later stages is significantly associated with greater serum cathepsin levels compared to the early stages. The CEA marker was also significantly associated with cathepsin levels. The study found that cathepsin S was not associated with gender, smoking and alcohol status, tumor grade, or age. In an in-vivo study by Burden, Gormley [13], Fsn0503, a cathepsin S antibody, inhibited proteolysis and showed anti-angiogenic properties. The study also showed, from biopsy samples of colorectal cancer tissue, that cathepsin expression was significantly enhanced compared to healthy tissue. Huang, Chen [19] reported that targeting cathepsin S can induce autophagy in colorectal adenocarcinoma cells. Our study does not report the effects of chemotherapy or cancer surgery on cathepsin levels; clinical trials and studies of therapies targeting cathepsin S could provide firmer conclusions.

Conclusion In line with previous publications, we report that cathepsin S is significantly elevated in colorectal cancer patients and is associated with poor prognosis.

Ethical approval All procedures performed in this study involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments.

Source of funding No funding was secured for this study.

Author contribution Dr. Koroush Ghanadi: conceptualized and designed the study, drafted the initial manuscript, and reviewed and revised the manuscript. Dr. Saber Ashorzadeh and Dr. Asghar Aliyepoor: designed the data collection instruments, collected data, carried out the initial analyses, and reviewed and revised the manuscript. Dr. Khatereh Anbari: coordinated and supervised data collection, and critically reviewed the manuscript for important intellectual content.

Trial registry number None.

Provenance and peer review Not commissioned, externally peer-reviewed.

Availability of data and material Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
2022-05-26T15:03:12.009Z
2022-05-24T00:00:00.000
{ "year": 2022, "sha1": "a98831566b79517ed8e18207f1c36ac7b361a02d", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.amsu.2022.103831", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9e6df153ba95143a533c8b6096680d9c593099e0", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
226109307
pes2o/s2orc
v3-fos-license
Intraspecific hybridization in greengram genotypes

Hybridization was carried out within the primary gene pool, between diverse parents of greengram (Vigna radiata (L.) Wilczek) genotypes. Four well-known genotypes were used for hybridization aimed at yield improvement, and four cross combinations were obtained. Out of the four crosses, the cross L2 x T1 recorded improvement in the largest number of characters through this parental combination: number of branches per plant, days to fifty percent flowering, number of clusters per plant, number of pods per plant, single plant yield, dry matter production and days to full maturity, while L2 x T2 showed no improvement.

Introduction Vigna radiata (L.) Wilczek is commonly known as greengram or mungbean, or pachapayaru in Tamil. It is the most widely cultivated species among the six Asiatic cultivated Vigna species. It provides the cheapest high-quality protein for human and animal nutrition and has some added features compared to other pulses: it is highly drought tolerant and well adapted to a wide range of soil conditions, including light soils, and can thrive even under limited irrigation; moreover, it is suited for crop rotation and crop mixtures (Baldev, 1988) [2] and (Sadaphal, 1988) [10]. At present, the yield level of greengram is very low due to genetic factors, and devastating biotic diseases such as mungbean yellow mosaic virus (MYMV) can reduce the yield drastically. This disease is a major constraint on the production and productivity of greengram. Besides other management factors, the prime cause of the low productivity can be ascribed to the inherently low yielding potential of the cultivars coupled with susceptibility to other diseases. Varietal breeding programs are comparatively simple and time-saving for developing new varieties. In this study, the genotypes KMG 189 and KMG 242 are resistant to MYMV. The crossing programmes taken up in this crop have so far resulted in only limited success in generating variability, although the yield improvement achieved has been good. The basic reason for this limited success has been the limited variability among the parents used for hybridization in most studies. There has always been the possibility of improving the crop by incorporating donor genes into the available ruling varieties. Utilization of the primary gene pool itself can result in tremendous improvement in yield, and the variability available in the primary gene pool can be exploited in the development of new greengram varieties. To utilize this variability, it is essential to attempt intraspecific crosses and to develop hybrids. These hybrids need to be critically evaluated, both as such and in the segregating generations, for improvement in yield and yield components. Introgressed materials developed through wide crosses can also serve as genetic reservoirs of novel genes, apart from contributing to the improvement of yield and yield components. With a view to generating segregants with better yield along with biotic resistance such as MYMV resistance, the available parents were evaluated for intraspecific hybridization in greengram.

Materials and Methods To generate and characterize variability through intraspecific crosses, hybridization of Vigna radiata with diverse parents of greengram was attempted. The intraspecific hybridization among greengram accessions used the female parents VRM (Gg) 1 (L1) and Pusa bold (L2) and the male parents KMG 189 (T1) and ML 682 (T2).
The crossing block was raised at the TNAU Agricultural Research Station, Virinjipuram, Vellore, Tamil Nadu during 2013 (Fig. 1 and Fig. 2). The male and female parents were raised in rows of 2-meter length, with plant-to-plant spacing of 20 cm and row-to-row spacing of 50 cm. Biometrical observations for 13 characters were recorded, on ten randomly selected plants for each parent and F1. The mean values were subjected to statistical analysis using Excel.

Results The mean performance of the four hybrids was studied for 13 traits, and analyses of variance for the different characters were carried out. The females vs. males variance was significant for all the characters studied (Table 1). Parents vs. crosses also showed significant variance for all the traits. The mean performance of parents and hybrids (Table 2) was studied to identify superior crosses and suitable parents. A wide range of variation was observed among parents and hybrids for the various traits studied. Among the parents, plant height ranged from 27.05 to 72.00 cm, whereas the number of branches per plant ranged from 16.78 to 52.73. Days to fifty percent flowering ranged from 33.00 to 48.00 days. The number of clusters per branch ranged from a low of 1.30 to a high of 9.40. The number of clusters per plant ranged from 1.50 to 3.50, the number of pods per plant from 8.60 to 58.15, and pod length from 6.72 to 8.00 cm. The number of seeds per pod ranged from 9.50 to 10.55. Hundred-seed weight ranged from 3.00 to 3.80, single plant yield from 6.55 to 7.18 g, and dry matter production from 3.8 g to 20.93 g, while days to full maturity ranged from 60 to 80 days. The hybrids likewise showed wide variation for the various traits. The crosses L1 x T1 and L1 x T2 are shown in Fig. 1, and L2 x T1 and L2 x T2 in Fig. 2.

Discussion The present investigation deals with intraspecific hybridization in greengram for assessment of breeding value. There are many approaches to selecting parents for a hybridization programme, e.g. selection of parents based on per se performance; if parents are identified on the basis of divergence analysis, the resulting recombinants through hybridization would be more heterotic, with the possibility of obtaining a larger frequency of better segregants in subsequent generations (Reddy, 1998) [9] and Aher et al. (2001) [1]. For intraspecific hybridization, the parents were selected to represent maximum genetic diversity, and they were judged on the number of characters improved over the parental combination. The lines (as females) and the testers (as males) were selected based on an understanding of their performance from field experience. Considering all the genetic factors, the parental lines VRM (Gg) 1 and Pusa bold as females and KMG 189 and ML 682 as males were selected for performing intraspecific crosses. The ability of the parents was judged by the contribution of each parental combination to hybrid performance; four cross combinations, namely L1 x T1, L1 x T2, L2 x T1 and L2 x T2, were studied for the many characters.
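For illustration only (the authors analyzed their means in Excel), the following minimal sketch shows how a one-way ANOVA across the four parents could be run on the ten sampled plants per genotype; every value below is invented for the example and is not data from the study.

```python
# Hypothetical one-way ANOVA sketch; values are invented, not study data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Ten hypothetical single-plant yields (g) per parent, centred near the
# reported range of 6.55-7.18 g.
yields = {name: rng.normal(loc=mean, scale=0.4, size=10)
          for name, mean in [("L1", 6.6), ("L2", 7.2), ("T1", 6.9), ("T2", 6.8)]}

f_stat, p_val = f_oneway(*yields.values())
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
```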
Out of the four cross combinations, the cross L2 x T1 (Pusa bold x KMG 189) showed improved performance for many characters: number of branches per plant, days to fifty percent flowering, number of clusters per plant, number of pods per plant, single plant yield, dry matter production and days to full maturity. The cross combination L1 x T1 contributed to improving the number of clusters per branch, pod length and hundred-seed weight, whereas for L1 x T2 the improved characters were plant height, length of branches and number of seeds per pod. The cross L2 x T2 did not contribute to the improvement of any character. In this study, some parental combinations improved no characters, which indicates a lack of diversification and thus the genetic closeness of those varieties. From these effects it is clear that additive effects were predominant for all the characters. Gamble (1962) [3] reported that a reduction in the magnitude of additive effects was met with in crosses involving parents that had undergone selection for the characters in question. Narayanan (1978) [6] and Kadambavanasundaram (1980) [5] in cotton and Iyemperumal (1983) [4] in greengram reported the influence of dominance and epistasis when widely divergent parents were used. It may also be due to the fact that the characters concerned possess complex inheritance with a low magnitude of additive effects together with high per se and heterotic performance, as opined by Rathnaswamy and Jagathesan (1984) [8] and Aher et al. (2001) [1].

Conclusion Appreciable heterosis is present in the hybrids investigated. However, the development of commercial hybrids does not seem to be a possibility at present, due to the lack of male-sterility mechanisms and the low amount of natural crossing in greengram. Alternatively, high-yielding homozygotes equal to or better than the heterotic F1 hybrids can be developed in greengram through proper and efficient handling of the cross combinations.
2020-08-06T09:07:59.906Z
2020-03-01T00:00:00.000
{ "year": 2020, "sha1": "256ac1ecfd0802eebe7303393a671a433a4d3f85", "oa_license": null, "oa_url": "https://www.chemijournal.com/archives/2020/vol8issue2/PartB/S-8-2-39-832.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "5fe797a287c3912da838884602513b0d7fa0692b", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Biology" ] }
119336415
pes2o/s2orc
v3-fos-license
The low-mass companion of GQ Lup

Using NACO on the VLT in imaging mode we have detected an object at a distance of only 0.7 arcsec from GQ Lup. The object turns out to be co-moving. We have taken two K-band spectra with a resolution of $\lambda/\Delta\lambda = 700$. Here we analyze the spectra in detail. We show that the shape of the spectrum is not spoiled by differences in the Strehl ratio between the blue and the red part, nor by differential refraction. We reanalyze the spectra and derive the spectral type of the companion using classical methods. We find that the object has a spectral type between M9V and L4V, which corresponds to a $T_{eff}$ between 1600 and 2500 K. Using GAIA-dusty models, we find that the spectral-type derivation is robust against different log(g) values. The $T_{eff}$ derived from the models is again in the range between 1800 and 2400 K. While the models nicely reproduce the general shape of the spectrum, the $^{12}$CO lines in the spectrum have about half the depth of those in the model. We speculate that this difference might be caused by veiling, as in other objects of similar age and spectral class. We also find that the absolute brightness of the companion matches that of other low-mass free-floating objects of similar age and spectral type. A comparison with the objects in USco observed by Mohanty et al. (2004) shows that the companion of GQ Lup has a lower mass than any of these, as it is of later spectral type and younger. The same is true for the companion of AB Pic. To obtain a first estimate of the mass of the object we compare the derived $T_{eff}$ and luminosity with those calculated from evolutionary tracks. We also point out that future instruments, like NAHUAL, will finally allow us to derive the masses of such objects more precisely.

Introduction More than 160 extrasolar planets have now been discovered indirectly by means of precise radial velocity measurements of their host stars. At least for the 6 transiting planets, the planetary nature of these objects is confirmed (e.g. Charbonneau et al. 2000). In two additional cases the planetary nature of the orbiting objects is confirmed astrometrically (Benedict et al. 2002). In many other cases, astrometric measurements are at least precise enough to rule out a binary star viewed almost face-on. A statistical analysis shows that the observed frequency of solar-like stars having planets with a minimum mass $\geq 0.3\,M_{Jupiter}$ orbiting at distances of $\leq 5$ AU is 9% (Lineweaver & Grether 2003). It is thus quite surprising that brown dwarfs are very rare as close companions to normal stars, in contrast to planets and stellar companions. The lack of brown dwarf companions is thus often referred to as the brown dwarf desert. Marcy et al. (2003) estimate from their radial velocity (RV) survey of old, solar-like stars that the frequency of brown dwarfs within 3 AU of the host stars is only 0.5 ± 0.2%, much smaller than the frequency of planets or the frequency of binaries. This result was recently confirmed by a radial velocity survey of stars in the Hyades which, combined with AO imaging, also shows that the number of companions with masses between $10\,M_{Jupiter}$ and $55\,M_{Jupiter}$ at distances $\leq 8$ AU is $\leq 2\%$. Studies by Zucker & Mazeh (2001) show that the frequency of close companions drops off for masses higher than $10\,M_{Jupiter}$, although they suspect there is still a higher-mass tail that extends up to probably $20\,M_{Jupiter}$.
Of the currently known "planets", 15 have an $m \sin i$ between 7 and 18 $M_{Jupiter}$. It has been argued by Rice et al. (2003) that these massive planets do not form by core accretion, because their host stars do not show enhanced metallicity, unlike stars hosting planets of lower mass. Wide companions (e.g. $d \geq 50$ AU) are detected by means of direct imaging. Unfortunately, this means that their masses can only be estimated by comparing their temperatures and luminosities with evolutionary tracks. For these pairs the situation is possibly different, as direct imaging campaigns have probably turned up 11 brown dwarfs orbiting normal stars. The result of all search programs for objects in TW Hydra, Tucanae, Horologium and the β Pic region is that the frequency of brown dwarfs at distances larger than 50 AU is 6 ± 4% (Neuhäuser et al. 2003). This result implies either that the frequency of wide binaries consisting of a brown dwarf and a star is much higher than that of close binaries, or that there are serious problems with the tracks. Recently, three very low-mass companions have been identified that could possibly even have masses below 13 $M_{Jupiter}$. 2MASSWJ 1207334-393254 is a brown dwarf with spectral type M8V. A co-moving companion has been found, located at a projected distance of 70 AU (Chauvin et al. 2005a). The companion has a spectral type between L6 and L9.5. Assuming that 2MASSWJ 1207334-393254 is a member of the TW Hydra association, and assuming an age of $8^{+4}_{-3}$ Myr, the mass of the primary is 25 $M_{Jupiter}$. Using the non-gray models from Burrows et al. (1997), the authors estimate the mass of the companion as 3 to 10 $M_{Jupiter}$. AB Pic also has a very low-mass companion (Chauvin et al. 2005b). AB Pic is a K2V star in the Tucana-Horologium association; its age is estimated as ∼30 Myr. The co-moving companion, with a spectral type of L0 to L3, is located at a projected distance of 260 AU from the primary. The K-band spectrum of the companion shows the NaI doublet at 2.205 and 2.209 µm. For this object, the authors give a mass estimate between 13 and 14 $M_{Jupiter}$. The third such object is GQ Lup, which will be discussed here.

GQ Lup GQ Lup is a classical T Tauri star of YY Orionis type located in the Lupus I star-forming region. Quite a number of authors have determined the distance to this star-forming region: Hughes et al. (1993) find 140 ± 20 pc, Knude & Høg (1998) 100 pc, Nakajima et al. (2000) 150 pc, Satori et al. (2003) 147 pc, Franco et al. (2002) 150 pc, de Zeeuw et al. (1999) 142 ± 2 pc, and Teixeira et al. (2000) 85 pc, but note that 14 stars of this group have measured parallax distances, which are on average 138 pc. The most likely value for the distance thus is 140 pc, which will be used in the following. The spectral type of GQ Lup is K7V. Batalha et al. (2001) find a veiling between 0.5 and 4.5 and an extinction $A_V$ of 0.4 ± 0.2 mag, which implies $A_K = 0.04 \pm 0.02$ mag and $A_L = 0.02 \pm 0.01$ mag. Using spectra taken with HARPS, we derive a $v \sin i$ of 6.8 ± 0.4 km s$^{-1}$, assuming a Gaussian turbulence velocity of 2 km s$^{-1}$ and a solar-like center-to-limb variation. The broad-band energy distribution of GQ Lup is shown in Fig. 1 together with that of a K7V star of 1.5 $R_\odot$ located at 140 pc. In the optical, the data fit nicely to a star with low to medium veiling, as observed. In the infrared, a huge excess due to the disk is seen.
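The absolute magnitudes quoted in the next section follow from the adopted distance and extinction via the standard distance modulus, $M = m - 5\log_{10}(d/10\,\mathrm{pc}) - A$. A minimal numerical check (our illustration, not code from the paper), using the companion magnitudes $m_{Ks} = 13.1$ and $m_{L'} = 11.7$ reported below:

```python
# Distance-modulus check for the companion photometry; standard relation only.
import math

def absolute_mag(m_app: float, d_pc: float, extinction: float = 0.0) -> float:
    """Absolute magnitude from apparent magnitude, distance (pc) and extinction."""
    return m_app - 5.0 * math.log10(d_pc / 10.0) - extinction

# d = 140 pc, A_K = 0.04 mag and A_L = 0.02 mag as adopted above.
M_Ks = absolute_mag(13.1, 140.0, 0.04)  # ~7.3, consistent with the quoted 7.4 +/- 0.1
M_Lp = absolute_mag(11.7, 140.0, 0.02)  # ~6.0, as quoted
print(f"M_Ks = {M_Ks:.1f} mag, M_L' = {M_Lp:.1f} mag")
```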
The spectrum of the companion We detected a faint companion at a distance of 732.5 ± 3.4 mas and a position angle of 275.45 ± 0.30° (Neuhäuser et al. 2005). As described in more detail in Mugrauer & Neuhäuser (2005), using our own imaging data as well as data retrieved from the HST and SUBARU archives, it was shown that the pair has common proper motion at a significance level larger than 7σ. With this question settled, the next question is what the companion is. Using NACO, we obtained two spectra of the companion. The first spectrum was taken on August 25, 2004, the second on September 13, 2005. The first spectrum had an S/N ratio of only 25, which is why it was repeated; the second spectrum has an S/N ratio of 45. For our observations we used the S54 SK grism and a slit width of 172 mas, which gives a resolution of about $\lambda/\Delta\lambda = 700$. Because the Strehl ratio, as well as the refraction, depends on wavelength, the flux loss in the blue and in the red part of the spectrum may differ if a very narrow slit is used. However, since we used a relatively wide slit and observed at airmasses of 1.24 and 1.30, respectively, this effect is only 1.5% for the wavelength region between 1.8 and 2.6 µm. There are several classical methods to derive the spectral types of late-type objects from spectra taken in the K-band. Using the K1-index from Reid et al. (2001), which is simply the flux ratio between 1.964 and 2.075 µm, we derive spectral types in the range between L2V and L4V. However, this index is known to have an accuracy of only one spectral class. In order to be on the safe side, we thus estimate the spectral type to be between M9V and L4V.

Fig. 2. Comparison with the spectrum of the companion of AB Pic (Chauvin et al. 2005b); these authors assign a spectral type L0 to L3 and an age of 30 Myr to that companion. As can easily be seen, the spectra of the two objects are quite similar, and the depth of the CO lines is also similar. We assign a spectral type M9 to L4.

Another piece of evidence is the NaI doublet at 2.2056 and 2.2084 µm. These lines vanish at spectral types later than L0V. Unfortunately, there is a telluric band between 2.198 and 2.200 µm, which is difficult to distinguish from the NaI lines in a low-resolution spectrum. We thus can only give an upper limit of 3 Å for the equivalent width of the NaI doublet. Using the conversion from spectral type to $T_{eff}$ from Basri et al. (2000), Kirkpatrick et al. (1999) and Kirkpatrick et al. (2000), this range of spectral types corresponds to $T_{eff}$ values in the range between 1600 and 2500 K. The expected K−L' colours of an object with a spectral type M9V to L4V are between 0.5 and 1.2 mag, which matches reasonably well the derived K−L' colour of 1.4 ± 0.3 of the companion (Golimowski et al. 2004). Using the extinction to the primary, and assuming a distance of 140 pc, we derive from the observed brightnesses of $m_{Ks} = 13.1 \pm 0.1$ and $m_{L'} = 11.7 \pm 0.3$ absolute magnitudes of $M_{Ks} = 7.4 \pm 0.1$ and $M_{L'} = 6.0 \pm 0.3$ mag for the companion (Fig. 1). Old M9V to L4V objects have $M_K$ values between 9.5 and 12 mag and $M_{L'}$ values between 9.8 and 10.5 mag. The companion thus is much brighter than old M or L dwarfs (Golimowski et al. 2004). When discussing the brightness of the companion, we have to keep in mind that there are three additional effects that may lead to large absolute magnitudes, apart from the young age of the object. The first one simply is that it could be a binary. The second is that the distance could be much smaller than 140 pc.
The third possibility is that the brightness is enhanced by accretion and a disk, as in T Tauri stars. In this respect it is interesting to note that objects of similar age and spectral type often have disks and show signs of accretion. Typical accretion rates are about $10^{-11}\,M_\odot\,\mathrm{yr}^{-1}$ (Liu, Najita, Tokunaga 2003; Natta et al. 2004; Mohanty et al. 2004a; Mohanty et al. 2005a; Muzerolle et al. 2005). Clear signs of accretion are observed even down to the planetary-mass regime at young ages (Barrado y Navascués 2002). The fact that we do not see the Brγ line in emission does not speak against the accretion hypothesis, as the flux of this line is correlated with the accretion rate, and at $10^{-11}\,M_\odot\,\mathrm{yr}^{-1}$ we do not expect to see it (Natta et al. 2004). The accretion hypothesis is further supported by the fact that objects with late-M spectral types in Taurus have $K_s - L'$ colours up to 1.2 mag and absolute luminosities of $M_K = 6$ to 7 and $M_{L'} \sim 6.0$. The large luminosities and red colours of these objects are usually interpreted as being caused by disks and accretion (Liu, Najita, Tokunaga 2003; Luhmann 2003). The absolute magnitudes of the companion of AB Pic, $M_J = 12.8^{+1.0}_{-0.7}$, $M_H = 11.3^{+1.0}_{-0.7}$, $M_K = 10.8^{+0.9}_{-0.7}$, are also quite similar to those of the companion of GQ Lup. Thus, the companion of GQ Lup is quite normal for an object of its age, and we should keep in mind that it is likely that there is a disk, and accretion.

Comparing the spectrum with GAIA-dusty models Up to now we have compared the spectrum of the companion of GQ Lup with spectra of old brown dwarfs, which have log(g) ∼ 5.0. Thus, one may wonder whether this causes a problem for the determination of the spectral type. In order to derive $T_{eff}$ it would be better to compare the observed spectrum with spectra of different log(g). The only way to do this is to compare the observed spectrum with model calculations; for this we use the GAIA-dusty models. Fig. 4 shows the flux-calibrated spectrum together with two models, both calculated for a temperature of 2900 K, one for log(g) = 0 and the other for log(g) = 4.0. While the model with log(g) = 4.0 nicely reproduces the $^{12}$CO lines and the NaI doublet at 2.205 and 2.209 µm, it does not fit the H$_2$O band in the spectrum. Clearly, the object must be cooler than this. Also, if $T_{eff}$ were 2900 K, the radius of the object would be ∼1.0 $R_{Jupiter}$, which does not seem plausible for a very young low-mass object.

Fig. 4. Flux-calibrated spectrum of the companion of GQ Lup. The thick line is the observed spectrum, the thin lines are models calculated for $T_{eff}$ = 2900 K and log(g) = 0 and log(g) = 4.0. Clearly, these models do not fit the data: the object must be cooler than that.

Fig. 5 shows the flux-calibrated spectrum together with models calculated for a $T_{eff}$ of 2000 K. Judging just from the shape of the spectrum, the models almost perfectly match the observed spectrum; the fit seems to be better for the two models with log(g) = 2.0 and log(g) = 4.0. We can make this comparison a little more quantitative. However, given the cross-talk between log(g) and $T_{eff}$, and given that only a spectrum with a resolution of $\lambda/\Delta\lambda = 700$ is available, the currently achievable accuracy of the determination of log(g) and $T_{eff}$ is rather limited. We find that $T_{eff}$ values in the range between 1800 and 2400 K and log(g) values between 1.7 and 3.4 give good fits, in excellent agreement with the previous temperature estimate.
However, as can easily be seen, the $^{12}$CO lines are always a factor of two deeper in the model than in the spectrum. If we assume that this difference is caused by veiling due to the presence of the disk, the radius of the companion would be 1.2 to 1.3 $R_{Jupiter}$. If we assume that there is no veiling, the object would have a radius of 1.7 to 1.8 $R_{Jupiter}$. It is interesting to note that the depth of the $^{12}$CO lines in the spectrum of the companion of GQ Lup is the same as in the case of the companion of AB Pic. This means that either both have the same veiling, or the $^{12}$CO lines in the models are too deep (Fig. 2).

Putting the object into perspective The problem in assigning a mass to the companion is that there is not a single object with an age of about one Myr and such a late spectral type for which the mass has been determined directly. Mohanty et al. (2004b) attempted to do this by deriving the log(g) and $T_{eff}$ values for late-type objects in USco. These objects have an age of about 5 Myr. For the analysis they used spectra with $\lambda/\Delta\lambda = 31\,000$ in the wavelength range between 6400 and 8600 Å. For USco 128 and USco 130, which have spectral types of M7 and M7.5, they find log(g) values of 3.25 (Mohanty et al. 2004b; Mohanty, Jayawardhana, Basri 2004c). With these values, they find masses for these objects of 9 to 14 $M_{Jupiter}$. However, during this meeting it was mentioned by the authors that the log(g) values are possibly too small by 0.5 dex (Mohanty 2005b), which would increase the masses of these objects to $\geq 20\,M_{Jupiter}$. In any case, the mass of the companion of GQ Lup must be lower than that of USco 128 and USco 130, as it has a later spectral type and is younger than these (Fig. 6). Because the companion of GQ Lup has the same spectral type as the companion of AB Pic but is younger, it must have a lower mass than it. Given the cross-talk between log(g) and $T_{eff}$, and given that we have a spectrum with a resolution of only $\lambda/\Delta\lambda = 700$, it is currently not possible to constrain log(g) sufficiently well to give a mass. For a radius of 1.2 to 1.3 $R_{Jupiter}$, a log(g) of $\leq 3.7$ would imply a mass $\leq 13\,M_{Jupiter}$. Similarly, if we assume that there is no veiling, a log(g) of $\leq 3.4$ would imply a planetary mass. The problem with using evolutionary tracks for objects at very young ages is that the brightness and temperature of the objects depend on the accretion history. This means that, in principle, the evolutionary tracks from Burrows et al. (1997) and Baraffe et al. (2002) should not be used at such a young age. However, it is still worthwhile to have a look at these in order to get an idea. Although an isochrone for $10^6$ years is not even shown in Burrows et al. (1997), the $3 \times 10^6$-year isochrone leads to a mass of 3 to 9 $M_{Jupiter}$ for a $T_{eff}$ between 1800 and 2400 K. Similarly, we read off a mass between 3 and 16 $M_{Jupiter}$ from Fig. 2 in Baraffe et al. (2002). Burrows et al. (1997) and Baraffe et al. (2002) also give the luminosity of objects at different ages, and we may also try to use this result for estimating the mass. According to Golimowski et al. (2004), the bolometric correction $BC_K$ is 3.17 ± 0.06 and 3.38 ± 0.06 for objects with spectral types of M9 and L4V, respectively.

Fig. 6. Comparison with the USco objects of Mohanty et al. (2004b) and Mohanty, Jayawardhana, Basri (2004c). The companion of GQ Lup must have a mass lower than that of USco 128 and USco 130, because it has a later spectral type and is younger. In these papers the authors give masses of 9 to 14 $M_{Jupiter}$ for USco 128 and USco 130; however, as mentioned at this conference, new values of the oscillator strengths of the TiO lines imply higher masses.
With $M_K = 7.4 \pm 0.1$, this gives $M_{bol} = 10.7 \pm 0.2$, or $\log(L/L_\odot) = -2.38 \pm 0.08$, assuming a distance of 140 pc (that is, not taking the error of the distance into account) and assuming that there is no contribution from the disk or accretion. If we assume such a contribution, the luminosity goes down to $\log(L/L_\odot) = -2.7$. If we further assume that the distance is only 100 pc instead of the canonical 140 pc, we obtain only $\log(L/L_\odot) = -3.0$. For these three assumptions we derive masses of about 20, 15 and 7 $M_{Jupiter}$ using Burrows et al. (1997), for the three hypotheses respectively. Using Baraffe et al. (2002) we find values of about 30, 15 and 10 $M_{Jupiter}$. Models which take the formation of the objects into account are now in progress. Hubickyj, Bodenheimer, and Lissauer (2004) model the formation of giant planets via the accretion of planetesimals and the subsequent capture of an envelope from the solar nebula gas. They show that for a short time a massive planet can be very bright. Unfortunately, no evolutionary tracks giving $T_{eff}$ are shown. Evolutionary tracks for GQ Lup and its companion calculated by Wuchterl were presented in Neuhäuser et al. (2005) and at this conference (see Wuchterl, these proceedings). These tracks give masses between 1 and 2 $M_{Jupiter}$ for the companion of GQ Lup.

The future As mentioned above, the big problem is that there is not a single object with an age of about one Myr and such a late spectral type for which the mass has been determined directly. For deriving the mass by measuring log(g) and $T_{eff}$, spectra with a resolution of $\lambda/\Delta\lambda \geq 30\,000$ are required. Because of the problems with the TiO lines in the optical, such an experiment is better carried out at infrared wavelengths; the CO lines in the K-band could be used, for instance. However, because of the additional complication that there could be veiling, it is necessary to observe not only these lines but a much larger number of lines. While CRIRES will give the required spectral resolution, 9 settings are required to cover the J-band, 7 settings for the H-band, and also 7 settings for the K-band. Getting the required data with CRIRES would be time-consuming, to say the least. Such a project thus is only feasible if an instrument like NAHUAL (see Martín et al., this conference) is used. The other possibility is to determine masses directly in a binary system. While this has not been achieved yet, it is certainly the way to go.
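The luminosity arithmetic used in this section is straightforward to verify. A back-of-the-envelope sketch (our illustration; it assumes the usual solar bolometric magnitude $M_{bol,\odot} = 4.74$ and a mid-range bolometric correction $BC_K \approx 3.3$):

```python
# Bolometric luminosity from M_K and BC_K; standard relations, assumed M_bol_sun.
import math

M_BOL_SUN = 4.74  # assumed solar bolometric magnitude

def log_luminosity(abs_k: float, bc_k: float) -> float:
    """log10(L/L_sun) from absolute K magnitude and bolometric correction."""
    m_bol = abs_k + bc_k
    return (M_BOL_SUN - m_bol) / 2.5

# M_K = 7.4 and BC_K ~ 3.3 give M_bol ~ 10.7 and log(L/L_sun) ~ -2.38.
print(f"log(L/L_sun) = {log_luminosity(7.4, 3.3):.2f}")

# Shrinking the distance from 140 pc to 100 pc makes M_K fainter by
# 5*log10(140/100) ~ 0.73 mag, i.e. lowers log(L/L_sun) by ~0.29 dex,
# which is how the text arrives at the -3.0 value.
print(f"distance correction: {5.0 * math.log10(140.0 / 100.0) / 2.5:.2f} dex")
```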
2019-04-14T01:44:20.726Z
2005-10-28T00:00:00.000
{ "year": 2005, "sha1": "a949b4dc9837f41e0bc870873f64d7e0c12bfd54", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a949b4dc9837f41e0bc870873f64d7e0c12bfd54", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
119144721
pes2o/s2orc
v3-fos-license
Entropy-driven cutoff phenomena

In this paper we present, in the context of Diaconis' paradigm, a general method to detect the cutoff phenomenon. We use this method to prove cutoff in a variety of models, some already known and others that have not yet appeared in the literature, including a chain which is non-reversible w.r.t. its stationary measure. All the given examples clearly indicate that a drift towards the appropriate quantiles of the stationary measure can be held responsible for this phenomenon. In the case of birth-and-death chains this mechanism is fairly well understood; our work is an effort to generalize this picture to more general systems, such as systems having stationary measure spread over the whole state space or systems in which the study of the cutoff may not be reduced to a one-dimensional problem. In those situations the drift may be looked for by means of a suitable partitioning of the state space into classes; using a statistical mechanics language, it is then possible to set up a kind of energy-entropy competition between the weight and the size of the classes. Under the lens of this partitioning one can bring the mentioned drift into focus and prove cutoff with relative ease.

Introduction and Main Results

In this paper we present sufficient conditions for a family of finite ergodic Markov chains to exhibit cutoff. Roughly speaking, the cutoff phenomenon is an abrupt convergence of a Markov chain to its equilibrium distribution. The detailed description of the cutoff phenomenon is given by means of two quantities, the cutoff-time and the cutoff-window, the latter being much smaller than the former. For an overview of the cutoff phenomenon we refer the reader to the review paper by Diaconis [9] and the book by Levin, Peres and Wilmer [6]. Our main results, Theorem 1.2 and its corollary, identify with much clarity the cutoff-time as the expected value of a certain hitting time, and for the first time in the literature such a hitting time is related to entropy considerations, see Section 1.2 below. Corollary 1.3 also gives evidence of the nature of the cutoff-window, which is in turn related to the standard deviation of the hitting time mentioned above and/or to the mixing features of the chain. The level of generality of the key results makes it possible to use statistical-mechanics-based ideas to prove cutoff for a variety of models known in the literature, such as the Coupon Collector, Top-in-at-random, the Ehrenfest Urn, the random walk on the hypercube and the mean-field Ising model. Furthermore, we prove cutoff for a couple of one-parameter families of random walks, partially biased (i.e. with drift) and partially diffusive, whose peculiar feature is to have a cutoff-window of different order depending on the parameter. It is worth noticing that the first of those families is an example of a non-reversible chain exhibiting cutoff (see Section 3.4). Section 1.1 defines the structure of our study, Section 1.2 gives some of the ideas behind the main results and draws a comparison with previous approaches, Section 1.3 states our key theorems, and Section 1.4 examines them and explains the hypotheses. All the proofs are deferred to Section 2. In Section 3 we discuss the application of our results to the models mentioned above.
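Before setting up the formal framework, the following minimal numerical sketch (our illustration, not code from the paper) shows the phenomenon for the biased random walk of Figure 1 below: the distribution $\mu^t$ is evolved exactly by matrix-vector products, the stationary measure $\pi(i) \propto 3^i$ follows from detailed balance, and the drop of $d_{TV}(\mu^t, \pi)$ from 1 to 0 sharpens around $t \approx 3n$ as $n$ grows.

```python
# Exact evolution of mu^t and d_TV(mu^t, pi) for the biased walk of Figure 1.
import numpy as np

def walk_matrix(n: int) -> np.ndarray:
    """Transition matrix on {0,...,n}: right w.p. 1/2, left w.p. 1/6, lazy otherwise."""
    P = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        left = 1.0 / 6.0 if i > 0 else 0.0
        right = 0.5 if i < n else 0.0
        if i > 0:
            P[i, i - 1] = left
        if i < n:
            P[i, i + 1] = right
        P[i, i] = 1.0 - left - right  # 1/3 in the bulk, larger at the two ends
    return P

def tv_curve(n: int, tmax: int) -> np.ndarray:
    """Total variation distance d_TV(mu^t, pi) for t = 0, ..., tmax-1."""
    P = walk_matrix(n)
    pi = 3.0 ** np.arange(n + 1)  # detailed balance: pi(i+1)/pi(i) = (1/2)/(1/6) = 3
    pi /= pi.sum()
    mu = np.zeros(n + 1)
    mu[0] = 1.0                   # start from the left end of the segment
    curve = np.empty(tmax)
    for t in range(tmax):
        curve[t] = 0.5 * np.abs(mu - pi).sum()
        mu = mu @ P
    return curve

# The drift is 1/2 - 1/6 = 1/3 per step, so the relevant right end is hit
# around t ~ 3n, and the transition of d_TV becomes steeper as n grows.
for n in (50, 100, 200):
    curve = tv_curve(n, 6 * n)
    print(n, int(np.argmax(curve < 0.5)))  # rough cutoff-time estimate
```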
Framework and notation

In what follows we will consider families of finite ergodic Markov chains, that is, sextets of the form $\{\Omega_n, X_n^t, P_n, \pi_n, \mu_n^t, \mu_n^0\}$, where $\Omega_n$ is the finite state space of the $n$-th chain $X_n^t$, which has transition matrix $P_n$ and unique stationary measure $\pi_n$. The symbols $\mu_n^0$ and $\mu_n^t$ stand for the initial distribution of the $n$-th chain and its probability distribution after $t$ steps. The time $t$ is a discrete quantity. For brevity we will refer to such families simply as families of Markov chains, omitting the expression "finite ergodic" throughout the whole paper.

Remark 1.1. Definition 1.1 was first introduced in [12]. Although there exist equivalent alternative definitions of cutoff (see [3], [1] and [8]), we prefer to work with the one given, for it leaves us control over the cutoff-window. As mentioned above, there exists a connection between the cutoff-time and the expectation of a hitting time. That connection can be easily pointed out if we think of the total variation distance between $\mu_n^t$ and $\pi_n$ (which, in principle, could be computed at any given time) as a random variable, or better as a deterministic object computed at a stochastic time. This idea motivates the following

Definition 1.2. Given a random variable $\xi$, we define the total variation distance at time $\xi$ as the following r.v.

$$d_{TV}\big(\mu_n^\xi, \pi_n\big) = \sum_{t \in \mathbb{Z}} d_{TV}\big(\mu_n^{t^+}, \pi_n\big)\, \mathbf{1}_{\{\xi = t\}} \qquad (1.5)$$

where $t^+ = \max\{0, t\}$.

Figure 1: Biased random walk on a segment. The transition probabilities are $P_{i,i-1} = \frac{1}{6}$, $P_{i,i} = \frac{1}{3}$ and $P_{i,i+1} = \frac{1}{2}$. The curves refer to different values of $n$, the length of the segment.

When $\xi$ takes values in $[-a, +\infty)$, with $a \in \mathbb{R}^+$, this definition is equivalent to

$$d_{TV}\big(\mu_n^\xi, \pi_n\big) = \sum_{t \ge -a} \Big[ d_{TV}\big(\mu_n^t, \pi_n\big)\, \mathbf{1}_{\{\xi = t,\, \xi \ge 0\}} + d_{TV}\big(\mu_n^0, \pi_n\big)\, \mathbf{1}_{\{\xi = t,\, \xi < 0\}} \Big] \qquad (1.5a)$$

We need this definition because in the statements of the key theorems we will consider the expectation of (1.5a) at the stochastic time $\zeta - a$, where $\zeta \ge 0$ is a hitting time. This is a natural consequence of our aim to keep track of the cutoff-window. The expectation of (1.5a) can be computed as

$$E\Big[d_{TV}\big(\mu_n^\xi, \pi_n\big)\Big] = \sum_{t \ge 0} d_{TV}\big(\mu_n^t, \pi_n\big)\, P(\xi = t,\, \xi \ge 0) + d_{TV}\big(\mu_n^0, \pi_n\big)\, P(\xi < 0) \qquad (1.6)$$

Although the condition $\xi \ge 0$ could be dropped in the first sum of (1.6), we prefer to keep it for notational purposes that will become clear in the proof of Theorem 1.2.

Cutoff-times and hitting-times

Theorem 1.2 and its Corollary 1.3 bring to light the link between the cutoff phenomenon and the hitting of the relevant part of the state space $\Omega_n$. "Relevant part" means the subset of the state space where the stationary distribution $\pi_n$ is mostly concentrated, see equation (1.16) below. This seems quite a natural approach when we realize that nearly every chain known to exhibit cutoff hits the relevant part of the state space in a quasi-deterministic way, that is, the hitting time $\tau_n$ of such a relevant part satisfies the following limit:

$$\lim_{n \to \infty} \frac{\sigma[\tau_n]}{E[\tau_n]} = 0 \qquad (1.7)$$

where $\sigma[\tau_n]$ is the standard deviation of $\tau_n$. It is relatively easy to prove a limit as in (1.7) whenever the chain presents a drift towards the relevant part of the state space. In Section 3 we present a rich selection of examples of applications of our theorems as well as a comparison with the existing literature. The picture of a quasi-deterministic hitting we have described so far holds as well for systems with uniform stationary measure, for which the relevant part of the state space would be $\Omega_n$ itself.
As a matter of fact, if we desist from the whole description of such a chain and look into a suitable projection, then we may find that the original stationary distribution is no longer uniform. The projected stationary distribution $\nu_n(x)$ is indeed proportional to the number of states $i \in \Omega_n$ which correspond to $x$ according to the equivalence relation we used to project the original chain. Consequently, since $-\nu_n(x)\log\nu_n(x)$ is the contribution to the entropy of $\nu_n$ given by the $x$-th equivalence class, we have that the relevant part of the state space is composed of the classes providing the leading contribution to the entropy. In these cases the drift mentioned above is therefore supplied by entropic considerations; we will return to this point later on in Section 3. With respect to what we have said above, Corollary 1.3 then represents a possible bridge between two classes of Markov chains exhibiting cutoff: the first made up of chains having stationary measure concentrated in a small subset of the state space, like birth-and-death chains with drift, and the second composed of those chains with stationary measure uniform or spread over $\Omega_n$, like the random walk on the hypercube, many card-shuffling models and some high-temperature statistical mechanics models. The idea of relating cutoff with the hitting of the appropriate quantiles of the stationary distribution is already present in the literature, see [1], [5], [8] and [3]. In [1] and [5] the cutoff is completely characterized for the special case of birth-and-death chains, in total variation and in separation distance respectively. A discussion of the results in [1] is deferred to Section 1.4, after we have stated our main theorems. With respect to [8] and [3], our approach allows the study of cutoff phenomena in a context closer to the classical Diaconis paradigm. In particular, with respect to the former reference we define cutoff in a finite configuration space and consequently we have precise control of the cutoff-window. With respect to the latter, we will show in Sections 1.4 and 3.5 that our approach to the problem produces a clearer understanding of the role of the drift in cutoff phenomena.

Key results

In this first theorem, which will be the main ingredient of the proof of Theorem 1.2, we relate the cutoff phenomenon to systems having an abrupt convergence to equilibrium at a stochastic time which is quasi-deterministic in the sense of (1.7).

Theorem 1.1. Let $\{\Omega_n, X_n^t, P_n, \pi_n, \mu_n^t, \mu_n^0\}$ be a family of Markov chains, $\{\tau_n\}$ a family of non-negative random variables with finite expected value $E_n = E[\tau_n]$ and standard deviation $\sigma_n = \sigma[\tau_n]$, and $\{\delta_n\}$ a sequence of positive numbers, such that conditions (1.8)-(1.11) hold, where $f$ and $g$ are two functions tending to 0 as $\theta \to \infty$. Then the family exhibits cutoff with $a_n = E_n$ and $b_n = \delta_n + \sigma_n$.

Before we move to the statement of Theorem 1.2 we need to introduce some tools.

Definition 1.3. We define a family of nested subsets as a sequence $\{A_{n,\theta}\}_{\theta \ge 1}$ satisfying properties (1.12)-(1.15).

Definition 1.4. Given a family of nested subsets, we shall say that $\pi_n$ is $h$-concentrated on $A_{n,\theta}$ if there exists a function $h(\theta)$ tending to zero as $\theta \to \infty$ such that, definitively as $n \to \infty$,

$$\pi_n\big(\bar{A}_{n,\theta}\big) < h(\theta) \qquad (1.16)$$

where $\bar{A}_{n,\theta} = \Omega_n \setminus A_{n,\theta}$. Finally, define $\zeta_n^\theta = \min\{t \ge 0 : X_n^t \in A_{n,\theta}\}$, the hitting time of $A_{n,\theta}$; note that $\zeta_n^\theta \ge \zeta_n^{\theta'}$ if $\theta \le \theta'$. We are now ready to state the main result of this paper.

Theorem 1.2. Let $\{\Omega_n, X_n^t, P_n, \pi_n, \mu_n^t, \mu_n^0\}$ be a family of Markov chains.
Suppose that $\mu_n^0$ is such that there exists a family of nested subsets $\{A_{n,\theta}\}_{\theta \ge 1} \subset \Omega_n$ with properties (1.17)-(1.20), and that there exists a sequence of positive integers $\{\Delta_n\}$ satisfying (1.21). Then there exists a function $f(\theta)$, tending to 0 as $\theta \to \infty$, such that

$$E\Big[d_{TV}\big(\mu_n^{\zeta_n^1 - \theta\delta_n}, \pi_n\big)\Big] \ge 1 - f(\theta) \qquad (1.22)$$

where

$$\delta_n = 2\big(\Delta_n + \sigma[\zeta_n^1]\big) \qquad (1.23)$$

A relatively easy consequence of Theorem 1.2 is the following

Corollary 1.3. Assume that all the hypotheses of Theorem 1.2 hold for a given family of Markov chains. In addition, suppose that, given two copies $Z_n^t$ and $W_n^t$ of the $n$-th chain of the family, there exists a coupling $(Z_n^t, W_n^t)$ satisfying (1.24)-(1.26). Then the family exhibits cutoff with $a_n = E[\zeta_n^1]$ and $b_n$ of order $\delta_n$.

Theorem 1.2 identifies a general structure that underlies a class of systems exhibiting cutoff: those with stationary measure concentrated in a small region of the state space ($A_{n,\theta}$ in the theorem, see (1.14)-(1.16) above). Although widely general, Theorem 1.2 is most useful when we face a family of Markov chains $X_n^t$ which is, or can be projected onto, a family of birth-and-death chains. In those cases we have indeed closed formulas to deal with the expectation and variance of the various hitting times, see for example [3] or [13]. The non-reversible random walk on a cylindrical lattice, presented in Section 3.4, shows that the application of Theorem 1.2 is not restricted solely to those models where the study of the cutoff can be completely reduced to a one-dimensional problem. Total variation cutoff was completely settled in [1] for the class of birth-and-death chains; in particular, it is shown therein that we have cutoff if and only if $t^{(n)}_{REL} = o\big(t^{(n)}_{MIX}\big)$, where $t^{(n)}_{REL}$ and $t^{(n)}_{MIX}$ are respectively the relaxation time and the mixing time of the $n$-th chain. It should be pointed out, however, that in some important models of statistical mechanics, namely the Ehrenfest urn and the magnetization chain for the mean-field Ising model, a non-optimal $\sqrt{t^{(n)}_{REL} \cdot t^{(n)}_{MIX}}$ window order is found. Our approach, conversely, provided a suitable definition of the $A_{n,\theta}$'s (see Remark 1.2 below), is always capable of delivering the right cutoff-window order. Moreover, in most situations the computation of $E[\zeta_n^\theta]$ and $\sigma[\zeta_n^\theta]$ happens to be less challenging than the computation of the spectral gap of the chain. Within the framework of birth-and-death chains, $\pi_n$ being concentrated in $A_{n,\theta}$ is equivalent to a drift of the chain towards $A_{n,\theta}$ itself; such a drift is likely to ensure

$$\lim_{n \to \infty} \frac{\sigma[\zeta_n^\theta]}{E[\zeta_n^\theta]} = 0 \qquad (1.29)$$

Limit (1.29) means in turn that for $n$ sufficiently large the chain will hit $A_{n,\theta}$ in a quasi-deterministic way, that is, the probability of $X_n^t$ being in $A_{n,\theta}$ will suddenly rise from 0 to 1 in a window of size $\sigma[\zeta_n^\theta]$ centered on $E[\zeta_n^\theta]$. This means that, if the system was started outside $A_{n,\theta}$, it is undergoing the first part of the cutoff curve, i.e. it satisfies (1.2). If the system relaxes inside $A_{n,\theta}$ in a time interval that is comparable with $\sigma[\zeta_n^\theta]$, then we would experience cutoff with a window of the order of $\sigma[\zeta_n^\theta]$. It is also possible that the time $t_{mix}$ needed for the system to relax inside $A_{n,\theta}$ is larger than $\sigma[\zeta_n^\theta]$ but smaller than $E[\zeta_n^\theta]$, implying then cutoff with a cutoff-window of the order of $t_{mix}$. This is the case of the Ehrenfest Urn and the Random Walk on the Hypercube, which we present in detail in Section 3.3. The technical problem we had to face in designing Theorem 1.2 is the fact that $E[\zeta_n^\theta]$ is not a good candidate for the cutoff-time, $a_n$, being $\theta$-dependent.
This is the reason why we preferred to split the diffusion inside $A_{n,\theta}$ into two parts: the hitting of $A_{n,1}$, a subset of $\Omega_n$ such that $\pi_n(A_{n,1})$ is non-vanishing in $n$, and the diffusion time once $A_{n,1}$ is reached, see (1.26).

Remark 1.2. There is no universal choice for the family $A_{n,\theta}$; multiple definitions are possible, and each of them indirectly affects the size of the cutoff-window. Remark 3.8 in Section 3.5 shows a choice for the $A_{n,\theta}$'s which leads to a non-optimal cutoff-window. The applications presented in Section 3 also suggest the key to obtaining an optimal cutoff-window: design the family $A_{n,\theta}$ in such a way that the expected travelling time $E[\zeta_n^1 - \zeta_n^\theta]$ is of the same order in $n$ as the time $\theta\delta_n$ necessary to achieve equilibrium starting anywhere in $A_{n,1}$ (cf. Corollary 1.3). From the discussion in this section, and in particular from (1.29), we can take an energy-landscape point of view and visualize our system as a single well, where the height of the energy landscape at a given point $i$ increases with $\pi_n^{-1}(i)$. Consider for example the Ehrenfest Urn, presented in Section 3.3; requiring that $E[\zeta_n^1 - \zeta_n^\theta] = O(\delta_n)$ corresponds to saying that, once the chain has reached the border of $A_{n,\theta}$, it falls to the bottom of the well (that is, $A_{n,1}$) in a time which is also sufficient to diffuse inside the well itself.

Remark 1.3. Note that, in the case of birth-and-death chains, hypothesis (1.19) is trivial.

Remark 1.4. We would like to emphasize that the task of showing cutoff behavior is usually accomplished by means of a coupling argument. In most situations the coupling argument needs to be sufficiently fine, since the desired estimates are to be performed at times $a_n \pm \theta b_n$, i.e. with two very different time scales involved. In our approach this time-scale issue is resolved when we split the study of the cutoff into two phases, namely the hitting of $A_{n,1}$ and the subsequent evolution to equilibrium. We will see later on in the applications (Section 3) that within our framework only very basic and intuitive couplings are demanded.

Proof of Main Results

In the following we will make intensive use of two easy and well-known facts that are worth recalling briefly before we proceed with the proof of the key results.

Lemma 2.1. (Cantelli's inequality) Let $Y$ be a random variable with finite mean $\mu$ and finite variance $\sigma^2$. Then, for any $\theta \ge 0$,

$$P(Y - \mu \ge \theta\sigma) \le \frac{1}{1 + \theta^2}$$

Lemma 2.2. Let $X(t)$ be a discrete Markov chain with finite state space. Then the total variation distance from stationarity is a non-increasing function of $t$.

A proof of Lemma 2.2 may be found in [7] and a proof of Lemma 2.1 in [14]. Now we can start with the proof of the key results.

Proof of Theorem 1.1. For brevity of notation set $D(t) \equiv d_{TV}(\mu_n^t, \pi_n)$ and $\xi \equiv \tau_n - \theta\delta_n$; note that according to the latter definition $E[\xi] - \theta\sigma_n = a_n - \theta b_n$. Then, using (1.6), we write $E[D(\xi)]$ as in (2.3) and estimate the sum therein, where from (2.4) to (2.5) we have used Lemma 2.2 to bound the second sum.

Proof of Theorem 1.2. Fix $\theta > 1$ arbitrarily and consider $n$ sufficiently large to ensure (1.14). As in the proof of Theorem 1.1, set $D(t) = d_{TV}(\mu_n^t, \pi_n)$ and $\xi = \zeta_n^1 - \theta\delta_n$. By (1.6) and (1.4) we have, for $n$ sufficiently large, the expansion (2.25), and we can estimate the first term of the sum in (2.25) by virtue of Lemma 2.1. By (1.18), (1.20) and (1.23) we have that, definitively as $n \to \infty$, $P(\xi \ge 0)$ is greater than any function of $\theta$ tending to one, say $1 - \frac{1}{\theta}$.
Thus for $n$ sufficiently large the corresponding estimate holds. Next consider the remaining term of (2.25). Two scenarios may now occur. In the former case, $\sigma[\zeta_n^\theta]$ is also $o(\Delta_n)$ by virtue of (1.19); therefore we can rewrite the first term of (2.33) accordingly. In the latter case, $\sigma[\zeta_n^1]$ satisfies an equation of the kind of (1.21), as does $\Delta_n$.

Proof of Corollary 1.3. We construct a coupling $(X_n^t, Y_n^t)$ of $\mu_n^t$ and $\pi_n$ as follows:

1. set $X_n^0 \sim \mu_n^0$ and $Y_n^0 \sim \pi_n$, and define $\tilde\gamma_n = \min\{t \geq 0 : X_n^t = Y_n^t\}$, the first coalescence time;

2. for $0 \leq t \leq \zeta_n^1$: (a) $X_n^t$ and $Y_n^t$ evolve independently until $\tilde\gamma_n$; if $\tilde\gamma_n < \zeta_n^1$, then for all $t > \zeta_n^1$ run the coupling of $Z_n^t$ and $W_n^t$ and set $(X_n^t, Y_n^t) = (Z_n^t, W_n^t)$.

We have built the coupling $(X_n^t, Y_n^t)$ in this fashion so as to have the following property: given that $\zeta_n^1 = T < \infty$, the corresponding identity holds for all $z_0 \in A_{n,1}$, where, according to the notation introduced in Corollary 1.3, $\gamma_n$ is the first coalescence time of $Z_n^t$ and $W_n^t$. The idea is then to use the Coupling Lemma on the coupling $(X_n^t, Y_n^t)$, using the information we already possess about $(Z_n^t, W_n^t)$, that is, line (1.26). So let us take an arbitrary $M$. By means of (1.26) and (2.39) we have the desired bound for $n$ sufficiently large. Finally, passing to the expectation in (2.45), by means of (2.38) we obtain the conclusion. Identifying $\tau_n$ with $\zeta_n^1$ we have obtained (1.11) of Theorem 1.1, while Theorem 1.2 gives us (1.8), the definition of $\delta_n$ via (1.23), (1.9) and (1.10). Therefore the family of Markov chains exhibits cutoff with $a_n = \mathbb{E}[\zeta_n^1]$.

Remark 2.2. Assume now that the state space $\Omega_n$ is endowed with a nearest-neighborhood binary relation. Such a relation naturally defines over $\Omega_n$ a graph $G(\Omega_n, E)$, and therefore a metric $d : \Omega_n \times \Omega_n \to \mathbb{N}$. For any event $A \subseteq \Omega_n$ it is then reasonable to define the set of the extremal points of $A$ as its border $\partial A$. If the family of Markov chains is a nearest-neighbor dynamics, that is, $P_{ij} = 0$ whenever $d(i,j) > 1$, we know for sure that $X_n^t$ cannot jump inside $A_{n,1}$ but is going to hit it on its border, that is, $X_n^{\zeta_n^1} \in \partial A_{n,1}$. Thus we can ask less than (1.26) of the coupling $(Z_n^t, W_n^t)$, namely condition (1.26a). Also, it is by no means infrequent to face Markov chains where the state space $\Omega_n$ can be put in one-to-one correspondence with a finite subset of $\mathbb{Z}$; then the graph $G(\Omega_n, E)$ defined above is just a discrete segment, and $\partial A_{n,1}$ is composed of just two points. In those situations, depending on $\mu_n^0$, we may be able to determine which point of $\partial A_{n,1}$ will be hit by $X_n^t$, so that the max in (1.26a) would not be needed at all.

The Coupon Collector Model

The Coupon Collector Model is a pure-death chain on the state space $\Omega_n = \{0, 1, 2, \ldots, n\}$; more specifically, it is a chain with the transition rates given in (3.1). This model was introduced in [15] and is discussed in many classical probability books; see e.g. [6] and references therein. The model can easily be accommodated in our general framework, and we give an alternative description of the cutoff in this context by means of Theorem 1.1. The chain clearly has a drift towards the state 0, for it simply cannot move to the right. The equilibrium distribution is $\pi_n = \delta_{i,0}$, where $\delta_{i,j}$ is the usual Kronecker delta; the initial distribution is taken to be $\mu_n^0 = \delta_{i,n}$. The hitting time of the state 0 is $\tau_n^0$, which happens to be a strong stationary time. Thus we have (3.2) for any finite time $t$. Besides, to the leading order, $\mathbb{E}[\tau_n^0] = n\log n$ and $\sigma[\tau_n^0] = n$.
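These leading-order values are easy to check numerically. The sketch below simulates the pure-death dynamics under the standard coupon-collector rates (from state $i$ the chain moves to $i-1$ with probability $i/n$), which is our assumption for the elided display (3.1); it is an illustrative sanity check, not part of the proof.

```python
import math
import random

def hitting_time_of_zero(n: int) -> int:
    """Pure-death coupon-collector chain started at n:
    from state i, step to i-1 with probability i/n (assumed form of (3.1))."""
    state, t = n, 0
    while state > 0:
        if random.random() < state / n:
            state -= 1
        t += 1
    return t

n, trials = 500, 2000
samples = [hitting_time_of_zero(n) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials

print(f"empirical E[tau] = {mean:.0f},  n log n = {n * math.log(n):.0f}")
print(f"empirical sigma  = {var ** 0.5:.0f},  O(n) with n = {n}")
```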
By (3.2), following the same steps we made from (2.40) to (2.45), we obtain the corresponding bound for any $c \geq 0$. Next, recall that $D(t) = d_{TV}(\mu_n^t, \pi_n)$, and take $\xi = \tau_n^0 - 2\theta n$ and $A = \{0\}$; then from line (2.23) we get the first estimate, and $\sum_{t \geq 0} P(X_n^t = 0)\, P(\xi = t,\ \xi \geq 0)$ is bounded by a sum over $t$ ranging from $n\log n - 3\theta n$ to $n\log n - \theta n$. Thus, for $n$ sufficiently large, there exists a function $f(\theta)$ which tends to 0 as $\theta \to \infty$ such that the cutoff bound holds.

The Top-in-at-random model

The Top-in-at-random is a card-shuffling model first introduced in [12], and it is the first example in which the cutoff phenomenon was recognized. The state space $\Omega_n$ is the symmetric group, that is, the set of all $n!$ possible permutations of a deck of $n$ cards. The chain describing the model evolves according to the following shuffling procedure: pick the first card of the deck and insert it into the deck at a position chosen uniformly at random. The equilibrium distribution $\pi_n$ is uniform. Here we give a description of the cutoff in this case using Theorem 1.2. Given the initial permutation $\rho_0$, without loss of generality we may relabel the cards from 1 to $n$, with 1 the bottom card and $n$ the topmost one. Next, consider the sets $R_\theta$ composed of those permutations $\rho$ having the cards from 1 up to $\theta+1$ in increasing relative order. This corresponds to saying that the first rising sequence has length $l \geq \theta + 1$; see [10] for the definition of a rising sequence and its properties. To evaluate the cardinality of $R_\theta$ we use the following argument: given a permutation $\rho \in R_\theta$, keep fixed all the cards displaying a face value bigger than $\theta+1$ and permute the remaining ones in all possible ways. Call $\mathcal{P}(\rho)$ the set of such permutations; its cardinality is $(\theta+1)!$, and clearly $\mathcal{P}(\rho) \cap \mathcal{P}(\rho') = \emptyset$ if $\rho \neq \rho'$. As $\cup_{\rho \in R_\theta} \mathcal{P}(\rho) = \Omega_n$, we have obtained a bound on the cardinality of $R_\theta$. Please note that $\{\rho_0\} = R_{n-1} \subset R_{n-2} \subset \cdots \subset R_1 = \Omega_n$. Thus we define the set $A_{n,\theta} = \Omega_n \setminus R_\theta$, that is, the set of all permutations having first rising sequence of length at most $\theta$; note that (1.17) is fulfilled. Define $\zeta_n^\theta$ as the hitting time of $A_{n,\theta}$, and $\tau_n^\theta$ as the first time the card $\theta$ reaches the topmost position; $\tau_n^\theta$ can be restated as the hitting time of $B_{n,\theta} \subset A_{n,\theta}$, where $B_{n,\theta}$ is the set of all permutations in $A_{n,\theta}$ having the card $\theta$ in the topmost position. Clearly, $\zeta_n^\theta \leq \tau_n^\theta$, since $B_{n,\theta} \subset A_{n,\theta}$. It is easy to find that
$$\mathbb{E}[\tau_n^\theta] = n\log n - n\log\theta, \qquad (3.13)$$
and the corresponding estimate for $\mathbb{E}[\zeta_n^\theta]$ follows. Moreover, the variances enjoy a monotonicity property, because for all $\theta \geq 1$ we have that $\zeta_n^\theta - \tau_n^{\theta+1}$ is independent of $\tau_n^{\theta+1}$, and $\tau_n^\theta - \zeta_n^\theta$ is independent of $\zeta_n^\theta$. Hence, to the leading order in $n$, the variances behave as required. Taking $\Delta_n = n$ we find that all the hypotheses of Theorem 1.2 are satisfied. Eventually, $\zeta_n^1$ is a strong stationary time, so that (3.2)–(3.3) hold with $\tau_n^0$ replaced by $\zeta_n^1$; thus, via Theorem 1.1, the Top-in-at-random model exhibits cutoff with $a_n = n\log n$ and $b_n = O(n)$.

The Ehrenfest Urn model

The Ehrenfest Urn model is possibly the most famous model of diffusion. The cutoff phenomenon for this chain was first shown in [11]; see also the review [9] and the references therein. In this model we have two boxes containing a total of $n$ particles, each of which independently changes container with probability $\frac{1}{2n}$. If $X_n^t$ is defined as the number of balls in Urn 1, and Urn 1 contains $i$ balls, then the transition rates for the Ehrenfest chain are given by (3.19). According to (3.19), the Ehrenfest chain is a lazy birth-and-death chain on $\Omega_n = \{0, 1, \ldots, n\}$ and its stationary distribution is binomial $B(n, \frac12)$.
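The verbal description above pins the elided rates (3.19) down to the standard lazy Ehrenfest form, $P(k,k+1) = \frac{n-k}{2n}$, $P(k,k-1) = \frac{k}{2n}$, $P(k,k) = \frac12$; we treat this as an assumption rather than a quotation. Under it, the total variation distance can be evolved exactly, and the drop concentrates near $\frac12 n\log n$, the cutoff-time derived below. A minimal numerical sketch:

```python
import numpy as np
from math import comb, log

n = 200
# Lazy Ehrenfest transition matrix on {0, ..., n} (assumed form of (3.19))
P = np.zeros((n + 1, n + 1))
for k in range(n + 1):
    if k < n:
        P[k, k + 1] = (n - k) / (2 * n)
    if k > 0:
        P[k, k - 1] = k / (2 * n)
    P[k, k] = 1.0 - P[k].sum()  # holding probability 1/2

# Stationary distribution: Binomial(n, 1/2)
pi = np.array([comb(n, k) for k in range(n + 1)], dtype=float) / 2.0 ** n

mu = np.zeros(n + 1)
mu[0] = 1.0                      # start with Urn 1 empty
a_n = 0.5 * n * log(n)           # candidate cutoff-time
for t in range(1, int(2 * a_n) + 1):
    mu = mu @ P
    if t % int(a_n / 4) == 0:
        tv = 0.5 * np.abs(mu - pi).sum()
        print(f"t/a_n = {t / a_n:4.2f}   d_TV = {tv:.3f}")
```

The printed distance stays near 1 until roughly $a_n$ and collapses shortly after, in agreement with a window $b_n = O(n)$.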
Let us discuss the cutoff-time and the cutoff-window in this case using the results from Section 1.3. A good choice for the family of nested subsets is $A_{n,\theta} = \{k : |k - \tfrac n2| < \theta \tfrac{\sqrt n}{2}\}$, since $\pi_n(A_{n,\theta}^c) < \frac{1}{\theta^2}$ by Chebyshev's inequality. Suppose now that $\mu_n^0 = \delta_{i,0}$, that is, at time 0 Urn 1 is empty; plain but lengthy calculations (presented for the sake of completeness in Appendix A) give, to the leading order in $n$, the behavior of the relevant hitting times. The Lazy Ehrenfest Urn shares this feature with the mean-field Ising model, so we defer the matter to Section 3.6.2 (see in particular Remark 3.11). Eventually, we have proved that the Lazy Ehrenfest Urn exhibits cutoff with $a_n = \frac12 n\log n$ and $b_n = O(n)$.

The Lazy Random Walk on the Hypercube

In this model the state space is the $n$-dimensional hypercube $\Omega_n = \{0,1\}^n$; each state can then be represented as a binary $n$-tuple $x = (x_1, \ldots, x_n)$. Without loss of generality, let the chain be at the vertex $(0, \ldots, 0)$ at time zero; then at each step we flip, with probability $\frac12$, a component of the tuple chosen uniformly at random. This corresponds to the following update procedure: at each step we choose one of the $n$ possible directions in space and move along it with probability $\frac12$, while with probability $\frac12$ we stand still. The equilibrium distribution is clearly the uniform one. The standard treatment of this model is to project it onto a birth-and-death chain by means of the equivalence relation $x \sim y \iff \|x\|_1 = \|y\|_1$, where $\|x\|_1 = \sum_i x_i$ is the Hamming weight of the vertex $x$. The quotient state space $\Omega_n/\!\sim$ can be put into one-to-one correspondence with the state space $\Omega_n' = \{0, 1, \ldots, n\}$ of a new chain $X_n'^{\,t}$, having transition rates given by (3.19) and equilibrium distribution binomial $B(n, \frac12)$. Let us denote by $\mu_n'^{\,t}$ the evolved measure after $t$ steps of the projected chain $X_n'^{\,t}$ and by $\pi_n'$ its equilibrium distribution; then it is a standard task to show that
$$d_{TV}\big(\mu_n^t, \pi_n\big) = d_{TV}\big(\mu_n'^{\,t}, \pi_n'\big). \qquad (3.23)$$
Thus the Lazy Random Walk on the Hypercube exhibits cutoff with the same cutoff-time and cutoff-window as the Lazy Ehrenfest Urn.

Remark 3.1. Since $\pi_n$ is uniform, the projected stationary distribution $\pi_n'(i)$ is clearly proportional to the number of vertices having Hamming weight equal to $i$. Therefore $\pi_n'$ is binomial and is supported, in the sense of (1.16), on $A_{n,\theta}$. As the configurations in $A_{n,\theta}$ give the leading contribution to the entropy of the distribution $\pi_n'$, we say that the system is entropy-driven towards stationarity. This drift ensures that the conditions of Theorem 1.2 hold, although the original distribution on the hypercube, being uniform, cannot provide any drift.

Non-reversible biased random walk on a cylinder

Consider a family of Markov chains $\{\Omega_n, X_n^t, P_n, \pi_n, \mu_n^t, \mu_n^0\}$ having state space with
$$|\Omega_n| = n = l \cdot m. \qquad (3.24)$$
As stated more precisely below, we are going to regard $\Omega_n$ as a cylindrical lattice of volume $n$, having height $l$ and base circumference of length $m$. The transition kernel of the $n$-th chain is $P_n$, whose entries are given by the transition probabilities in (3.25); the lattice consists of $l$ layers composed of $m$ points each. Moreover, the neighborhood structure just highlighted introduces a metric on $\Omega_n$, given by the length of the shortest path between two vertices of the graph (cf. Remark 2.2 above). Each chain of the family defined above is irreducible and aperiodic; thus there exists a unique invariant measure $\pi_n$ such that $\pi_n = \pi_n P_n$.
Since the model has an evident radial symmetry, we expect $\pi_n$ to depend on a state only through its height. Thus let us look for $\pi_n$ in the form (3.26). By the definition of $\pi_n$ and (3.25) we obtain, for $h \neq 0, l-1$, a recursion which, by virtue of (3.26), yields a two-term relation. The value of $\alpha$ to be taken is $\alpha = \frac{1-q}{q}$, since it satisfies $\pi_n = \pi_n P_n$ also for $h = 0$ and $h = l-1$. The value of the normalization constant $f_n(0)$ is found by normalization, where the last approximation holds for sufficiently large $l$. Given a state $u = (h', \varphi') \in \Omega_n$, with an abuse of notation we will denote by $h(u)$ and $\varphi(u)$ its height $h'$ and its position on the $h'$-th layer $\varphi'$, respectively. Consider now the equivalence relation that identifies any two states of equal height. The lumped chain $X_n'^{\,t}$, defined on the state space $\Omega_n' = \{0, 1, \ldots, l-1\}$ with the corresponding transition matrix entries, is a projection of $X_n^t$ according to the equivalence relation $\sim$. The stationary measure $\pi_n'(x)$ of the lumped chain is then found by summing $\pi_n(u)$ over the elements $u$ that belong to the equivalence class $[x]$; since every equivalence class (i.e. every layer) contains exactly $m$ points, $\pi_n'(x) = m\,\pi_n(u)$ for any $u \in [x]$.

Remark 3.3. The stationary measure $\pi_n'$ is obviously reversible with respect to $P_n'$, but this property does not hold for the original chain $X_n^t$, whose equilibrium measure is not reversible w.r.t. $P_n$. To see this, it suffices to take any two states $u, v \in \Omega_n$ such that $h(u) = h(v)$ and $|\varphi(u) - \varphi(v)| = 1$; then by (3.26) $\pi_n(u) = \pi_n(v)$, but according to (3.25) $P(u,v) \neq P(v,u)$.

Remark 3.4. We have introduced the lumped chain $X_n'^{\,t}$ since it can be coupled to $X_n^t$ in such a way that the two share the height coordinate. Therefore we can study the hitting time of any layer by considering a one-dimensional chain only. Nevertheless, we want to stress that the study of the cutoff phenomenon for $X_n^t$ cannot be reduced to the study of the cutoff for $X_n'^{\,t}$, since in general the identity (3.23) will not hold. Indeed, consider the initial distribution $\mu_n^0 = \delta_{u,u_0}$ with $h(u_0) = l-1$, which represents the worst-case scenario for the behavior of the total variation distance. Then (3.23) is false for any finite $t$; but, as we will see, by means of Theorem 1.2 and Corollary 1.3 it is possible to prove cutoff with relative ease. Define now the family of sets in which $A_{n,\theta}$ is the union of the $\sqrt\theta$ bottom layers, so that $A_{n,1}$ is just the bottommost layer. The hitting time $\zeta_n^\theta$ of the set $A_{n,\theta}$ has expectation and variance as displayed, where $\zeta^{i \to j}$ is the first visit time of the state $j$ starting from the state $i$, and $O_\theta(\cdot)$ means $O(\cdot)$ for any fixed value of $\theta$. To use Theorem 1.2 we want to study the behavior of these quantities in the limit $n \to \infty$; but $n = l \cdot m$, so we can let the volume of the cylinder grow by extending its height, by enlarging its diameter, or by letting both grow simultaneously. To this end, let us consider the case where
$$m = n^{\omega} \quad \text{and} \quad l = n^{1-\omega}, \qquad \omega > 0. \qquad (3.35)$$
With the usual notation take $\Delta_n = m^2 = n^{2\omega}$; this choice fulfills all the hypotheses of Theorem 1.2 (namely (1.20) and (1.21)) and eventually determines the candidate cutoff-window order. All we are left to deal with is then the existence (cf. Corollary 1.3) of a coupling $(Z_n^t, W_n^t)$ such that, with $Z_n^0$ located at a point of the bottommost layer (that is, $h(Z_n^0) = 0$) and $W_n^0 \sim \pi_n$ (i.e. $h(W_n^0) \geq 0$ and exponentially distributed), we have
$$\lim_{\theta\to\infty}\lim_{n\to\infty} P(\gamma_n > \theta\delta_n) = 0, \qquad (3.37)$$
where $\gamma_n = \min\{t \geq 0 : Z_n^t = W_n^t\}$ is the coalescence time. Consider the distance (cf.
Remark 3.2) between $Z_n^t$ and $W_n^t$. There exists a coupling $(Z_n^t, W_n^t)$, sketched for reference in Figure 2, whose listed properties include: 4. $\Phi_n^t$, the distance between the angular coordinates of the two copies, is a symmetric $r$-lazy random walk on the segment $\{0, 1, \ldots, \frac m2\}$; 5. $\Phi_n^s = 0$ for any $s \geq \gamma_n^\Phi = \min\{t \geq 0 : \Phi_n^t = 0\}$.

Figure 2: Coupling scheme; the same random update is used for both $Z_n^t$ and $W_n^t$. The two copies have the same probability to move to the upper or lower layer, except when one of the chains is on the topmost or bottommost layer. In the latter case the distance $H_n^t$ has probability $\frac q2$ of decreasing by 1, while in the former it has probability $\frac{1-q}{2}$.

From the description of our coupling, the bound on $H_n^t$ should be clear; thus, using Markov's inequality, we control the time needed for the heights to meet. Next, according to point 3 listed above and the transition probabilities of the lumped chain, we obtain the corresponding estimate, and according to point 4 listed above we control the coalescence of the angular coordinates. These lines combine to give (3.37) within the stated constraint on $\omega$; within this constraint we have cutoff, and the cutoff-window shows the following behavior: the value $\omega = \frac15$ gives the smallest cutoff-window order achievable.

Remark 3.6. The case $\omega = 0$ corresponds to increasing the cylinder volume by extending its height while keeping its base diameter fixed, and it is almost identical to a biased random walk on a segment [6, §18.2.1]. In this sense the general case $\omega > 0$ represents a non-reversible, higher-dimensional extension of the biased random walk.

A partially-diffusive random walk

Fix $\varepsilon \in (0, \frac12)$ and consider the birth-and-death chain $X_n^t$ defined on the state space $\Omega_n = \{0, 1, \ldots, n\}$ with initial position $X_n^0 = n$ and transition rates as displayed. This chain behaves like a biased random walk outside the interval $[0, n^\varepsilon]$ and like an unbiased one inside it. It is quite easy to show that this model does not satisfy the strong drift condition, which according to [3] is a sufficient condition for cutoff; see Remark 3.9 below. Using Corollary 1.3, it is easy to show that this model does in fact exhibit cutoff. The stationary distribution $\pi_n$ can be found by reversibility, where the normalization constant is $c = \frac{1}{n^\varepsilon + 2} + O\big(\frac{1}{2^n}\big)$. In order to use Theorem 1.2 it is enough to take a suitable family of nested subsets; with this choice (1.17) holds and, to the leading order in $n$, the relevant estimates follow (see Appendix B for the details of the calculations). Choosing $\Delta_n = n^{2\varepsilon}$ we verify (1.20) and (1.21); then by Remark 1.3 we know that all the hypotheses hold except possibly (1.18). Now we consider a coupling $(Z_n^t, W_n^t)$, where $Z_n^t$ and $W_n^t$ are two copies of $X_n^t$ with initial positions $Z_n^0 = n^\varepsilon$ and $W_n^0 \sim \pi_n$ respectively; provided the two chains have not yet collided, at each time we let the two copies evolve independently. Let $\gamma_n = \min\{t \geq 0 : Z_n^t = W_n^t\}$ be the coalescence time, and set $Z_n^t = W_n^t$ for any $t \geq \gamma_n$. Let $\tau_n^0 = \min\{t \geq 0 : Z_n^t = 0\}$; the last of the resulting inequalities comes from Markov's inequality. The standard deviation of $\zeta_n^1$ is $O(n^{1-\varepsilon/2})$ (see Appendix B); therefore (1.18) holds and, with respect to the coupling defined above, (1.26) follows from (3.52)–(3.55) with $t = \theta\delta_n = 2\theta\,(n^{2\varepsilon} + n^{1-\varepsilon/2})$. Thus, by means of Theorem 1.2 and Corollary 1.3, this model exhibits cutoff with cutoff-time
$$a_n = \mathbb{E}[\zeta_n^1] = \frac{2(1-\varepsilon)}{\log 2}\, n\log n \qquad (3.56)$$
and cutoff-window given by (3.57).

Remark 3.7. From (3.57) we see that the choice $\varepsilon = \frac25$ gives the smallest cutoff-window order possible.

Remark 3.8. This example shows how crucial the choice of $\{A_{n,\theta}\}$ is. One could in fact try $A_{n,\theta} = \{i : 0 \leq i \leq \theta n^\varepsilon\}$, because that scaling, linear in $\theta$, worked well in the lazy Ehrenfest chain.
This alternative definition would lead to an expected travelling time $\mathbb{E}[\zeta_n^1 - \zeta_n^\theta] = n\log\theta$ and would force $\Delta_n$ (and consequently $\delta_n$) to be of order $n$. Since $\theta n$ steps are clearly sufficient for the chain started at $n^\varepsilon$ to achieve equilibrium, we would obtain a non-optimal $O(n)$ cutoff-window.

Remark 3.9. The reason why $X_n^t$ does not satisfy the strong drift condition is that it fails the first requirement of the definition. Nevertheless, it is clear from the results in [3] that the condition $K_q > 0$ can actually be dropped if one replaces the second condition with the analogous one, where $K_q^n = \inf_{0 \leq i \leq n} q_i$. The expected value of $T$ can be computed accordingly, while $K^n$ can be bounded from below by $n^\varepsilon$; the modified condition then holds.

The mean-field Ising model

Glauber dynamics

The cutoff for the mean-field Ising model evolving under the Glauber dynamics has recently been proved in [2]. Here we give an alternative proof of the existence of the cutoff, and we evaluate the cutoff-time and the cutoff-window in terms of a hitting process by means of our Corollary 1.3. The computations needed to achieve this goal in our framework are considerably shorter. A generalization of this result to the non-symmetric case, i.e. when a constant magnetic field is added, is likely to be treatable with little effort. In the mean-field Ising model we have $n$ binary spins and a neighborhood structure given by the complete graph $K_n$. $\mathcal{X}_n = \{+1, -1\}^n$ is the set of all possible configurations. The energy of a configuration $\sigma = (\sigma_1, \sigma_2, \ldots, \sigma_n)$ is given by the Hamiltonian in (3.63). The Glauber dynamics for this model is defined as follows:

· pick a site $i \in \{1, 2, \ldots, n\}$ uniformly at random;

· update $\sigma_i$ to the value $+1$ or $-1$ with the respective heat-bath probabilities, where $S(i) = \frac1n \sum_{j \neq i} \sigma_j$ is the so-called local field.

The parameter $\beta$ has the physical meaning of the inverse temperature of the system: the higher its value, the stronger the role of the energy over the entropy in the establishment of the equilibrium states. The limiting case $\beta = 0$ coincides with the lazy random walk on the hypercube seen in Section 3.3.1: all the spins are updated independently, and they are equivalent from an energy-landscape point of view. By reversibility it is easy to show that the Markov chain defined above has a unique stationary measure, the Gibbs measure, where $Z_{n,\beta} = \sum_{\sigma'} e^{-\beta H(\sigma')}$ is the partition function. Let us now define the magnetization of a configuration as in (3.67). Please note that this is not the standard definition of magnetization, since the one just defined in (3.67) takes values in $\{-\frac n2, -\frac n2 + 1, \ldots, \frac n2 - 1, \frac n2\}$, while in general $m \in [-1, 1]$. We chose this definition because we want to reduce our system to a birth-and-death chain. We can rewrite the Hamiltonian (3.63) in terms of $m(\sigma)$, and consequently rewrite the stationary distribution and the update probabilities. Let us now define the magnetization chain, a new birth-and-death chain $X_n^t$ with state space $\Omega_n = \{-\frac n2, -\frac n2 + 1, \ldots, \frac n2 - 1, \frac n2\}$ and transition rates $p_k = P_{k,k+1}$ as displayed. Consider the Glauber chain, started, say, with an initial distribution $\lambda_n^0$ on $\mathcal{X}_n$ such that $\lambda_n^0(\sigma) = \lambda_n^0(\sigma')$ whenever $\sigma \sim \sigma'$. Along with this process take its projection, the magnetization chain, with initial distribution $\mu_n^0$ and the corresponding stationary measure $\pi_n$. It is not difficult to prove that $\lambda_n^0(\sigma) = \lambda_n^0(\sigma')$ for $\sigma \sim \sigma'$ leads to $\lambda_n^t(\sigma) = \lambda_n^t(\sigma')$ for any $t \geq 0$, which in turn implies that the two total variation distances coincide. In other words, the Glauber chain exhibits cutoff if and only if the magnetization chain does.
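The displays for the heat-bath probabilities and the magnetization rates did not survive extraction. The following reconstruction is offered only as a sketch: the heat-bath form is standard, while the explicit $p_k$, $q_k$ below are derived from it under the assumption of no extra laziness, so they satisfy the symmetry $p_k = q_{-k}$ quoted below but should be checked against the original (3.70)–(3.72).

```latex
% Heat-bath update of the chosen spin (standard Glauber form):
P(\sigma_i \to \pm 1)
  = \frac{e^{\pm\beta S(i)}}{e^{\beta S(i)} + e^{-\beta S(i)}},
\qquad
S(i) = \frac{1}{n}\sum_{j\neq i}\sigma_j .

% Induced rates of the magnetization chain: at magnetization k there are
% n/2 - k minus spins, and a minus spin sees the local field (2k+1)/n.
p_k = P_{k,k+1}
    = \frac{\frac{n}{2}-k}{n}\,
      \frac{e^{\beta(2k+1)/n}}{2\cosh\bigl(\beta(2k+1)/n\bigr)},
\qquad
q_k = P_{k,k-1}
    = \frac{\frac{n}{2}+k}{n}\,
      \frac{e^{-\beta(2k-1)/n}}{2\cosh\bigl(\beta(2k-1)/n\bigr)} .
```

A short check confirms $q_{-k} = p_k$ for this reconstruction: replacing $k$ by $-k$ in $q_k$ flips both the sign of the exponent and the count of available spins.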
3.6.1 Analysis of $\pi_n(k)$

Fix $\theta \geq 1$ and define the family $A_{n,\theta}$ accordingly. For $k \in A_{n,\theta}$ we can estimate $\pi_n(k)$ by means of Stirling's formula; taking logarithms and using the analytic expansion of the logarithm, we find that for $k \in A_{n,\theta}$ the measure $\pi_n(k)$ is very close to a Gaussian distribution $\mathcal{N}\big(0, \frac12\,\frac{n}{1-\beta}\big)$. This means that (1.17) holds, because there exists a positive constant $c_\beta$ such that, for $n$ sufficiently large,
$$\pi_n\big(A_{n,\theta}^c\big) < \frac{c_\beta}{\theta^2}. \qquad (3.81)$$

Remark 3.10. Note that in this model the Gaussian structure of $\pi_n$ is given by both the energy and the entropy contributions, merging in the expression of the free energy, which can be recognized as the exponent of $e^{-\frac{2(1-\beta)}{n}k^2}$ divided by $\beta$. Hence in this case we will say that the cutoff is free-energy driven.

Proof of cutoff

Now suppose the Glauber chain is started at time 0 with magnetization $\frac n2$, that is, $\lambda_n^0 = \delta_{\sigma,\mathbf 1}$ and $\mu_n^0 = \delta_{i,n/2}$; this choice gives equal probability to equivalent configurations, so (3.76) holds. As usual, define $\zeta_n^\theta$ as the hitting time of $A_{n,\theta}$ and $\zeta_n^1$ as the hitting time of $A_{n,1}$. Lengthy but straightforward calculations (deferred to Appendix A) give, to the leading order in $n$, the behavior of these hitting times.

Remark 3.11. Since for $\beta = 0$ the magnetization chain reduces to the Ehrenfest chain, the following estimates hold as well for the Ehrenfest Urn model presented in Section 3.3.

To prove Corollary 1.3, consider the following coupling $(Z_n^t, W_n^t, Z_n^{+,t}, Z_n^{-,t})$, where each component is a copy of the magnetization chain with suitable initial conditions, for a given fixed $\theta > 1$. Let each of the four chains move according to the same transition probabilities and using the same i.i.d. random update $u \sim U(0,1)$. To illustrate the transition probabilities, consider for instance the chain $Z_n^t$ and suppose that at time $t$ we have $Z_n^t = k$. The restriction of the coupling defined above to its first two components, $Z_n^t$ and $W_n^t$, is the coupling we are going to consider for Corollary 1.3. Thus we define $\gamma_n = \min\{t \geq 0 : Z_n^t = W_n^t\}$ and recall Remark 2.2. By a careful analysis of (3.70)–(3.72) (noticing, in particular, that $r_k \geq \frac12$ and $p_k = q_{-k}$), such a scheme ensures that any two components of the coupling maintain their relative partial order under a single-step transition; indeed, it is impossible for two chains at distance 1 to undergo a one-step transition that would change their relative order. Hence the evolution scheme described above enjoys sandwiching properties. Using (3.81), and by means of the sandwiching properties just stated, we obtain a bound in terms of $\tau_n^0 = \min\{t \geq 0 : Z_n^{+,t} = Z_n^{-,t} = 0\}$. Note that $Z_n^{+,t}$ has a drift towards 0, as does any copy of the magnetization chain. Accordingly, it can be coupled with a lazy uniform random walk $R_n^t$, with $\tilde\tau_n^0 = \min\{t \geq 0 : R_n^t = 0\}$. Now we can use a classical estimate for random walks, and we find that Corollary 1.3 holds with $\delta_n = n$.

Appendices

A. Mean value and variance of $\zeta_n^1$ for the mean-field Ising model

In this appendix we present in full detail the estimates for $\mathbb{E}[\zeta_n^1]$ and $\mathrm{Var}[\zeta_n^1]$ that we used to apply Corollary 1.3 to the magnetization chain in Section 3.6. Since for $\beta = 0$ the magnetization chain reduces to the Ehrenfest chain, the following estimates hold as well for the Ehrenfest Urn model presented in Section 3.3. Standard formulas (see e.g.
[3]) give the mean hitting times, where $\zeta^{k \to k-1}$ is the first time the chain visits $k-1$ after visiting $k$, together with the corresponding expression for the ratio $\frac{\pi_n(j)}{\pi_n(k)}$. Next, note that for any of the values of the triple $(i, j, k)$ involved in the calculations we have $0 \leq i$; here the following two easy lemmas come in handy. By virtue of Lemma A.2 we can bound line (A.5), and we therefore obtain upper bounds in which the error $\varepsilon_1$ tends to 0 exponentially fast in $n$.

Remark A.1. The error $\varepsilon_1$ gives a negligible contribution to $\mathbb{E}[\zeta_n^1]$, being exponentially small; for this reason we will henceforth drop it.

The right-hand side of (A.32) can be rewritten, and from (B.7) we see that for $n$ sufficiently large $\mathbb{E}[\zeta_n^1 - \zeta_n^\theta]$ grows at most as $n^{2\varepsilon}$. To compute $\mathrm{Var}[\zeta_n^1]$ we use the following formulas
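The displays following "Standard formulas (see e.g. [3]) give" were lost in extraction. For the mean, the classical birth-and-death identity presumably being cited is the following; it is quoted here from the general theory rather than recovered from the original, and the variance analogue announced by the truncated sentence above is omitted.

```latex
% Mean hitting times for a birth-and-death chain on \{0,\dots,n\}
% with down-rates q_k and stationary measure \pi_n (standard identity):
\mathbb{E}\bigl[\zeta^{k \to k-1}\bigr]
  = \frac{1}{q_k\,\pi_n(k)} \sum_{j = k}^{n} \pi_n(j),
\qquad
\mathbb{E}_b\bigl[\zeta^{b \to a}\bigr]
  = \sum_{k = a+1}^{b} \mathbb{E}\bigl[\zeta^{k \to k-1}\bigr],
\quad a < b .
```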
A novel mouse model of heatstroke accounting for ambient temperature and relative humidity

Background: Heatstroke is associated with exposure to high ambient temperature (AT) and relative humidity (RH), and with an increased risk of organ damage or death. Previously proposed animal models of heatstroke disregard the impact of RH. Therefore, we aimed to establish and validate an animal model of heatstroke that considers RH. To validate our model, we also examined the effect of hydration and investigated the gene expression of cotransporter proteins in the intestinal membranes after heat exposure.

Methods: Mildly dehydrated adult male C57/BL6J mice were subjected to three AT conditions (37°C, 41°C, or 43°C) at RH > 99% and monitored with WetBulb globe temperature (WBGT) for 1 h. The survival rate, body weight, core body temperature, blood parameters, and histologically confirmed tissue damage were evaluated to establish a mouse heatstroke model. The mice then received no treatment, water, or oral rehydration solution (ORS) before and after heat exposure, and subsequent organ damage was compared using our model. Thereafter, we investigated cotransporter protein gene expression in the intestinal membranes of mice that received no treatment, water, or ORS.

Results: The survival rates of mice exposed to ATs of 37°C, 41°C, and 43°C were 100%, 83.3%, and 0%, respectively. From this result, we excluded AT43. Mice in the AT41 group appeared more dehydrated than those in the AT37 group. WBGT in the AT41 group was > 44°C; core body temperature in this group reached 41.3 ± 0.08°C during heat exposure and decreased to 34.0 ± 0.18°C, returning to baseline after 8 h, which showed a biphasic thermal dysregulation response. The AT41 group presented with greater hepatic, renal, and musculoskeletal damage than did the other groups. The impact of ORS on recovery was greater than that of water or no treatment. The administration of ORS with heat exposure increased cotransporter gene expression in the intestines and reduced heatstroke-related damage.

Conclusions: We developed a novel mouse heatstroke model that considered AT and RH. We found that ORS administration improved inadequate circulation and reduced tissue injury by increasing cotransporter gene expression in the intestines.

Background

Heatstroke is caused by exposure to high AT and can cause organ damage or death [1,2]. The incidence of heatstroke is rising with global warming. The International Labour Organization has reported that AT is estimated to increase by 1.5-3.0°C by 2100, which would result in increased frequency and severity of heatstroke worldwide; furthermore, heatstroke-associated economic losses are expected to exceed 2.4 billion USD by 2030 [3,4]. Heat exposure is associated with damage to organs including the heart, lung, liver, kidney, gastrointestinal tract, muscle, and central nervous system. Moreover, it can induce systemic inflammatory response syndrome, which may lead to multiple organ dysfunction or death [1,2,5]. The human core body temperature (cT) is strictly maintained at 37.0 ± 1.0°C. During exposure to high AT, the human body triggers heat-divergence processes, including diaphoresis, increased respiratory rate, and vasodilation, aiming to maintain the baseline cT. Among them, evaporation due to diaphoresis plays the most important role in the control of cT. However, this evaporation mechanism is impaired when the relative humidity (RH) is > 75% [6].
Therefore, people feel hotter at high AT and RH than they would in dry conditions. The WetBulb globe temperature (WBGT) is an environmental index that accounts for AT, RH, and the level of heat radiated from the surroundings when assessing heatstroke risk [6]. The WBGT is used for decision-making and guideline development in sports, military training, etc. [7,8]. Previously proposed animal models of heatstroke [9-13] disregarded the impact of RH and created desert-like conditions, despite the general consensus that the WBGT can be used as a benchmark index. Therefore, we established a novel animal heatstroke model that considers AT and RH changes using conscious and unrestrained mice. It is well known that heatstroke-induced severe fluid loss can cause splanchnic hypoperfusion and organ injury [14,15]. Therefore, hydration is strongly recommended to prevent and treat heatstroke [16]. Drinking oral rehydration solution (ORS) is effective at treating fluid loss; the absorption of ORS is far superior to that of water, as it is absorbed through the sodium/glucose cotransporter in the intestines [17]. Administration of ORS can relieve the severity of heatstroke symptoms in human patients [18]. Therefore, we examined the effect of hydration on heatstroke prevention and treatment to validate our model. Additionally, we investigated the gene expression of transporters in the intestinal membranes after heat exposure.

Methods

Male C57/BL6J mice (aged 20-24 weeks) were used in this study. All animals were purchased from SLC Japan, Inc. (Shizuoka, Japan). The mice were allowed free access to food and water and were maintained on a 12-h light/dark cycle at room temperature (24 ± 2°C) with constant humidity (40 ± 15%).

Heat exposure protocol

A semi-enclosed heatstroke chamber (200 × 340 × 300 mm) made of acrylic was created by vertically stacking animal cages in a greenhouse-like construction. An ultrasonic humidifier (USB-68, Sanwa, Okayama, Japan) and a digital thermo-hygrometer (AD-5696, CA&D Company, Tokyo, Japan) were used for humidification and for monitoring of the AT, RH, and WBGT (Fig. 1a). The heatstroke chamber was placed in an incubator (Bio-chamber, BCP-120F, TITEC, Aichi, Japan), which was pre-heated to the desired experimental temperature for ≥ 3 h. The humidifier was started 3 h before heat exposure to create a hot and humid environment. Meanwhile, the mice were subjected to 3 h of water restriction; the mildly dehydrated mice were then placed in the heatstroke chamber, exposed to heat for 60 min, and returned to the animal cage maintained at room temperature. The mice were euthanized 7-96 h after heat exposure. Nine mice were subjected to heat exposure in each experiment (Fig. 1b).

Ambient temperature and heatstroke severity

We evaluated the extent of pathophysiological changes observed in the mice after exposure to different AT conditions. The mice were subjected to heat exposure at an AT of 37°C (AT37 group), 41°C (AT41 group), or 43°C (AT43 group) and constant RH > 99% for 1 h. Survival rates over 96 h were observed and compared between the three groups (n = 18 per group). Body weight (BW) was measured three times as an indicator of body fluid volume: before water restriction (pre), 3 h after water restriction and before heat exposure (BW 0h), and immediately after heat exposure (BW 1h; Fig. 1b). The weight loss rate (%) was calculated as 100 − BW(0h or 1h)/BW(pre) × 100. The animals that did not survive were excluded from this evaluation.
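For concreteness, the two computations referenced above can be written out. The weight-loss formula is the one quoted in the Methods; the WBGT combination rule is not stated in the paper, so the indoor (no solar load) convention used below, WBGT = 0.7·Twb + 0.3·Tg, is a standard assumption on our part (the digital thermo-hygrometer presumably applies an equivalent rule internally).

```python
def weight_loss_rate(bw_t: float, bw_pre: float) -> float:
    """Weight loss (%) relative to pre-restriction body weight:
    100 - BW(t)/BW(pre) * 100, as in the Methods."""
    return 100.0 - (bw_t / bw_pre) * 100.0

def wbgt_indoor(t_wet_bulb: float, t_globe: float) -> float:
    """Indoor (no solar load) WetBulb globe temperature in deg C.
    Standard convention, assumed here: 0.7*Twb + 0.3*Tg."""
    return 0.7 * t_wet_bulb + 0.3 * t_globe

# Illustrative values: ~3% loss after water restriction, and a WBGT
# exceeding the dry-bulb AT because RH > 99% keeps Twb close to AT.
print(f"{weight_loss_rate(bw_t=23.3, bw_pre=24.0):.1f} %")
print(f"{wbgt_indoor(t_wet_bulb=41.0, t_globe=44.0):.1f} C")
```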
Monitoring core body temperature

We used another set of mice to determine the cT during heat exposure. They were implanted with a small thermometer (Thermochrone type G, KN Laboratories, Osaka, Japan) in their abdominal cavity as follows: (1) the mice were anesthetized using 4% sevoflurane in N2O/O2 (70/30%) inhalation; (2) an incision of approximately 1.0 cm was made on the abdominal midline under aseptic conditions; and (3) the thermometer was implanted between the abdominal aorta and intestinal membranes, and the incision was closed using sutures. The animals were maintained for 3 weeks for recovery and thereafter exposed to an AT of either 37°C (n = 18) or 41°C (n = 27). When the animals were euthanized, the thermometer was removed, and the cT records were transferred. The cT was recorded every 5 min during each experimental period after implantation.

Fig. 1 Experimental protocols and examination of conditions for our mouse heatstroke model. a Heatstroke chamber: The heatstroke chamber was made using acrylic resin in a construction similar to a greenhouse. An ultrasonic humidifier was placed in the corner, and a thermo-hygrometer was used to monitor the environmental conditions. b Protocol for heatstroke: The mice (n = 9) were exposed to heat (ambient temperature 37°C, 41°C, or 43°C) and relative humidity (> 99%) for 1 h and then returned to the chamber set to room temperature. They were sacrificed 7-96 h after heat exposure. c Survival rate (%) under three different ambient temperature (37°C, 41°C, or 43°C) conditions observed during 96 h: All mice died within 3 h of exposure to the ambient temperature of 43°C. The survival rate at the ambient temperature of 41°C was 15/18 (83.3%). d Rate of body weight loss (%) at the ambient temperatures of 37°C and 41°C: 3 h of water restriction induced approximately a 3% body weight loss at the ambient temperatures of 37°C and 41°C. Body weight significantly decreased after 1 h of exposure to the ambient temperature of 41°C, compared with that observed at the ambient temperature of 37°C (t test, *p < 0.05). e WetBulb globe temperature (WBGT) and relative humidity at ambient temperatures between 37 and 41°C: WBGT always shows higher values than the ambient temperature due to high humidity. RH was stabilized at more than 99.0% before and during the experiments. f Changes in core body temperature at the ambient temperatures of 37°C and 41°C: The core body temperature of the mice exposed to the ambient temperature of 41°C increased markedly; subsequently, it decreased to 34.0 ± 0.18°C (195 min after heat exposure). Then, the core body temperature gradually returned to physiological levels, showing a biphasic thermal dysregulation response. There were significant differences in the core body temperatures measured during 1.0-7.4 h between the groups (t test, *p < 0.05).

Tissue samples from organs, such as the liver, kidneys, upper jejunum, and lungs, were extracted, prepared as paraffin-embedded sections of 5-μm thickness, and evaluated for morphological changes using Hematoxylin-Eosin (HE) staining. One of the authors (KH; pathologist), who was blinded to the experimental group assignment, evaluated the specimens.

Impact of oral rehydration solution on heatstroke

The mice were given ORS (OS-1®, Otsuka Pharmaceutical, Tokushima, Japan), tap water (water), or no treatment (NT) to validate our heatstroke model. Each experiment was performed with nine animals (3 mice/group) and repeated six times, with a total of 18 mice/group.
We also prepared three other groups (5 mice/group: NT, water, and ORS) that were not exposed to heat, to evaluate the hydration effect. The mice were orally administered either ORS (30 mL/kg) or water (30 mL/kg) before and immediately after heat exposure. The NT group received no hydration during the experiment and was used as a control group. The animals were weighed and euthanized 7 h after heat exposure to allow enough time for fluid absorption [19]. Blood samples were collected; thereafter, tissue samples of six mice/group were perfused with 10% neutralized formalin for histological analysis. The upper jejuna were collected, snap-frozen in liquid nitrogen, and stored at −80°C for polymerase chain reaction (PCR) analysis.

mRNA isolation and cDNA production

Isolation of total RNA and synthesis of cDNA were performed following the manufacturer's instructions with minor modifications [20]. In brief, total RNA from the upper jejunum was isolated using TRIZOL Reagent (Invitrogen, Carlsbad, CA, USA) and dissolved in RNase-free water. The purity and concentration of the extracted RNA were determined spectrophotometrically (NanoDrop, Wilmington, DE, USA). The cDNA was synthesized using 2 μg of total RNA with a High-Capacity RNA-to-cDNA kit (Applied Biosystems, Foster City, CA, USA).

Polymerase chain reaction

We determined the gene expression of sodium/glucose cotransporter 1 (SGLT1, encoded by Slc5a1), facilitated glucose transporter (GLUT2, encoded by Slc2a2), and intestinal fatty acid binding protein-2 (I-FABP, encoded by Fabp2), which is known to serve as an intestinal injury marker. PCR analyses were performed using TaKaRa Ex Taq (TaKaRa, Shiga, Japan). All primers and gene information for Slc5a1, Slc2a2, Fabp2, and Rplp1 (housekeeping gene) are presented in Table 1. The reaction mixture contained a suitable volume of the cDNA mixture, 0.25 μL each of forward and reverse primers (50 nmol/mL), 2.0 μL of dNTP mixture (0.25 mM each), 0.1 μL of TaKaRa Ex Taq (5 units/μL), and 2.0 μL of 10× Ex Taq Buffer in a total volume of 20 μL. Thermal cycling parameters were set as follows: 95°C for 1 min for initial denaturation, followed by a cycling regimen of 40 cycles at 95°C, 60°C, and 72°C for 45 s, 30 s, and 45 s, respectively. At the end of the final cycle, an additional 7-min extension step was included at 72°C. Quantitative PCR (qPCR) analyses were performed with SYBR Premix Ex Taq II reagent (TaKaRa), using the Applied Biosystems 7900HT Fast Real-Time PCR System (Applied Biosystems, Lincoln, CA, USA). The relative gene expression levels were calculated using the absolute quantification method against Rplp1 (a housekeeping gene).

Statistical analysis

Data are reported as the mean ± standard error of the mean. The Student's t test was used for comparisons between two groups; one-way analysis of variance (ANOVA) and the Tukey-Kramer test were used for multiple comparisons. P values < 0.05 were considered indicative of statistical significance.

Results

Survival rate and weight at different AT

Fourteen of the 18 mice in the AT43 group died during heat exposure; the remaining mice were unconscious and died within the following 3 h. Therefore, we excluded AT43 from the subsequent assessments. All mice in the AT37 group survived for 96 h (18/18, 100%). In the AT41 group, the survival rate at 96 h was 15/18 (83.3%; Fig. 1c). As for weight, 3 h of water deprivation induced around 3% BW loss, which indicated a mildly dehydrated state.
Additionally, weight loss was significantly higher in the AT41 group than in the AT37 group after 1 h of heat exposure (t test, p < 0.05; Fig. 1d).

Alteration of AT and WBGT during heat exposure

The thermal conditions of the AT37 and AT41 groups increased progressively and peaked at 35.9 ± 0.16°C and 41.0 ± 0.11°C, respectively, 60 min after heat exposure. WBGT always showed higher values than AT; WBGT finally increased to 38.9 ± 0.14°C (AT37) and 44.0 ± 0.15°C (AT41) at 60 min, respectively. RH was stabilized at more than 99.0% before and during the experiments (Fig. 1e).

Impact of exposure to heat on core body temperature

The maximum cT of the AT37 group increased to 38.0 ± 0.09°C during heat exposure and returned to its physiological level. Hypothermia after heat exposure was not seen in the AT37 group (Fig. 1f). However, the maximum cT of the AT41 group increased to 41.3 ± 0.08°C during heat exposure and decreased to 34.0 ± 0.18°C at 195 min thereafter. The AT41 group's cT then increased gradually and took approximately 8 h to return to physiological baseline levels after heat exposure, showing a biphasic thermal dysregulation response. There were significant between-group differences in the average cT recorded during 1.0-7.4 h (t tests, p < 0.05). Three animals in the AT41 group died and were excluded from further analysis.

Impact of heat exposure on blood count and serum biochemical parameters

There was no between-group difference in the total red blood cell (RBC) or white blood cell (WBC) counts. Serum analysis revealed a significant increase in the levels of Na+ and Cl− in the AT41 group. Moreover, serum biochemical parameters of the AT41 group revealed changes in the hepatic, renal, and musculoskeletal damage markers (Table 2).

Histopathological findings

In contrast to liver specimens from the AT37 group, those from the AT41 group presented with vacuolar hepatocytes observed mainly around the hepatic central vein (Fig. 2a). The kidney specimens obtained from the AT41 group presented with mild swelling and degeneration of the tubular epithelial cells and urinary casts (Fig. 2b). The intestinal structures of the AT41 group were severely damaged: the mucosal epithelial cells were eroded, and the intestinal villi showed interstitial edema (Fig. 2c). No between-group differences were observed in the lung specimens (Fig. 2d).

Oral rehydration solution improved rehydration and reduced tissue damage after heat exposure

BW in the NT group was significantly reduced immediately and 7 h after heat exposure compared with that in the water and ORS groups (Tukey-Kramer tests, p < 0.05; Fig. 3a). CBC and serum biochemical parameters in the NT group showed electrolyte abnormalities, hemoconcentration, and significantly higher hepatic and renal damage markers than those observed in the other groups (Table 3). Compared with those of the water group, the levels of serum hepatic and renal damage markers in the ORS group were significantly lower (Tukey-Kramer tests, p < 0.05). The hepatic tissue specimens acquired from the NT group showed hepatic vacuolar degeneration, appearing mainly around the hepatic vein. The hepatic vacuolation improved but remained present in the water group, whereas very few such formations were observed in the ORS group (Fig. 3b). The intestinal tissue specimens acquired from the NT group presented with intestinal epithelial erosions and swelling of the intestinal villi
(Fig. 3c), while the intestinal tissue specimens of the water group showed edema of the lamina propria of the mucous membrane but normal intestinal villi. The intestinal tissues in the ORS group had hardly any damage. The renal tissue specimens from the NT group showed signs of degeneration of the tubular epithelial cells and urinary casts; however, no damage to these structures was observed in the water or ORS group (Fig. 3d). No changes to the pulmonary tissue were detected in any of the groups (data not shown).

ORS administration with heat exposure significantly increased the gene expression of transporters in intestinal membranes

Fabp2 expression levels increased in the NT group 7 h post-heat exposure but were suppressed in both the water and ORS groups, without any inter-group differences (Tukey-Kramer tests, p < 0.05; Fig. 4a). Slc5a1 expression levels increased significantly in all experimental groups after heat exposure; concurrently, Slc5a1 expression in the ORS group was significantly higher than that in the NT and water groups (Fig. 4b). Finally, Slc2a2 expression increased significantly more in the ORS group than in the NT and water groups (Fig. 4c). No significant changes were observed in the expression levels of the three genes without heat exposure.

Fig. 2 Histopathological findings of organ specimens collected after heat exposure. a Vacuolar hepatocytes (arrow) appeared around the hepatic central vein in the specimens of the animals exposed to the ambient temperature of 41°C. P, portal vein; V, central vein. b Kidney specimens of the group exposed to the ambient temperature of 41°C showed mild swelling and degeneration of tubular epithelial cells (arrow) and urinary casts (asterisk). c The intestinal structures of the group exposed to the ambient temperature of 41°C were severely destroyed. The mucosal epithelial cells were eroded (arrow), and the intestinal villi showed interstitial edema (arrowhead). d No significant between-group differences were observed in the lung specimens of the group exposed to the ambient temperature of 37°C and of that exposed to the ambient temperature of 41°C.

Discussion

Heatstroke mainly occurs in hot areas, although hot areas vary from very-low-humidity deserts to hot and humid tropical regions, as the world's climate is highly diverse. AT as well as RH plays an important role in the onset of heatstroke; for example, heatstroke is common even during the damp rainy days of early summer [21]. Therefore, we developed a mouse heatstroke model that mimicked temperate to subtropical regional weather conditions, using WBGT as an indicator. In the monitoring of thermal conditions, WBGT always shows a higher value than the actual AT under hot and humid conditions. Heat-related deaths among outdoor workers and older adults have been reported at WBGTs above 33°C [22]. In our model, the peak WBGT was 44.0 ± 0.15°C during heat exposure; the thermal conditions in our study were thus more severe than those that induce heatstroke among humans. In a previous study, Shen [9] reported a mouse heatstroke model with 42.4°C AT and 50-55% RH for 1 h. However, in our study, many mice that were exposed to an AT of 43°C and RH > 99% for 1 h died, and those that remained were in critical condition. The mortality rate was too high to consider this a viable experiment; therefore, we excluded the AT43 condition. It is known that the mechanisms of cT regulation in humans and mice are different.
Mice have fewer sweat glands than humans and are unable to regulate their body temperature through evaporation by perspiration [23]. Instead, they conduct heat and regulate body temperature through heat-vaporizing saliva and exhalation [24]. In our model, heat evaporation through vaporizing saliva and exhalation might not have worked effectively under the hot and humid conditions, which induced critical outcomes. In our model, mice were subjected to a mildly dehydrated state by restricting water for 3 h prior to heat exposure. Dehydration is one of the important risk factors that aggravate heatstroke, as it makes the subject prone to hypoperfusion [25]. In our model, BW decreased by approximately 7-8% after 1 h of heat exposure, indicating moderate to severe loss of body fluid volume. Therefore, the 3-h water restriction might be correlated with the higher mortality in the AT43 group compared with that in the other groups. Consequently, we reduced the thermal conditions and performed heat exposure at 41°C. Several criteria for human heatstroke have been reported [2,26]. Heatstroke in humans is defined as a cT > 40°C and the presence of central nervous system (convulsive seizures), hepatic/renal, and coagulation dysfunction after exposure to high environmental temperatures. In previously reported animal heatstroke models with conscious or unconscious subjects, the maximum cT during heat exposure was 40-43°C [10-12]. Moreover, Leon [12] has reported that hypothermia developing after heat exposure is part of a biphasic thermoregulatory response and that the depth and duration of hypothermia are correlated with the severity of heatstroke. This biphasic thermoregulatory response to heatstroke is also observed in humans [27]. Therefore, excessive cooling of heatstroke patients is not recommended, as it sometimes induces hypothermia [28]. In our study, the average cT of the AT41 group during heat exposure reached a maximum of 41.3°C and then decreased to a minimum of 34.0°C. Therefore, a biphasic thermoregulatory response was observed, and the maximum cT achieved was comparable to that reported in the previous literature. Contrastingly, such a response was not seen in the AT37 group. Although a biphasic thermoregulatory response is theorized to occur due to hypothalamic impairment [29,30], we did not determine the cause of thermal dysregulation in mice in this experiment.

Fig. 3 Effect of oral rehydration solution intake on body weight and histopathological findings of organ tissues. a Rate of weight loss (NT, water, ORS): The body weight of animals in the NT group was significantly reduced immediately and 6 h after exposure to heat (*p < 0.05). The use of water and oral rehydration solution had a similar impact on the animals. b Hepatic vacuolation improved but remained present in the water group. Concurrently, there were very few formations in the oral rehydration solution group. c Intestinal tissue specimens from NT animals were marked with intestinal epithelial erosions (arrow) and swelling of the intestinal villi. Intestinal tissue specimens in the ORS group showed only minor damage. d Renal tissue specimens in the NT group showed degeneration of the tubular epithelial cells (arrow) and urinary casts (asterisk). However, no damage was observed in the specimens acquired from the water and oral rehydration solution groups. NT, no treatment; ORS, oral rehydration solution; V, central vein
Heat stress and inadequate circulation during heat exposure induce tissue damage, including hepatic, renal, and intestinal injuries, in humans and animals [2,31,32]. Our results also showed an increase in tissue damage markers in the AT41 group, suggesting the occurrence of rhabdomyolysis and hepatic and renal damage. Subsequent histological findings of the hepatic and renal tissues extracted from the AT41 group also showed hepatic and renal damage after heat exposure. In particular, vacuolar hepatocytes were present in abundance around the hepatic central vein, farthest from the hepatic circulation. This supports the hypothesis that heat exposure may reduce blood circulation, resulting in tissue damage. The intestinal specimens from the AT41 group showed mucosal epithelial cell erosion and interstitial edema in the intestinal villi. Hall [15] has reported that splanchnic hypoperfusion may result in ischemia of the gastrointestinal organs, followed by a reperfusion injury during the sudden splanchnic vasodilatation that precedes the onset of hemodynamic collapse and hyperthermia. Splanchnic hypoperfusion might correlate with the intestinal injury observed in our model. Further, we examined the validity of our mouse heatstroke model by comparing different types of hydration (water/ORS). In our results, hydration improved hemoconcentration with no variation by intervention type (water/ORS). However, the serum marker levels of hepatic and renal damage were significantly better in the ORS group than in the water group, suggesting that ORS might be more effective than water at suppressing heatstroke damage. Moreover, histopathological observations in the ORS group showed only minor tissue injury. A possible explanation is that ORS contains glucose and electrolytes, which improve absorption from the digestive tract through sodium/glucose cotransporters and improve tissue circulation [33,34]. These results indicate that our model resembled the pathophysiology of heatstroke experienced by humans. Furthermore, we focused on cotransporter gene expression in the intestinal membranes to explore the effect of ORS after heatstroke. SGLT1 and GLUT2 are expressed in the apical and basolateral mucosal epithelial membranes of the small intestine, respectively. They co-transport glucose from the intestinal lumen into the capillaries in a process driven by the Na+ gradient created by Na+/K+ ATPase [35,36]. Moreover, we investigated the gene expression of I-FABP as an intestinal ischemia marker after heatstroke. Plasma and urinary levels of I-FABP are reported to increase after intestinal ischemia [37]. Additionally, plasma I-FABP levels are increased in heatstroke patients [38]. In the present study, the expression levels of all three genes were not increased by hydration alone; heat-exposed mice tended to express these genes more than mice that were not exposed to heat. Fabp2 expression was upregulated in the NT group, suggesting an increase in the extent of intestinal ischemia post-heatstroke. Meanwhile, the expression levels of both SGLT1 and GLUT2 were significantly increased in the ORS group, suggesting that hydration with ORS increases the water and electrolyte absorption rates and may lead to improvement in hemodynamics and reversal of tissue damage. Further research is needed to explore the pathophysiology of heatstroke using this model. Our study has some limitations. Firstly, mice have fewer sweat glands than humans and are unable to regulate their cT through evaporation by perspiration.
There are some reports of human heatstroke with a cT of 43°C in which the patients recovered completely [39]. The regulation of cT is different in humans and mice. Secondly, in the clinical setting we usually give cold intravenous fluid and sometimes use continuous renal replacement therapy to control cT and remove myoglobin; in human heatstroke, cT is reduced much faster, and without inducing hypothermia. Lastly, we did not consider consciousness disturbance and coagulation abnormality in our model. Next, we will explore central nervous system injury due to heatstroke in another experiment using our model.

Fig. 4 Expression of Slc5a1, Slc2a2, and Fabp2 genes. a The level of Fabp2 expression drastically increased in the non-beverage group 6 h after heat exposure. b The level of Slc5a1 expression in the oral rehydration solution group was twice as high as that observed in the non-beverage and water groups. c The level of Slc2a2 expression in the oral rehydration solution group increased after heat exposure (*p < 0.05). Sham, normal mice; water (−), water intake without heat exposure; ORS (−), oral rehydration solution intake without heat exposure; NB (+), heat exposure without any beverage; water (+), water intake with heat exposure; ORS (+), oral rehydration solution intake with heat exposure.

Conclusion

In addition to AT, RH plays an important role in the onset of heatstroke. We developed a novel mouse heatstroke model that considered AT and RH, using WBGT as an indicator. We found that ORS administration with heat exposure increased transporter gene expression (SGLT1 and GLUT2) in the intestinal membranes and reduced heatstroke-related damage. Adequate hydration with ORS before and after heat exposure may improve the symptoms of heatstroke patients.
Mechanics of fibroblast locomotion: quantitative analysis of forces and motions at the leading lamellas of fibroblasts.

Shapes, motions, and forces developed in lamellipodia and ruffles at the leading edges of primary chick embryo heart fibroblasts were characterized by differential interference contrast microscopy and digital video enhancement techniques. The initial extension of the cell edge to form a thin, planar lamellipodium parallel to the substrate surface was analyzed in two dimensions with temporal and spatial resolution of 3 s and 0.2 μm, respectively. An extension begins and ends with brief, rapid acceleration and deceleration separated by a long period of nearly constant velocity in the range of 4-7 μm/min. Extensions and retractions were initiated randomly over time. As demonstrated by optical sectioning microscopy, the extended lamellipodia formed ruffles by sharply bending upward at hinge points 2-4 μm behind their tips. Surprisingly, ruffles continued to grow in length at the same average rate after bending upward. They maintained a straight shape in vertical cross section, suggesting that the ruffles were mechanically stiff. The forces required to bend ruffles of these cells and of BC3H1 cells were measured by pushing a thin quartz fishpole probe against the tip of a ruffle 7-10 μm from its base, either toward or away from the center of the cell. Force was determined by measuring the bending of the probe, monitored by video microscopy. Typically the probe forced the ruffle to swing rigidly in an arc about an apparent hinge at its base, and ruffles rapidly, and almost completely, recovered their shape when the probe was removed. Hence, ruffles appeared to be relatively stiff and to resist bending with forces more elastic than viscous, unlike the cell body. Ruffles on both types of cells resisted bending with forces of 15-30 μdyn/μm of displacement at their tips when pushed toward or away from the cell center. The significance of the observations for mechanisms of cell locomotion is discussed.

Locomotion of fibroblasts and fibroblast-like cells over a surface seems to occur as a cyclic process with two major phases (Abercrombie et al., 1970a,b; Trinkaus, 1984). First the cell extends a thin lamellipodium from its leading edge that contacts the substrate. Then portions of the cell behind the lamellipodium are drawn forward.
During this cycle on some substrates, the extended lamellipodium is sometimes drawn backward to form a "ruffle." Although much information has been obtained about locomotion of diverse kinds of cells, the mechanisms by which the processes are coordinated and force is developed are still unknown (Trinkaus, 1984). Some investigators have suggested that cytoskeletal functions such as polymerization of filaments and myosin-dependent contractility provide the driving forces for cellular locomotion (Abercrombie et al., 1977; Small, 1982; Rinnerthaler et al., 1988; Smith, 1988; Mitchison and Kirschner, 1988; Bray and White, 1988). Others have proposed that forward motion results from polarized deposition of cell surface membrane at the leading edge of the cell (Abercrombie et al., 1970c; Bretscher, 1984; Kupfer and Singer, 1988). (S. Felder's present address is the Department of Molecular Biology, Rorer Biotechnology, Inc., King of Prussia, PA.) The objectives of the work presented here are to describe fine details of leading edge motions and to measure forces generated in this region. This information sheds light on the mechanics of these processes and how they may be involved in cell locomotion. The general characteristics of the extension and retraction processes have previously been identified for fibroblasts (Abercrombie et al., 1970a; Chen, 1979, 1981) and for epithelial cells (Dipasquale, 1975). Motions of the leading edges of cells were studied by following the positions of a small number of discrete points along the active edges in time lapse images of cells taken at 0.5- or 1-min intervals. These studies, however, viewed only a two-dimensional projection of three-dimensional processes that often involve the folding and elevation from the substrate of motile lamellas to form ruffles. Observations of motile cells from a lateral view have yielded insights into the movement of ruffles in the vertical plane (Ingram, 1969), but were limited in spatial resolution, and were not quantitative. Alternatively, the three-dimensional shapes of ruffled extensions have been viewed by electron microscopy (Abercrombie et al., 1972). These studies allow excellent visualization of the structures, but not analysis of their movements. In addition, the observed shapes, especially of thin ruffles, may be perturbed by fixation or freezing. We have used high resolution differential interference contrast video microscopy and optical sectioning to characterize the two- and three-dimensional motions of the leading edges. These techniques have allowed us to surpass the temporal and spatial resolution of the earlier observations of live cells. We have found that extension of the lamellipodium occurs smoothly, directly, and with constant velocity. As the lamellipodium continues to grow in length, it lifts upward to form a ruffle. In swinging up from the substratum the ruffle bends about a localized hinge point 4-6 µm behind its extending tip. Interestingly, the ruffle continues to extend or grow in length at nearly the same rate after it has bent upwards. The elevated, extending portion of the ruffle distal to the hinge is mainly straight in cross section, suggesting that its shape is rigidly maintained by its internal structure. We have found no evidence that extension and ruffling retraction occur with regular periodicity in time as previously suggested (Abercrombie et al., 1970a). These results suggest that ruffles are relatively stiff structures.
To test this suggestion, we have used fine quartz fibers to measure the stiffness of ruffles. The sensitivity of the fibers allowed measurements of forces in the range of 3-100 µdyn. This approach has previously been used in different applications (Howard and Hudspeth, 1987; Kishino and Yanagida, 1988). We report that ruffles are indeed very stiff considering their thinness, and are largely elastic. Upon being pushed, they remain straight and bend at a point near their bases, resisting deformation with forces of 15-30 µdyn/µm displacement of their tips. Our results provide a context for and place constraints on mechanistic models of fibroblast locomotion. Cell and Tissue Cultures Primary explant cultures of chick embryo heart fibroblasts (CHFs) were used for measurements of ruffle deformability and for observations of motions, and were prepared as follows (Izzard and Lochner, 1976). Hearts were removed from 8- to 10-day-old chick embryos and rinsed with TBS. The hearts were cut into small (~1-mm) pieces with microdissection scissors; the pieces were rinsed in TBS and allowed to settle; and the TBS was removed. Rinsed pieces were resuspended in primary growth medium (Hunter, 1979) consisting of DME with 8% FCS, 2% chick serum, and 10% tryptose phosphate broth, and were placed on 22 × 22-mm coverslips in 35-mm tissue culture dishes. CHF tissue chunks in primary growth medium were cultured overnight in a 5% CO2 incubator at 37°C, and fibroblasts that had migrated out of the tissue chunks onto the coverslip to form a ring of cells around the chunks after 15 h were used. BC3H1 cells are a smooth muscle-like cell line (Schubert and Harris, 1974). These were grown in BC3H1 medium, which retained the cells in an undifferentiated state and consisted of high-glucose DME supplemented with 20% FCS, in a 5% CO2, 37°C incubator. Logarithmically growing cells halfway to confluency were used for measurements of ruffle deformability. To characterize leading lamellar motions, a Teflon O-ring (Millipore Continental Water Systems, Bedford, MA) was coated with vaseline and placed onto a microscope slide, and the cavity thus created was filled with culture medium in a 5% CO2 environment. A coverslip with attached cells was placed face down onto this Teflon ring. The resulting sealed chamber had a volume of ~50 µl. Cells in this chamber were observed for up to 1 h, and continued to ruffle and migrate actively for at least 2 h. (Abbreviation used in this paper: CHF, chick embryo heart fibroblasts.) To measure ruffle bending forces, coverslips with attached cells were removed from culture medium and sealed with vaseline to the bottom (outside) of 35-mm plastic tissue culture dishes in which 15-mm-diam holes had been drilled, so as to leave the cells of the center region of the coverslips exposed and accessible to the inside of the dish. The dishes were filled with the appropriate culture medium for the two cell types, supplemented with 20 mM Hepes to maintain the pH at 7.4. Digital Video Microscopy Cells were viewed with a Zeiss IM35 inverted microscope and a 63× planapochromat oil immersion objective (NA of 1.4). For motion analysis, Nomarski differential interference optics (oil immersion condenser with NA of 1.4) were used; for force measurements, cells and fishpole probe were viewed with phase three illumination from a long (9-mm) working distance condenser with NA of 0.6.
Images of the cells were collected with a vidicon video camera with aspect ratio adjusted to yield equal magnification in two dimensions and digitized by a Grinnell GMR 274 video frame buffer. A 6.3× eyepiece was placed between the microscope and the video camera to increase magnification. Under these conditions, the total magnification was such that the full video screen width displayed 90 µm. Four or eight video frames were averaged within the video frame buffer to decrease video noise, and up to 1,000 averaged frames were stored by use of a VAX 11/780 computer onto an RMS0 disk at 3.0- or 6.0-s intervals (motion experiments) or at 1.0-s intervals (force experiments). Maximally one full frame of video data (0.25 megabytes) could be stored per second. The microscope and video camera were placed on a vibration isolation table (Kinetic Systems, Inc., Boston, MA). The microscope stage, condenser, and objectives were enclosed in a Plexiglass box with a total volume of ~1.5 cubic feet. A slow stream of warm air was blown into the box to maintain an ambient temperature of 36.5°C. For the force measurements, the video signal was also recorded directly from the camera onto a time-lapse video tape recorder (Panasonic NV8040) to assess the effect of the deformations on the ability of the cells to continue ruffling. Two-dimensional Edge Analysis The edge of the cell (i.e., the border of the two-dimensional projection of the cell onto the focal plane of the microscope) was identified by hand for the first image frame of the time sequence. The cursor supplied in the video frame buffer system was used to designate points along the cell's edge. The edge of the cell for each successive frame in the sequence of up to 500 frames was then identified automatically by a computer algorithm developed specifically for this analysis involving pattern recognition and image registration. The algorithm has been presented in detail (Felder, 1984). Displacement Data Detailed motions of the cell edge were characterized using radial lines intersecting the edge as local coordinate axes. For round cells, the radii originated from the average center of mass of the projected cell outline. For cells with elongate shapes, radial lines were described for each region of the cell at which the edge was actively extending and retracting. These radial lines originated from the center of curvature of an arc that approximated the average shape of the region's active edge. The points of intersection of the edge at the time with each radial line were identified by computer program, and the distance outward along the radial line from the local center to the point of intersection was calculated. The average distance along each radial line for all frames analyzed was subtracted from each measured distance. The resultant displacement data set provided the distance from the average edge position outward along radial lines as a discrete function of time (or frame number) and radial position. The duration of the data records ranged between 600 and 3,000 s (200 to 500 data points), with a sampling rate of 1 frame/3.0 or 6.0 s. Optical Sectioning Microscopy Optical sectioning was performed manually by moving a rod affixed to the fine focus control along a notched surface (for details, see Felder, 1984). The uncertainty of the amount of elevation for each position was estimated to be 0.15 µm. A video image of the cell (usually 150 × 150 pixels in size) was collected at each focal plane at 0.5- or 1.0-s intervals.
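The radial-line analysis above reduces each image sequence to a displacement record. A minimal Python sketch of that reduction is given below; the 64-ray angular sampling, the nearest-point approximation to the ray/edge intersection, and the input layout are our assumptions, not details specified in the paper.

import numpy as np

def radial_displacement_record(edges, center, n_rays=64):
    # edges: list of (N_i, 2) arrays of edge points (x, y), one per video frame
    #        (hypothetical output format of the edge-tracking algorithm above).
    # center: (x, y) origin of the radial lines (center of mass or of curvature).
    # Returns an (n_frames, n_rays) array of edge distance along each radial
    # line with the per-ray time average subtracted, as described in the text.
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    records = np.empty((len(edges), n_rays))
    for t, edge in enumerate(edges):
        rel = np.asarray(edge, dtype=float) - np.asarray(center, dtype=float)
        r = np.hypot(rel[:, 0], rel[:, 1])
        theta = np.arctan2(rel[:, 1], rel[:, 0]) % (2.0 * np.pi)
        for j, a in enumerate(angles):
            # nearest edge point in angle approximates the intersection with ray a
            diff = (theta - a + np.pi) % (2.0 * np.pi) - np.pi
            records[t, j] = r[np.argmin(np.abs(diff))]
    return records - records.mean(axis=0)  # subtract average edge position per ray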
The first image recorded the focal plane at the base of the cell. Then the objective was quickly moved by a unit step size of either 0.5 or 1.0 µm, and the next image was recorded at this new focal plane. Successively higher (more dorsal) focal planes were recorded until the highest point of the cell was in focus. Then the focus was rapidly returned to the level of the base of the cell and the process was repeated. Typically 8 or 10 focal planes were recorded per cycle, and two time intervals were used to refocus to the base of the cell at the end of each cycle. The images from a full cycle together make up one "time frame" of optical sectioning data. Between 10 and 35 time frames were collected for each cell. Cross-sectional Shapes and Lengths of Ruffles To analyze the shape of the ruffles in cross section, lines were drawn by eye perpendicular to the ruffling edge of the cell at ~2-µm intervals. The image of each focal plane for each time frame was scanned along this line (Felder, 1984), and the point of largest absolute difference from the background intensity level was identified by computer as being the likely point of intersection of the cross-sectional line with the ruffle. The estimated intersection points for each line were inspected by use of a computer program that displayed to the user the image, the point chosen, and the line scanned. Corrections were made when necessary, and were required mostly for the lowest level frame, where other bright or dark spots appeared due to other structures inside the cell. The result was a set of points that when connected identified the intersection of the ruffle with a vertical plane of cross section. The length of the ruffle along this plane of cross section was calculated by summing the lengths of line segments connecting the points of intersection between the ruffle and the plane for consecutive optical sections. The sum was begun at an arbitrary starting point inside the cell on the cross-sectional line within the "lowest" focal plane (so the offset of the values was arbitrary). Pictorially, the lengths of line segments of data similar to that plotted in Fig. 5 were summed. Since the resolution in the dorsal-ventral direction was 1 µm, the uncertainty in length measurements was high. This is estimated to be 0.7 µm for nearly vertical ruffles, and 1.5 µm for ruffles bent upwards at an angle of 30°. The defined lengths for the first six time frames (50 s) after the ruffle first was observed to lift from the substrate were fit to straight lines by linear regression. The fitted slopes for four to six vertical sections were averaged, yielding the estimated velocity of growth and its standard deviation. The correlation coefficients were >0.8, and the total increase in length during the 50-s period averaged 4 µm, and hence exceeded the estimated uncertainty of the measurement by three- to sixfold. The rate at which the base of the cell retracted after the ruffle had lifted was calculated by fitting the length of the cell within the "lowest" (nearest the substrate) optical section in the same way. For both calculations of rates, the frame at which the extending tip first appeared in the second optical section was the first time frame used. This eliminated possible errors in comparing the length of the ruffle before and after lifting, and removed the rapid decrease in extension of the base of the cell that was due to the rapid bending upward of the extended lamellipodium that usually occurred at a point 2-4 µm behind the extending tip.
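The length-summation and regression procedure just described translates directly into code. A minimal Python sketch, assuming the ruffle/plane intersection points are already stored as an array (the container layout and names are hypothetical):

import numpy as np

def ruffle_growth_rate(sections, dt, n_fit=6):
    # sections: (n_frames, n_planes, 2) array; sections[t, k] holds the
    #           (horizontal, vertical) intersection of the ruffle with the
    #           cross-sectional plane at focal level k in time frame t, in µm.
    # dt: seconds per time frame; n_fit: frames fitted (six frames ~ 50 s).
    # Cross-sectional length = sum of segment lengths between consecutive planes.
    seg = np.diff(sections, axis=1)
    lengths = np.sqrt((seg ** 2).sum(axis=2)).sum(axis=1)
    t = np.arange(n_fit) * dt
    slope, intercept = np.polyfit(t, lengths[:n_fit], 1)  # least-squares line
    return slope * 60.0  # growth velocity in µm/min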
To reconstruct the shapes of the ruffles in cross section, intersection points between ruffle and vertical planes for consecutive optical sections were connected by straight lines. First, however, the intersection points were corrected for time lags between the collection of the lower and upper images within a time frame. We assumed that the ruffle moved steadily and parallel to the cross-sectional line between time frames. The corrected position of the ruffle at each focal plane level was calculated by linear interpolation, from the preceding time frame to the current time frame, in accord with the respective time delay. Quartz Fishpole Probes Quartz fishpole probes were produced following the specifications for constructing balances capable of weighing samples of 1-10 ng (Lowry and Passonneau, 1972). Quartz fibers of 3-mm diameter were flame-blown to lengths of 5-10 cm with diameters ranging from 0.3 to 1.5 µm. One fiber estimated to be 0.5 µm in diameter was selected for use. The fiber was secured at one end to a Pasteur pipette and was cut to a free length of 2.94 ± 0.06 mm. The sensitivity (resistance to bending) of this fiber was calibrated by hanging pieces of individual freeze-dried muscle tissue on the tip of the horizontal quartz fiber, and measuring the deflection of the tip, which ranged from 0.3 to 0.8 mm, with a 70× dissecting microscope. The tissue samples ranged from 1.5 to 4 ng and were weighed on a previously calibrated fishpole balance as described (Lowry and Passonneau, 1972). The sensitivity was found to be 3.73 ± 0.18 (SEM) µdyn/mm. Two lengths were cut from this fiber and mounted with epoxy onto the tips of glass micropipettes (see Fig. 9 [Appendix I]). The free lengths of the fibers were 282 and 421 µm. During the experiment, the position of the micropipette, and hence the fishpole probe, was controlled by a micromanipulator (Narishige Scientific Instrument Laboratory, Tokyo, Japan). Collection of Force Data The quartz fishpole probe was introduced into the dish at an angle of ~15° from horizontal, and cells and probe were located and manipulated into place under low power by moving the sliding microscope stage and the micromanipulator. The entire length of the quartz fiber was submerged. Video frame collection was run continuously to obtain 350 frames at 1-s intervals while a cell was being probed. A ruffle or microspike was brought into focus. The probe tip was then slowly brought into contact with the ruffle 7-10 µm above the coverslip by use of the joystick control of horizontal and vertical positioning of the micromanipulator. Then the probe was moved horizontally, perpendicular to its long axis, at a rate of 5-10 µm/s against the ruffle by use of the dial control of one horizontal dimension, also available on the micromanipulator. This ensured as well as possible that the only movement of the fishpole was perpendicular to its length. Part of the dispersion of measured deformation forces might be due to the exertion of force by the probe in a direction not entirely normal to the probe length. The moored end of the probe was moved 10 to 30 µm during exertion of force, and so components of the motion not orthogonal to the probe length should be very small. Further, movement of the probe tip due to cell motion, being much smaller than the amount by which the moored end was moved, had little effect on the amount of force exerted.
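The Analysis of Force section that follows converts measured tip angles to forces. A minimal Python sketch of that conversion, using the cantilever calibration relation derived in Appendix I; the function name and the explicit unit-conversion factor are our own assumptions:

import math

def fishpole_force(theta_rad, segment_um, stock_sens=3.73, stock_len_mm=2.94):
    # Force (µdyn) on a fiber segment of length segment_um (µm) cut from a
    # stock fiber of length stock_len_mm (mm) whose calibrated sensitivity is
    # stock_sens (µdyn per mm of tip deflection). The 1e6 factor reconciles
    # the mm**3 / µm**2 units so that the coefficient comes out in µdyn.
    coef = 2.0 * stock_sens * stock_len_mm ** 3 * 1.0e6 / (3.0 * segment_um ** 2)
    return coef * math.sin(theta_rad)

# The 282- and 421-µm segments give coefficients of ~795 and ~357 µdyn,
# matching the reduced equations quoted in the Analysis of Force section.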
Analysis of Force During a measurement, the fishpole probe was held fixed at one end while the force of its interaction with the ruffle was perpendicular to the axis of the probe at its other end. The equation that relates the force on the probe to the magnitude of its resulting small deflection is derived in Appendix I. The applied force f (in µdyn) at the fishpole tip is related to the angular deflection, θ, at the tip for a short fiber segment of length l (in micrometers) cut from the calibrated stock fiber of length L (in millimeters) by f = (2SL³/3l²)·sinθ, where S is the calibrated sensitivity of the stock fiber (in µdyn/mm; with L in mm and l in µm, a factor of 10⁶ converts the units so that f is in µdyn). For the two segments used in these measurements this equation reduced to: f = (795 ± 71)·sinθ µdyn (for the 282-µm segment); and f = (357 ± 32)·sinθ µdyn (for the 421-µm segment). The uncertainties in the calibration of f are due to uncertainties in length measurements and in the calibration of the stock fiber. The angle that the tip of the fishpole probe made with respect to the video reference frame (coordinates fixed relative to the video camera) for each digitized video image was calculated as follows. The coordinates in the video reference frame of five points estimated to lie along the center of the beam and to range from 0 to 20 µm from the tip of the beam were identified by using the zoom and pan cursor feature of the Grinnell video frame buffer. These points were fit by linear regression to straight lines. In all experiments, the correlation coefficient was good (>0.97 or <−0.97). A consecutive sequence of image frames was chosen for analysis when the first frame showed the probe to be apparently touching the cell but with no force exerted on the probe detectable in the angle measurement. Then, the next one or more frames captured time points at which the probe pushed against the ruffle. The force exerted on the probe in each frame of a sequence was calculated as explained above from the difference in tip angle for that frame minus the tip angle for the first (zero-force) frame. The deformation of the cell structure was measured by the amount of movement of the probe tip perpendicular to its unstressed long axis. The accuracy of the difference angle measurements was tested by recording video frames of the fishpole probe in position above and out of contact with the cells (unstressed) intermittently for 350 s. Short sequential segments of this data record were analyzed to yield the amount of angle variation. The average difference between the angle at one time point and the angle at a time point 1-10 s later was 0.0052 ± 0.0053 radians for the longer (more sensitive) fishpole probe. This uncertainty in the measurement of the difference angle translates to an uncertainty in the measurement of the force of 4.1 µdyn. The forces measured in experiments ranged between 4 and 60 µdyn, with an average of 34 µdyn, yielding an average signal-to-noise ratio of 9. The accuracy of the measurement of deformation, i.e., the extent of deflection of the probe tip, was estimated to be ±1 pixel, or ±0.2 µm. Accounting for estimated error in both force and displacement measurements, a force measurement of 35 µdyn with a displacement of 1.0 µm would have an expected uncertainty of 9 µdyn/µm. It was not possible to measure repeatedly the force of deformation of one ruffle to determine directly the uncertainty of the measurement because the ruffles change shape over time. Nevertheless, from an examination of the apparent linearity of the measurement seen when two or three measurements were performed on the same ruffle (see Discussion or Fig. 3), the uncertainty of the measurement must have remained in this range.
The sensitivity of the probe for this experimental design could be adjusted to range down to 1 µdyn and up to many mdyn. The limit in sensitivity results from the limit of making (and working with) quartz fibers thinner than ~0.2 µm, as well as from the fact that the amount of sway in the fiber due to fluid motions of the culture medium prohibits 0.2-µm-diam fibers longer than ~150 µm. A fiber sensitive to 1 µdyn/µm of displacement, with an expected uncertainty in force measurements of 0.4 µdyn, would have to be 0.2 µm in diameter and 120 µm long. This thinner fiber would be more difficult to calibrate, however. Two-dimensional Displacement Data Displacement data records were developed for 18 cells. These records consisted of the measured distance outward of the leading, active edges of the cells as a function both of position along the edge and of time. To determine whether lamellipodial extensions showed a tendency toward oscillatory behavior, we calculated the average power spectrum for the displacement data records (Fig. 1). The shape of the average power spectrum, shown in Fig. 1 D, is consistent with a random, band-limited process, with power damped to half maximal at a frequency of 0.025 Hz. No significant secondary peaks were seen. Hence, although these processes may appear to occur at regular intervals (Abercrombie et al., 1970a), no oscillatory behavior was detected. Rate and Time Course of Extension The characteristics of larger lamellipodial extensions were determined by analyzing edge displacement data selected by two different sets of criteria. The first selected edge regions at least 1 µm wide that extended at least 2.5 µm over a period of at least 15 s (Fig. 2 A). Alternatively, ruffling extensions were selected by visual inspection of the time lapse digital images. For 12 cells, all large ruffling events seen during data collection were analyzed and yielded 20 displacement records (Fig. 2 B). Data selected by the former, objective criteria did not differ from those selected by the latter, subjective criteria. The behavior of the individual lamellar extensions shown in Fig. 2 (A and B) is summarized in Fig. 2 C, which presents a histogram of the rates of edge extension. These values agree well with those found earlier for fibroblasts (Abercrombie et al., 1970a) and epithelial cells (Dipasquale, 1975) and for the extension of filopodia from nerve growth cones (Argiro et al., 1985). Altogether 71 periods of extension were catalogued. All these events occurred smoothly and steadily, as suggested previously by time lapse films. [Figure 3 legend: Raw optical sectioning image data. The digital image data for each focal plane for one time frame of data was photographed from the TV monitor. Pictured is a 30 × 30-µm² section of a CHF cell showing the leading lamella. The image in A was taken with the focal plane coincident with the base of the CHF cell. For each of the following images (B-H), the focal plane was moved up (dorsally) 1 µm. Images were taken at 1.0-s intervals. The arrow in D shows a region in which the ruffle extended parallel to the direction of optical shear, and hence where little image contrast is seen. In H are plotted the lines that define the cross-sectional planes that were used in defining the shapes of the ruffles in cross section (Fig. 5). The dark line that follows the center of the ruffling ridge in H represents the ridge edge contour used for three-dimensional reconstruction. A perspective drawing of outlined ridges defined for these optical sections is presented, after turning counterclockwise by 90°, in I. The bar in H represents 10 µm.]
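The oscillation test described above (the average power spectrum of the displacement records) can be sketched in a few lines of Python; this is a generic periodogram under our own conventions, not necessarily the authors' exact estimator:

import numpy as np

def average_power_spectrum(displacement, dt):
    # displacement: (n_frames, n_rays) array from the radial-line analysis.
    # dt: sampling interval in seconds (3.0 or 6.0 s in these experiments).
    # A clear secondary peak would indicate periodic extension/retraction;
    # a featureless, band-limited spectrum indicates a random process.
    x = displacement - displacement.mean(axis=0)
    spectra = np.abs(np.fft.rfft(x, axis=0)) ** 2
    freqs = np.fft.rfftfreq(x.shape[0], d=dt)
    return freqs, spectra.mean(axis=1)  # average over radial positions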
The extension data in Fig. 2 B were fitted by least squares to straight lines that have been included in the figure. For all of these fits, the correlation coefficients were >0.97. When these data were fitted to a second-order polynomial, the reduced chi-squared values for 15 of the 22 curves were not significantly decreased. For the seven curves that were improved, there was no preference for the second derivative of the fitted polynomial to be either positive or negative. Hence, to the resolution of the technique, which we estimate to be 0.2 µm, lamellipodia extend smoothly and monotonically with constant velocity, beginning and ending with rapid acceleration and deceleration. Optical Sectioning Data The raw optical sectioning image data for one time frame for one cell are presented in Fig. 3. Pictured here are 30 × 30-µm sections of the cell that include the leading lamella. In Fig. 3 A, the focal plane is set to view the base of the cell. Succeeding images were recorded at 1.0-s intervals with the focus plane elevated 1 µm for each interval. The long, U-shaped ruffling ridge for this cell is clearly visible. Note that there is little crossover from section to section. Each image displays a separate view of the ruffle with little interference from parts of the cell that are out of focus. This is due to the high numerical aperture of the objective and condenser (yielding a depth of field of 0.2 µm), and to the nature of Nomarski DIC optics (Allen et al., 1969), which greatly reduces the contribution of objects outside the focal plane region to picture contrast (cf. Agard, 1984). After identification of the cell edge in semiautomatic fashion, cell outlines could be put together in perspective to reconstruct the shapes of the ruffles. This kind of reconstruction is shown in Fig. 3 I, for the set of optical sections of Fig. 3 (turned 90°). Lengths of the Ruffles in Cross Section The cross-sectional lengths of the ruffles were analyzed as a function of time (Fig. 4, solid lines) for three cells observed at 37°C (Fig. 4, A-C) and for one cell observed at 25°C (Fig. 4 D). These were calculated from the raw cross-sectional data. The lengths of the extension of the cell in the lowest, most ventral section (within ~0.3 µm of the substrate) are plotted in this figure as the dotted lines. The average rates of extension after the ruffles were first seen to lift from the substrate are presented in Table I. After the extending lamellipodia lifted upward to form ruffles, the outer extensions of the cells near the substrate plane (within the lowest optical section) moved toward the cell centers (Fig. 4, dotted lines). The average rates of these retractions of the bases of the cells after ruffle lifting are also presented in Table I. (See Materials and Methods for a description of these calculations.) Although the uncertainty in the assessment of the lengths of the ruffles was high, the dispersion of measured rates of extension was not higher than that of the measured rates of base retraction. Further, the amount of increase in length of the ruffles after lifting was 4-5 µm, much higher than the 0.7-1.5 µm estimated uncertainty of individual measurements.
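Summaries such as those in Table I (mean ± SD of fitted rates across ruffles) follow directly from the per-ruffle fits. A small Python sketch with a hypothetical input container:

import numpy as np

def rate_summary(length_records, dt, n_fit=6):
    # length_records: list of 1-D arrays of cross-sectional length (µm) vs.
    #                 time, one per ruffle. dt: seconds per time frame.
    rates = []
    for lengths in length_records:
        t = np.arange(n_fit) * dt
        rates.append(np.polyfit(t, lengths[:n_fit], 1)[0] * 60.0)  # µm/min
    rates = np.asarray(rates)
    return rates.mean(), rates.std(ddof=1), rates.size  # mean, SD, n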
We conclude that after a lamellipodium has bent and lifted up away from the substrate, its length increases at an average rate that is nearly the same as its rate of extension when it was parallel to the substratum (Fig. 2 C). Further, we conclude that the leading portion of the cell, which remains near the substrate after the ruffle has lifted, is retracted toward the cell center at a slower rate. [Table I legend (fragment): ... Fig. 2 B) and ruffle extension and base retraction after lifting (data of Fig. 4). All in µm/min. Standard deviation, with the number of measurements, each on a different ruffle, shown in parentheses. § Standard deviation, with total number of measurements (six from each of the cells A-C).] Cross-sectional Shapes These drawings (Fig. 5) show that the lifted ruffles are not uniformly curved but remain very nearly straight over most of their length and bend sharply about hinge points near their attachment to the substratum. At times long after the ruffle was raised, however, the shape was sometimes crinkled. The ruffles observed at 25°C were somewhat smaller than those observed at 37°C and did not lift or swing as quickly. The reduction in the speed of lamellar elevation permitted improved analysis of the shape changes of the ruffle because of effectively improved time resolution. The data support the conclusions reached from the 37°C data. Clearly, the extending lamellipodia continued to lengthen as they bent upward. The base of the one cell examined quantitatively (Fig. 4 D) did not move detectably toward the cell center, however. Again, similar to the 37°C data, lifted ruffles were straight in cross section. Deformation of Ruffles and Their Recovery The fact that ruffles remain largely straight in cross section after lifting suggests they are stiff structures. We, then, used quartz fishpole probes to exert force on ruffles and to measure the force with which they resisted deformation. The resisting force exerted by the ruffle was determined by measuring the bending of the probe. Only the deformability of large ruffles, which formed smooth, long (up to 10 µm) ridges parallel to the edges of the cells, was tested. These were contacted as near the tips of the ruffles as possible, and were pushed perpendicular to their long axis either away from or toward the cell center. Since these ruffling ridges generally spontaneously fold back toward the center of the cell (Abercrombie et al., 1970b), the push out was against and the push in was with the normal direction of motion for these structures. Fig. 6 presents an example of a ruffle first pushed inward and then outward. Notice that the shape of the ruffle did not change very much; the ruffle bent as a unit. This was found to be true for all deformations made. The ruffling ridge generally bent about an axis located approximately at the base of the ruffle and parallel to the substratum and the edge of the cell. In most experiments, the deformed ruffles recovered their shape very rapidly after removal of the fishpole probe. As the probe slipped off the ruffles at the end of a sweep, the ruffles sprang back most of the way to their prestressed positions in about one-fourth of a second, as determined from the time-lapse video recordings. This is displayed in video frames 18 and 25 of Fig. 6. This rapid recovery was observed for both inward and outward deformations. The dotted lines drawn in these frames represent the shape of the ruffle in the previous frames, 17 and 24, respectively. Since deformations ranged up to 5 µm, this recovery involved a very fast motion.
The speed and extent of shape recovery suggest that it was dominated by elastic rather than viscous forces. In a few instances, ruffles did not recover from very large deformations. For all structures, however, ruffle retraction, as observed by time-lapse recording (data not shown), continued even after several deformations. This was true even for two ruffles that were pushed away from the cell center and flattened onto the substrate. These did not recover their shape rapidly but, after slight delays, bent upwards again and resumed the ruffling activity. Linearity of Force Versus Displacement for Ruffle Deformations To compare different measurements, we must know the dependence of the force on the amount of deformation. [Figure 6 legend: A ruffle pushed in and then out. One ruffle on a BC3H1 cell was pushed in toward the center of the cell in video frames 16 and 17, and then released before frame 18 to spring back outward. 5 s later the same ruffle was pushed outward away from the center of the cell in video frames 23 and 24, and was released before frame 25. The dotted lines in frames 18 and 25 represent the shapes of the ruffle in the previous frames under stress, frames 17 and 24, respectively. Frames were recorded at 1-s intervals. Bar, 5 µm.] [Figure 7 legend (fragment): ... that were pushed out, and closed circles represent ruffles pushed in. Similar data were obtained for the BC3H1 cells (not shown). Generally force scaled linearly with deformation, except for smaller deformations where signal to noise was low. In Fig. 7 B, the distributions of measured forces of resistance to deformation for both CHF and BC3H1 cells are plotted. The fraction of total trials measured within each range of force per unit of deformation is plotted as a function of the force per unit of deformation. Dotted lines represent the distribution for ruffles pushed outward (n = 26), and solid lines represent that for ruffles pushed inward (n = 21).] The data for all ruffles tested on CHF cells for which successive force/displacement measurements (in successive frames) were recorded are plotted in Fig. 7. Measurements for individual ruffles are connected by lines. To good approximation, the resisting force was a linear function of the size of the deformation and extrapolated back to near zero force at zero displacement. This relation, however, did not always hold (see examples in Fig. 7), most likely as a result of the low signal-to-noise ratio. This held equally well for ruffles pushed out (away from the center of the cell) and for ruffles pushed in. Hence measurements can be characterized and compared in terms of the average force per unit of deformation measured. Forces of Deformation for Ruffles The values obtained for the forces of deformation of ruffles in both BC3H1 and CHF cells are presented in Table II. The large number of measurements, ranging from 10 to 14 for each cell and direction, was taken because of the wide spread in the data. Listed measurements are the average force per unit of deformation measured for each ruffle. Most ruffles were measured two or three times, either in successive frames as presented in Fig. 7 or with multiple sweeps, for which the resting shape was different. The ruffles for both cell types resisted deformations with a force of 15 to 30 µdyn/µm of deformation. It appeared that the ruffles resisted deformation with greater force when pushed out (against their normal direction of motion) than when pushed in.
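Because the stiffness distributions turn out to be skewed (see the pooled histogram discussed below), a simple t test is inappropriate; a rank-based comparison such as the Mann-Whitney U test is one alternative. This is our suggestion, not a method applied in the paper; a minimal sketch:

import numpy as np
from scipy.stats import mannwhitneyu  # requires SciPy

def compare_stiffness(out_forces, in_forces):
    # out_forces, in_forces: per-ruffle average force per unit deformation
    # (µdyn/µm) for outward and inward pushes.
    u, p = mannwhitneyu(out_forces, in_forces, alternative="greater")
    return u, p  # a small p would support greater outward stiffness

# e.g. compare_stiffness(np.array([30., 25., 40.]), np.array([15., 20., 18.]))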
For BC3H1 cells, the measured average force resistant to an outward deformation was twofold greater than that for an inward deformation. For CHF cells, this ratio was ~1.3. A histogram of the force measurements in Fig. 8 characterizes the difference between outward and inward resistance. To facilitate comparison, data from the two cell types have been pooled. There is a non-Gaussian, skewed distribution for these measurements, and hence a simple t test cannot be used to decide whether the difference is significant. Some of the difference in deformability for outward and inward pushes may have been due to the natural centripetal movement of the ruffles, which could range up to 0.2-0.4 µm during a measurement. This motion would cause us to overestimate the amount of deformation of the ruffle caused by the probe itself when pushed in, and underestimate it when pushed out. Since deformations made were usually between 1 and 2 µm (see Fig. 7), the forces of deformation could have been inaccurate by 10-30% in the two different directions. Strictly speaking, since ruffles deformed largely at their bases, the measurements of deformability would better have been made as torques and angular displacements. In fact, pushing on ruffles at different distances from their bases may have contributed to the relatively large dispersion in the measurements. Unfortunately, our experimental procedure did not provide for a direct measure of the distance between the point of application of force and the hinge about which the ruffle bent. We estimate that the force was applied to a ruffle 7-10 µm from its base and that the direction of force was mainly perpendicular to the plane of the lamella. The average value of 20 µdyn/µm of displacement converts to an estimated average angular deformability (torque per unit of angular deformation) of 1.4 µm·mdyn/radian. Traction on the Leading Lamella The measurement of ruffle deformability provides information about the structural strength of the ruffle, but not about the force that can be generated by the ruffle or by the leading lamellas of these cells. On an active cell, particles picked up at the leading edge (where the ruffle occurs) are transported back along its dorsal and ventral surfaces across the broad, thin leading lamella to a region anterior to the nucleus (Abercrombie et al., 1970c; Dembo and Harris, 1981; Sheetz et al., 1989), suggesting that centripetal forces are being generated. We observed centripetal movement of our quartz fishpole probes resting on the leading lamellas, and have attempted to measure the centripetal force that drives this motion. The tip of the fishpole probe was brought down to touch the cell surface, and the pipette holding the moored end of the fiber was lowered another 5-10 µm with no noticeable change in the position of the tip. This translates to the probe pushing down on the cell with a force of roughly 15 µdyn. Since spread cells resist indentation with a force in the range of 1 mdyn/µm for a probe of roughly the same dimensions (Petersen et al., 1982), this downward force would be expected to produce a deformation on the order of 0.01 µm into the surface of the cells. The fishpole probe was moved perpendicular to its long axis away from the cell center during the measurements in an attempt to counterbalance the force inward exerted on the probe by the cell and thereby obtain approximately a static force measurement. The force exerted outward by the probe and the movement inward of the probe tip are plotted in Fig. 9.
For the measurement of the BC3H1 cells, the outward force of the probe was maintained at 5 ± 1 µdyn for most of the 14 s of the experiment. The cell overcame this nearly steady outward force and pulled the probe tip toward its center with an average velocity of 1.5 µm/min. If the tip were sliding to some extent along the surface of the cell during the experiment, this would be an underestimate of the rate of motion of the cell surface. For the CHF cell, the force exerted against the inward movement by the fishpole probe was gradually increased over the 10 s of the experiment, and reached a similar value of 5 µdyn. The probe tip was pulled in at a rate of ~4 µm/min. The probe came into contact with the base of a ruffle that was retracting inward at about the fifth second of the experiment. For both trials, the force of bending of the probe was not sufficient to stop the progress of the tip completely. [Figure 9 legend: Geometry of force measurement. The fishpole probe consists of a quartz fiber (f) glued onto the broken end of a glass micropipette (p) with a small drop of epoxy. The probe is drawn in two positions. In the upper position the probe extends along the Z axis, just touching an object (o) without exerting force on it. The origin of the Z axis has been placed at the fastened tip of the quartz fiber, and the fiber has a length L. In the lower drawing, the micropipette has been moved down a distance, −X, and the tip of the probe has deformed the object by an amount, D. The probe has been bent so that the tip of the probe forms an angle, θ, with the Z axis. The angle, θ, is measured, and the amount of displacement of the quartz fiber's tip, X−D, and hence the force, F, on the tip are calculated as described in Appendix I.] Hence, the leading lamella can move the probe inward with a force at least as great as that which we have measured, but this may not be the maximal force that can be generated. Attempts to resist motion with larger force, however, resulted in the probe sliding off the cells. Discussion We have studied the extension and retraction of the leading edge of locomoting fibroblasts by high-resolution light microscopy and analysis of digitized video images. We have demonstrated that lamellar extensions begin randomly in time and occur with constant velocity of extension both before and after the lamellas bend upward to form ruffles. Further, we have shown that the ruffling lamellas are straight in cross section, quite stiff and elastic, and appear to be subject to relatively large retractive forces. Our observations allow us to draw conclusions about the mechanisms and mechanics of these processes. Extension of the Lamellipodium We have observed that the leading edge of a locomoting fibroblast extends as a broad, flat sheet or lamellipodium at rates of 2-7 µm/min, in agreement with previous estimates of from 2 to 8 µm/min (Abercrombie et al., 1970a; Chen, 1979). Most of the active lamella extends with the same velocity and to the same extent. Most informative is the observation that the velocity of extension remains constant during most of its duration within the resolution of our measurements and that the extension reaches a maximal velocity quickly; i.e., the period of acceleration to constant velocity is very brief.
Viewed a priori, the rate of lamellar extension could be determined by a balance between the forces driving the extension and those resisting it, or, if the resisting forces are negligibly small, by the rate of application of the extending forces. If lamellar extension were limited by a balance between driving and resisting forces, its rate could depend on the rate of application of the former and the dependence of the latter on the size and shape of the lamellipodium. A simple linear viscoelastic model predicts an exponential time course for lamellar extension and so is contradicted by the experimental observations. This model, although rudimentary, helps to clarify the various contributions to the force balance and is discussed in greater detail in Appendix II. As another example, the time course for acrosomal extension (Tilney and Inoue, 1982) has recently been explained in terms of a changing balance between viscoelastic forces which resist acrosomal extension and osmotic forces which drive it (Oster et al., 1982; Tilney and Inoue, 1985; Oster and Perelson, 1987). This model yields a dependence on t^1/2, in agreement with their experimental observations. A similar model has been suggested to account for lamellipodial extension (Oster and Perelson, 1985). Interestingly, the time course for the extension of filopodia in nerve growth cones, although much slower, qualitatively resembles that of the Thyone acrosomal process (Argiro et al., 1985). Hence, the shape of the process may play a significant role in determining its time course of extension. None of these simple models yields behavior consistent with our results, that extension occurs with constant velocity. Although it is possible to develop force-balance models that are consistent with these observations, this requires assumptions that cannot yet be justified experimentally, such as time- or shape-dependent resistance to extension. Hence, it seems simplest at present to suppose that the constant rate of lamellipodial extension results from the constant rate of application of force, presumably due to the steady operation of the motor that drives it, with little effect of viscoelastic resistance. The proposed constant rate of operation of the driving motor may be controlled by a constant rate of delivery of a limiting material, or by the speed of a rate-limiting biochemical reaction. Force for Lamellar Extension The origin of the force responsible for lamellar extension is unclear. One possibility is an increase of intracellular hydrostatic pressure due to an osmotically driven influx of water (Tilney and Inoue, 1985). Since the hydrostatic pressure would be increased throughout the cell, this hypothesis also requires a mechanism for confining the cellular deformation to the extending lamella (cf. Oster, 1988). In our opinion it is simpler to attribute tentatively the extensional force either to the polymerization of microfilaments in the lamella or to actin-myosin interactions (cf. Smith, 1988; Mitchison and Kirschner, 1988). Clearly, polymerization of proteins can produce sufficient force to deform membranes, as demonstrated by the deformation of erythrocytes by sickle cell hemoglobin (Mozzarelli et al., 1987) and recent observations of the deformation of lipid vesicles by the polymerization of enclosed actin (Cortese et al., 1989).
The observations that rhodamine-labeled filamentous actin does not move out into an extending process (Felder, 1984; Wang, 1985) and that freshly microinjected monomeric rhodamine-labeled actin is preferentially incorporated into the extending lobopodia of amoebae (Taylor et al., 1980) are consistent with this interpretation. The rate of polymerization might be limited by the concentration either of actin monomer or of filament ends or, in a more complex manner, by specific actin-binding proteins or ion fluxes. If these remained effectively constant, the polymerization rate and the consequent rate of application of extensile force would also be constant. If both viscous and elastic resistance to extension were negligible, then the rate of extension would be determined by the unloaded (maximal) rate of polymerization. The rapid diffusion of actin in cytoplasm (Kreis et al., 1982; Felder, 1984) is sufficient to provide a steady-state monomer concentration and therefore a constant rate of polymerization, but actin polymerization in living cells is not well enough characterized to assess this assumption. Alternatively, extension due to interaction of myosin (presumably myosin I; cf. Mitchison and Kirschner, 1988), attached to the lamellar membrane, with actin filaments anchored to a fixed position relative to the substrate, could produce lamellar extension with constant velocity. Recent observations of rapid forward transport of membrane glycoproteins in the lamellas of rapidly translocating fish epidermal keratocytes are consistent with the generation of forward extensile forces (Kucik et al., 1989). Ruffle Elevation As it extends, the lamellipodium begins to bend upward to form a ruffle at a point 2-4 µm behind its advancing tip. The outward extension of the elevating ruffle continues at its initial rate until it has swung past vertical, typically 40-60 s after beginning to bend upward. Hence, the initiation of bending is not correlated with cessation of extension. The bending is confined to a narrow region of high curvature, the "hinge," near the base of the ruffle, leaving most of the ruffle straight in vertical cross section (cf. Abercrombie et al., 1970b). When pushed with a probe, a ruffle similarly bends at its base, resisting the imposed stress with forces of up to and beyond 50 µdyn, and then springs back rapidly after the probe is removed to continue its natural movement. Hence, the ruffles seem to be quite stiff for such thin structures, and largely elastic. That the ruffles remain straight while growing in length after elevating suggests that the shape of the lamellipodium is intrinsic to its structure and not merely a consequence of its development in juxtaposition to the flat surface of the substratum. The observed stiffness indicates that the material properties are determined by cytoskeletal structures rather than by the membrane. The ability of phospholipid membranes to sustain shear or bending strain is far too small to resist deformations with the forces that we have measured (Evans and Hochmuth, 1978). The stiffness and elasticity of fibroblast lamellipodia is reminiscent of the similar characteristics of neutrophil pseudopodia observed by micropipette aspiration (Schmid-Schoenbein et al., 1982). Force for Ruffle Bending The origin of the force that bends a lamella upward to form a ruffle is unknown. We assume that to bend the ruffles naturally, the cell must exert a torque similar in magnitude to the torque we have exerted in bending the ruffles with our quartz probe.
(This may not be true, however. For example, the ruffle could be bent in only small increments requiring much less force, with rapidly coupled bending and cytoskeletal remodelling to relieve stress.) Because of the magnitude of the force required, we favor actin-myosin interactions exerted through the actin filaments within the lamella. To bend the lamella requires the exertion of a torque about the hinge axis and therefore a component of force acting on the lamella perpendicular to this axis, exerted some distance from the axis and normal to the plane of the lamella. The geometry of the extended lamella is highly inefficient for this function. The lamellas are very thin, and so the filaments must be largely parallel to the lamellar plane. Further, they may exert force on the whole surface of the lamella. Thus, only a small component of the force is applied perpendicular to the lamellar plane and is exerted with an effective radius substantially less than the full height of the ruffle. In our measurements, however, the probe was applied to the tip of the ruffle, and the full force was applied perpendicular to the lamellar plane. Hence, we estimate that the natural retraction force exerted on the lamellipodium is substantially greater (perhaps >10-fold) than the force with which we were able to bend ruffles. Surface Traction Forces and Implications for Cell Locomotion The measurements discussed above place constraints on models both of lamellar motion and of the generation of force within lamellas. Ultimately, however, we want to know how these motions and forces are related to cell locomotion. Without a more complete description of the geometry, the origin of driving forces, and the nature of forces resisting cell motions, interpretations remain speculative. To provide a working hypothesis, and in agreement with suggestions of others (Abercrombie et al., 1977), we suppose that the forces which create and move ruffles are the same forces that drive cell movement. In this view, the leading lamella is extended to develop more forward contacts with the cell's surroundings. As the lamella extends, fairly strong retractive forces are exerted, possibly by interaction of myosin and actin filaments. The retractive forces either pull the lamella rearward and bend it upward (for cells attached on one side to a flat surface), or, if the lamella is more firmly attached to the substrate, draw the rearward portions of the cell forward to prepare for a further cycle of lamellar extension (cf. Chen, 1979, 1981). From this perspective, the ruffle deformability measurements discussed above should relate directly to the retractile forces that pull the cell forward. Our direct measurements of surface retraction forces yield values that approximately agree with the forces which resist ruffle bending. This supports the contention that the same forces are responsible both for ruffle elevation and for forward cellular retraction during locomotion. The dynamic, retractive force is ~5 µdyn/µm² of probe surface in contact with the cell surface. This is two orders of magnitude greater than the force estimated to be required to transport particles rearward along the leading lamella of CHF cells and mouse peritoneal macrophages (Dembo and Harris, 1981). This discrepancy presumably indicates that the rate of particle transport is determined by the rate of the motor that drives transport and is not influenced by the small viscous resistance to the particle motion.
The measured force of surface retraction is in the range of pressures required for elastic shear (~10³ dyn/cm²) but is 100-fold less than that required for elastic stretching (~10⁵ dyn/cm²) of erythrocyte membranes (Evans and Skalak, 1980). The retractive force is also considerably less than the forces developed in stress fibers spanning focal contacts (Izzard and Lochner, 1980). This static tensile force has been estimated to be on the order of 10 mdyn for a fibroblast with a leading edge 10 µm in length (Harris et al., 1980). In contrast, cells which crawl more rapidly, like polymorphonuclear leukocytes and macrophages, do not develop stress fibers or focal contacts (Oliver et al., 1978; Painter et al., 1981). For these cells, a retractive force of 5 to 50 µdyn exerted near the leading edge may be enough to pull the cells forward. The drag forces acting on these cells, as well as the retractive force produced at their leading edges, however, remain to be measured. In summary, our measurements of the rates of lamellar extension and the shapes and deformability of ruffles suggest that the forces driving motion of the leading lamella are substantially larger than the forces that resist motion. Hence, the rates of movement appear to be determined by the rates of application of force. Furthermore, the forces that we have measured that resist lamellar deformation and retract the upper cell surface appear strong enough to be directly involved in forward motion of the cells. Information on the details of filament polymerization and the operation of motor proteins within lamellas, as well as characterization of other relevant forces (the force of lamellar extension and the drag forces resisting motion for rapidly moving cells, for example), will allow a more definitive description of these processes. We are grateful to Dr. Richard Wrenn for many helpful discussions and assistance with video digitization methods and to Michael Scou for the development of programs for collection and handling of digital video data. This work was supported by National Institutes of Health grants GM 30299, GM 27160, and GM 38838, and by a grant from the Edward Mallinckrodt, Jr. Foundation. Received for publication 28 February 1990 and in revised form 4 September 1990. Appendix I. Bending of the Fishpole Probe as a Measure of Applied Force The bending moment of a beam is related to the Young's modulus for the material, E, the moment of inertia of the beam cross section, I, and the radius of curvature, R, by the equation M = EI/R, where R is defined by 1/R = (d²x/dz²)/[1 + (dx/dz)²]^(3/2), which for small curvatures (i.e., where dx/dz << 1) becomes 1/R = d²x/dz². For the static case of a uniform beam held fixed and straight at one end while a force F is applied perpendicular to the long axis of the beam at the other end, the bending moment M is balanced at all points along the beam by the force component M = F(l − z), where l is the length of the beam and z is the distance along the beam. This situation, along with a description of the axes, is pictured in Fig. 9. Substitution yields: d²x/dz² = F(l − z)/EI. The solution to this equation for the boundary conditions already stated (x = 0 at z = 0 and dx/dz = 0 at z = 0) is x(z) = (F/EI)(lz²/2 − z³/6), so that the deflection of the tip is x(l) = Fl³/3EI (Eq. 2) and the slope at the tip satisfies tanθ = Fl²/2EI (Eq. 3), where the angle θ is the difference in direction of the probe tip for stressed and unstressed conditions (as in Fig. 9). Thus, the deflection of the tip of the beam is proportional to the applied force F (Eq. 2) and can be measured by measurement of the tip angle θ (Eq. 3), and the sensitivity of the beam (that is, the amount of tip displacement per unit of applied force) is proportional to l³ (Eq. 2).
Appendix II. Simple Mechanism for Lamellar Extension Balance of Forces. We treat the lamella as the simplest one-dimensional linear viscoelastic solid, a Voigt solid, represented by an elastic element, a spring, in parallel with a viscous element, a dashpot. As the lamella extends, the former generates a force −kx; the latter, a force −η dx/dt, where x is the degree of extension of the lamella from its condition of mechanical equilibrium, and k and η are elasticity and viscosity coefficients, respectively. Then the forces resisting extension are F(t) = −kx − η dx/dt. If we further suppose the sudden application of a constant extensional force, F0, then the lamellar extension should follow the time course x(t) = (F0/k)[1 − exp(−t/τ)], where τ = η/k. This model, and also more complicated one-dimensional linear viscoelastic solid models, thus predict an exponential time course of lamellar extension, inconsistent with our results. A constant rate of extension can be recovered from this model by supposing that the lamella behaves as a viscous fluid with negligible elastic resistance; i.e., η dx/dt >> kx. As indicated in the text, however, evidence from our measurements and those of others suggests that the lamella is more likely to behave as an elastic solid than a viscous fluid. The forces resisting extension might depend on the degree of extension. For example, if the viscous resistance developed uniformly throughout the extending lamella, then η might be proportional to the degree of lamellar extension, η = cx, where c is a constant, as previously supposed for the Thyone acrosomal process (Oster et al., 1982). For this latter situation, −t/τ′ = x(t) + x_eq·log[1 − x(t)/x_eq], where we have supposed that the lamella extends in response to a constant force, F0, and x_eq = F0/k and τ′ = c/k. This model does not predict a constant velocity of lamellar extension. If the elastic resistance were negligible, then the model becomes similar to that developed by Oster et al. (1982) for the Thyone acrosomal reaction. In both cases, x²(t) ∝ t. Although this is true for the acrosomal reaction, this relationship is contradicted for lamellar extension by our experiments. Unloaded Extension Another possibility is that the viscous resistance to extension is very small, as was observed for ruffle bending. Then the lamella should respond instantaneously to the extensional driving force, and so the rate of extension of the lamella would be governed by the rate of application of force. More generally, if F(t) = αt, where α is the rate of application of force, then, for the Voigt model considered above, x(t) = (α/k)[t − τ(1 − exp(−t/τ))]. So, for times long compared to τ, the lamella would extend at a constant rate, dx/dt = α/k. This would be preceded by an exponential approach to the constant asymptotic extension rate. If η, and therefore τ, were small, as expected from our studies of lamellar bending, this transient exponential phase might be too brief to detect. Hence our experimental results are consistent with a simple linear model in which the force is applied at a constant rate.
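The two Appendix II loading scenarios are easy to compare numerically. A short Python sketch with illustrative (not measured) parameter values, showing the saturating exponential under constant force and the constant asymptotic velocity dx/dt = α/k under a constant loading rate:

import numpy as np

def voigt_extension(t, k=1.0, eta=0.2, F0=1.0, alpha=1.0):
    # Voigt element (spring k in parallel with dashpot eta), tau = eta/k.
    # Constant force F0:          x(t) = (F0/k) * (1 - exp(-t/tau))
    # Constant rate F(t)=alpha*t: x(t) = (alpha/k) * (t - tau*(1 - exp(-t/tau)))
    tau = eta / k
    x_const_force = (F0 / k) * (1.0 - np.exp(-t / tau))
    x_const_rate = (alpha / k) * (t - tau * (1.0 - np.exp(-t / tau)))
    return x_const_force, x_const_rate

t = np.linspace(0.0, 5.0, 200)
x_f, x_r = voigt_extension(t)  # x_r becomes linear in t once t >> tau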
2016-05-04T20:20:58.661Z
1990-12-01T00:00:00.000
{ "year": 1990, "sha1": "bd3b6822c3ee5ee71e1fe9fe94d061ec38af1b47", "oa_license": "CCBYNCSA", "oa_url": "https://rupress.org/jcb/article-pdf/111/6/2513/1060831/2513.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "bd3b6822c3ee5ee71e1fe9fe94d061ec38af1b47", "s2fieldsofstudy": [ "Engineering", "Biology" ], "extfieldsofstudy": [ "Mathematics", "Medicine" ] }
222232907
pes2o/s2orc
v3-fos-license
Quantitative analysis by flow cytometry shows that Wt1 is required for development of the proepicardium and epicardium The epicardium is a cell layer found on the external surface of the heart. During development it has an epithelial identity and contains progenitor cells for coronary smooth muscle and cardiac fibroblasts. The epicardium has been suggested to have therapeutic potential in cardiac repair. Study of epicardial development has been difficult because it is dynamic and morphologically complex. We developed a flow cytometry-based method to quantify cardiac development including the epicardial lineage. This provided accurate and sensitive analysis of (1) the emergence of epicardial progenitors within the proepicardium, (2) their transfer to the heart to form the epicardium, and (3) their epithelial-to-mesenchymal transition (EMT) to create the subepicardium. Platelet-derived growth factor alpha (Pdgfra) and Wilms tumor protein (Wt1) have both been reported to be pro-mesenchymal during epicardial EMT. Quantitative analysis with flow cytometry confirmed a pro-mesenchymal role for Pdgfra but not for Wt1. Analysis of Wt1 null embryos showed that they had (1) poor formation of proepicardial villi, (2) reduced transfer of proepicardial cells to the heart, (3) a discontinuous epicardium with poor epithelial identity, and (4) a proportionally excessive number of mesenchymal-like cells. These data show that Wt1 is essential for epicardial formation and maintenance rather than being pro-mesenchymal. INTRODUCTION Epicardial-derived cells have shown a potential therapeutic role in adult cardiac repair, either as progenitor cells for several cardiac cell types or through paracrine mechanisms (Smart and Riley, 2012; Wang et al., 2015). This has generated interest in the mechanisms that guide their emergence and differentiation. The developmental origin of the epicardium (EPI) has been traced back in embryos to the proepicardium (PE), a transient structure lying caudal to the heart (Mikawa and Gourdie, 1996; Viragh and Challice, 1981). PE cells form villi that extend towards the heart, leading to cell transfer onto the myocardial surface via a cellular bridge (Rodgers et al., 2008) and/or release of clusters into the pericardial cavity (Komiyama et al., 1987; Sengbusch et al., 2002). These cells initially cover the myocardium as a monolayer, forming the early EPI. Some epicardial cells then leave this epithelial environment and adopt a mesenchymal fate, i.e. they undergo an epithelial-to-mesenchymal transition (EMT). This mesenchyme initially accumulates underneath the EPI (subepicardial mesenchyme or SEM) prior to invading the myocardium and differentiating into coronary smooth muscle cells and cardiac fibroblasts (Cai et al., 2008; Dettman et al., 1998; Zhou et al., 2008). The transitions from PE to EPI, and from EPI to SEM, are highly dynamic and involve complex 3-dimensional changes. In the mouse this occurs over a 2-3 day period. Markers of the PE, EPI and SEM are limited and by themselves non-specific, at least initially. Some steps, like the accumulation of the SEM under the epicardium, show variability in timing and extent depending upon the location within the heart. For these reasons, analysis of epicardial development in wild-type and mutant animals by 2-dimensional sections is qualitative and subject to significant sampling error. We therefore screened potential cell surface antibodies for expression in PE, EPI and SEM.
Using these markers and flow cytometry we developed a highly sensitive and quantitative method to analyze early epicardial development. Our results with this technique confirm and extend the known role of platelet-derived growth factor alpha (Pdgfra) in epicardial development, and identify multiple novel roles for Wilms tumour 1 (Wt1). RESULTS We searched for combinations of cell surface markers that would allow us to follow epicardial development by flow cytometry in embryos of any genotype. Initially, we used antibodies to the transcription factor WT1 and the Wt1-GFP transgene (Wt1 GFPCre mice (Zhou et al., 2008)) to identify cells of the epicardial lineage during early cardiac development. Marker expression in the proepicardium (PE) At embryonic day 9.5 (E9.5) in the mouse, epicardial progenitors reside in the PE. PE cells were identified at the ventral surface of the septum transversum by expression of WT1 (Fig. 1A). Integrin alpha4 (Itga4) has a critical role in the early epicardium (Sengbusch et al., 2002), and was also expressed in PE cells (Fig. 1B). High levels of ITGA4 and WT1 were confined to the surface of the septum transversum, where PE villi are located. Unfortunately, neither of the two anti-WT1 antibodies we used gave reliable signals by flow cytometry. Mice heterozygous for the Wt1 GFPCre allele (Zhou et al., 2008) were therefore used to identify WT1-expressing cells in the PE with this technique (Fig. 1C). We found that most Wt1-GFP+ cells were ITGA4 high and that all ITGA4 high cells were Wt1-GFP+, consistent with co-expression of WT1 and ITGA4 in PE villi. Quantification of ITGA4 high PE cells by flow cytometry or manual counting of PE villi cells on H&E sections gave similar results (Fig. 1D). Epicardial progenitors began to accumulate in the PE from 16-17 somite pairs and peaked around 24-28 somite pairs. PE cells also expressed PODOPLANIN (a known epicardial marker (Mahtab et al., 2008)), PDGFRA (Chong et al., 2011) and ALCAM (also found throughout the septum transversum (Asahina et al., 2011)) (Fig. S1A-C). Marker expression in the pre-EMT (E10.5) epicardium WT1+ ITGA4+ cells were found on the myocardial surface from E9.5, rapidly covering the heart to create a continuous epicardial monolayer by E10.5 (Fig. 1E). As expected, Wt1-GFP+ ITGA4+ cells were detected in whole E10.5 Wt1 GFPCre/+ hearts by flow cytometry (Fig. 1F). Since Wt1-GFP− ITGA4+ cells were also present in the heart (Fig. 1F), additional markers were necessary to identify EPI in the absence of the Wt1-GFPCre transgene. Early epicardial cells were found to retain expression of PODOPLANIN (Fig. 1G) and ALCAM (Fig. S1E), and the combination of either marker with ITGA4 was unique to the EPI (Fig. 1H,I, Fig. S1F,G). In addition, EPI cells were the only ITGA4+ PECAM− cells in the heart at this stage (Fig. S1H). Marker expression in EPI and SEM during EMT EMT of the EPI results in accumulation of the SEM. The SEM is initially most prominent in the atrioventricular and interventricular grooves (AVG, IVG). We found that the SEM expressed low levels of WT1 and high levels of PDGFRA, compared to the EPI (Fig. 1J,K). Flow cytometry showed that Wt1-GFP+ cells could be subdivided into PDGFRA low/− and PDGFRA high cells, putative EPI and SEM respectively (Fig. 1L,M; illustrated schematically below). Mean Wt1-GFP epifluorescence was only slightly lower in SEM compared to EPI, suggesting that expression of GFP decreased more slowly than that of WT1.
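The EPI/SEM subdivision just described reduces to a one-dimensional split of Wt1-GFP+ events by PDGFRA level. The sketch below is ours, with made-up intensities and an arbitrary cutoff rather than the gates actually drawn on the cytometry data; it only illustrates the idea.

import numpy as np

rng = np.random.default_rng(0)
# Synthetic PDGFRA intensities for Wt1-GFP+ events: a PDGFRA-low (EPI-like)
# component and a PDGFRA-high (SEM-like) component; numbers are invented.
pdgfra = np.concatenate([rng.lognormal(2.0, 0.4, 800),   # EPI-like
                         rng.lognormal(4.0, 0.4, 200)])  # SEM-like

cutoff = 30.0  # placeholder gate between the two modes
sem_fraction = np.mean(pdgfra > cutoff)
print(f"SEM-like fraction of the epicardial lineage: {sem_fraction:.1%}")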
To confirm our identification of PDGFRA low/− and PDGFRA high cells as EPI and SEM respectively, we used the Pdgfra-GFP allele, which was compatible with intracellular flow cytometry, allowing us to look at keratin expression, a hallmark of epithelial (epicardial) tissue. PDGFRA is expressed in the PE (Fig. S1B), then downregulated as cells form the EPI and upregulated in nascent SEM (Fig. 1J,K). Given that the Pdgfra-GFP transgene (and Pdgfra itself) is also expressed in valve mesenchyme and neural crest cells, unlike Wt1-GFP, ventricle preparations were used to ensure that all Pdgfra-GFP+ cells observed were of epicardial origin. Accumulation of SEM was lower than that observed with Wt1-GFP, as the AVG was not represented in ventricle fractions. Two Pdgfra-GFP+ populations could be recognized from E11.5, a Pdgfra-GFP low and a Pdgfra-GFP high population, consistent with EPI and SEM respectively. Differential expression of PDGFRB, ITGA4 and PODOPLANIN between EPI and SEM was also observed using Wt1-GFP (Fig. S1I-K). Combinations of PODOPLANIN with PDGFRA, PDGFRB or ITGA4 were all sufficiently sensitive to separate EPI and SEM within the Wt1-GFP+ population (Fig. 1N,O, Fig. S1L). Monitoring epicardial EMT in wild-type embryos Combinations of PODOPLANIN, PDGFRA, ITGA4 and PDGFRB were able to uniquely identify EPI and SEM in the presence of either the Wt1-GFP or Pdgfra-GFP transgenes. To identify strategies that would allow us to do this without relying on any transgene, we profiled the expression of those markers in various cardiac derivatives. In addition to epicardial development, the PDGFRA/ITGA4/ALCAM or PODOPLANIN/PECAM combination allowed the monitoring of other cardiac events such as valve EMT. Removal of the outflow tract (OFT) and its corresponding cushions permitted quantification of atrioventricular (AV) cushion EMT in the remaining fraction, with no contamination from either OFT mesenchyme or neural crest cells (Fig. S4A,B). AV valve mesenchyme (PECAM low ITGA4 mid) accumulated rapidly between 22-32 somite pairs (Fig. S4C). OFT cushion EMT could also be followed accurately in OFT preparations by using Wnt1Cre to separate neural crest from the valve mesenchyme (Fig. S4D). Quantification of epicardial development in wild-type samples Some neural crest, and to a lesser degree valve mesenchyme, started to contaminate the EPI gate at E11.5 (Fig. S5A-E). Ventricle preparations that exclude all valves and the OFT were used from this stage onwards, as in these the number of either cell type was negligible (Fig. S5F-J). Up to EMT, ALCAM and PODOPLANIN had been interchangeable for identifying the epicardial lineage as a whole; however, the differential PODOPLANIN expression in EPI and SEM made it more attractive than ALCAM (whose levels were less affected, see below) from E11.5. EPI and SEM were therefore quantified in wild-type ventricles using the following strategy. PODOPLANIN+ ITGA4+ cells were gated out of single viable cells (Fig. 2A). Contaminating myocardial cells were then removed using their autofluorescence in the 488 channel (Fig. 2B). The remaining cells were then of the epicardial lineage, and relative levels of PODOPLANIN and PDGFRB were used to separate EPI from SEM (Fig. 2C; a schematic sketch of this hierarchy appears below). In Wt1-GFPCre+ samples, GFP+ cells were selected first on a fluorogold vs GFP plot (Tallquist and Soriano, 2003). The Pdgfra floxed allele is also conditional-null following Cre-mediated recombination. We generated three mutant models: Pdgfra GFP/GFP null embryos, Pdgfra GFP/floxed hypomorphic embryos, and so-called "epicardial-deleted" mutants, Gata5Cre/+ Pdgfra floxed/floxed.
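A minimal sketch of the gating hierarchy just described: select PODOPLANIN+ ITGA4+ events from single viable cells, drop autofluorescent myocardium, then split EPI from SEM on PODOPLANIN versus PDGFRB. This mirrors the logic of Fig. 2A-C only schematically; the column names and cutoffs below are our placeholders, not values from the study.

import numpy as np
import pandas as pd

def gate_epicardial_lineage(df, podo_min=100.0, itga4_min=100.0,
                            auto488_max=50.0, pdgfrb_cut=50.0):
    """Hierarchical gates on per-event fluorescence intensities (arbitrary
    units). All cutoffs are illustrative placeholders."""
    lineage = df[(df["PODOPLANIN"] > podo_min) & (df["ITGA4"] > itga4_min)]  # cf. Fig. 2A
    lineage = lineage[lineage["AUTOFL_488"] < auto488_max]  # drop myocardium, cf. Fig. 2B
    epi = lineage[lineage["PDGFRB"] < pdgfrb_cut]           # PODOPLANIN high, PDGFRB low
    sem = lineage[lineage["PDGFRB"] >= pdgfrb_cut]          # PDGFRB+, cf. Fig. 2C
    return epi, sem

# Synthetic events, just to exercise the gates.
rng = np.random.default_rng(1)
n = 5000
events = pd.DataFrame({
    "PODOPLANIN": rng.lognormal(5, 1, n),
    "ITGA4": rng.lognormal(5, 1, n),
    "PDGFRB": rng.lognormal(3, 1, n),
    "AUTOFL_488": rng.lognormal(3, 1, n),
})
epi, sem = gate_epicardial_lineage(events)
print(len(epi), "EPI-like and", len(sem), "SEM-like events")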
Pdgfra GFP/+ heterozygous mice developed without a gross morphological phenotype, as reported (Hamilton et al., 2003). Pdgfra GFP/floxed hypomorphic embryos were obtained at Mendelian ratios at E12.5. Pdgfra GFP/GFP null embryos had a more severe phenotype, with up to 75% mortality by E10.5, as previously described (Soriano, 1997). Rare "healthy" null embryos could still be obtained at E12.5, before developing fatal cranial hemorrhages at E14. Although a SEM began to accumulate at E11.5 in Pdgfra GFP/GFP null hearts (Fig. 3A,B), SEM was reduced at E12.5 in both hypomorphic and null embryos (2.1- and 2.4-fold less than control heterozygous mice, respectively, p<0.001, Fig. 3C). The average PDGFRA expression per cell in Pdgfra floxed/+, Pdgfra GFP/+, Pdgfra floxed/floxed and Pdgfra GFP/floxed embryos was 69%, 36%, 34% and 20% of wild-type respectively (after subtracting the Pdgfra GFP/GFP background, Fig. 3D), confirming that the floxed allele was hypomorphic, and that Pdgfra GFP/floxed embryos expressed very little PDGFRA. To assess the origin of this defect, early epicardial development was quantified in both Pdgfra GFP/GFP and Pdgfra GFP/floxed embryos. Embryos with morphological cardiac defects were excluded from this analysis. Accumulation of PE and early EPI was normal (Fig. 3G,H). SEM accumulation was reduced from its earliest point of accumulation at E11.5 (Fig. 3I), suggesting a defect in epicardial EMT. Flow cytometry showed normal expression of Pdgfra-GFP, PODOPLANIN, ITGA4 and PDGFRB in mutant EPI (Fig. S6D). However, 3D confocal analysis of whole-mount stained hearts revealed increased WT1 staining in Pdgfra mutant EPI at E11.5 (n=4/4, Fig. 3J,K). Morphological observation, as well as confocal imaging, showed that the Pdgfra GFP/GFP epicardium often appeared "bubbly" from E11.5 onwards (red box, Fig. 3K). Although this could be observed in control hearts as well at E11.5, it was more severe and persistent in the mutants. These experiments confirmed a role for Pdgfra in SEM formation, and suggest that this is at least in part due to decreased SEM proliferation and possibly also due to reduced epicardial EMT. We attempted to delete Pdgfra specifically within the epicardial lineage with the Gata5Cre transgene (Fig. S6G,H). This phenotype was significantly more severe than that of Pdgfra null embryos (Fig. 3F). In addition, Pdgfra GFP/+ Gata5Cre/+ embryos had significantly less SEM than Pdgfra GFP/+ embryos (p=0.0015, Fig. 3C). These data showed that the Gata5Cre transgene itself interfered with epicardial development independently of deletion of a target floxed allele, and is therefore unsuitable for studying epicardial development. In Wt1 null hearts, mesenchymal-like cells were found on the ventricular surface (Fig. 4H). As these mesenchymal (PDGFRA high Wt1-GFP low) cells were located on the surface of the myocardium, and not between EPI and myocardium as in controls, they were termed "SEM-like" cells. In Wt1 GFPCre/GFPCre null mutants, SEM-like cells were increased in proportion within the epicardial lineage, whether in whole hearts or ventricle preparations (Fig. 4I, n=11/11, p<0.0001). Wt1 is required for epithelial identity of the epicardium Decreased EPI and increased SEM-like cells in Wt1 nulls suggested that WT1 has a pro-epithelial role, either by repressing epicardial EMT or by maintaining the epithelial character of the EPI. Since Pdgfra null embryos showed increased WT1 staining and a reduced EMT, we investigated whether lowering WT1 expression in Pdgfra mutants could rescue the EMT defect.
Ventricle samples from healthy embryos between 44-50 somite pairs were analyzed to ensure reliably measurable amounts of SEM and stage matching. Cells of the epicardial lineage were gated as described in Fig. 2, and the levels of PODOPLANIN vs PDGFRB were used to assess the proportions of EPI and SEM. Pdgfra hypomorphic and null embryos had 45% and 43% of wild-type SEM content respectively (p<0.001, Fig. 3I). Deletion of one Wt1 copy in Pdgfra null or hypomorphic embryos resulted in increased SEM content in about half of the embryos, but in the limited number of embryos that we were able to generate (n=6) this did not reach statistical significance (p=0.13, Fig. S7D). However, we also found that Pdgfra GFP/+ heterozygotes, although grossly normal, showed a reduction in SEM content to about 80% of controls (p=0.015, Fig. 4J). Deletion of one Wt1 allele (Pdgfra GFP/+ Wt1 GFPCre/+) was sufficient to restore the number of SEM cells to wild-type levels (p<0.0001, Fig. 4J). Expression of epithelial and mesenchymal markers was assessed in EPI and SEM-like cells in Wt1 mutants. E11.5 EPI cells (Wt1-GFP+ PODOPLANIN high PDGFRB−) from Wt1 nulls showed inappropriately high expression of ITGA4 and ALCAM, and low expression of PODOPLANIN (Fig. 5A-C). This marker pattern resembled that of wild-type PE (Fig. 5A-C), suggesting that differentiation of the EPI was impaired in Wt1 mutants. E11.5 SEM-like cells in Wt1 nulls (Wt1-GFP+ PODOPLANIN low PDGFRB+) showed higher expression of PDGFRB (PE, p=0.032; E10.5 EPI, p=0.0047) and lower expression of PODOPLANIN (PE, p=0.0005; E10.5 EPI, p=0.0031) than PE or the E10.5 EPI (Fig. 5C,D). This showed that the SEM-like cells had matured as mesenchymal cells. Wt1 null SEM-like cells also showed a small but significant increase in apoptosis (Fig. 5E). The results of these experiments are consistent with a pro-epithelial role for WT1, since the absence of Wt1 led to a less differentiated EPI but did not impair mesenchymal differentiation of SEM-like cells. The rescue of SEM cell numbers in Pdgfra heterozygotes also suggests that Wt1 represses epicardial EMT. Wt1 is required for transfer of proepicardial cells to the epicardium The role of WT1 was also examined in the PE. There was no significant change in the total number of PE+EPI cells in Wt1 nulls from E8.8 to E10.5 (Fig. 6A). Consistent with this, Wt1 nulls showed no alteration in the frequency of apoptosis in PE and EPI cells (Fig. 5E), and proliferation of EPI cells was unaffected at E11.5 (Fig. 5F). However, Wt1-GFP+ cell numbers increased in the Wt1 null PE without increasing in the EPI, suggesting that WT1 is required for transfer from the PE to the heart (Fig. 6B,C). Of the PE markers we analyzed, only ALCAM was abnormal, showing increased expression in mutant PE cells (Fig. 5A-D). Expression of ITGA4, a predicted transcriptional target of WT1 (Kirschner et al., 2006), was unaffected in the Wt1 GFPCre/GFPCre null PE (Fig. 5A) but reduced in the E10.5 EPI (Fig. 5A). A significant proportion of EPI cells lost all ITGA4 staining (Fig. 6D). Since Itga4 is required for epicardial attachment to the heart (Sengbusch et al., 2002), this is likely to contribute to the low numbers of Wt1-GFP+ cells from E10.5 on (Fig. 6C,F) and the restoration of ITGA4 expression per EPI cell at E11.5 (Fig. 6A). However, reduced ITGA4 expression did not contribute to poor PE-to-EPI transfer in Wt1 nulls, because Wt1-GFP+ ITGA4 high cells also accumulated in the PE (Fig. 6E,F).
Despite having normal to excessive numbers of Wt1-GFP+ PE cells, Wt1 GFPCre/GFPCre embryos showed a severe reduction in the density and number of PE villi (n=4/4, Fig. 6G,H). DISCUSSION Epicardial development is a complex multistep process beginning with specification of the PE and culminating with differentiation of cardiac fibroblasts and smooth muscle cells. In order to understand the molecular mechanisms involved in these processes we developed methodology to identify, quantify and potentially isolate early epicardial derivatives throughout embryogenesis. We identified unique immunoprofiles which we used to perform a comprehensive and quantitative analysis of early epicardial development. This showed that PE formation, cell transfer to the EPI, formation of the SEM by EMT, and the accompanying marker expression are highly dynamic. In addition, this method showed promise for quantifying other aspects of early cardiogenesis such as endothelium-to-mesenchyme transition (valve mesenchyme formation) and cardiac neural crest migration. We tested this methodology on Pdgfra and Wt1 mutants, which are known to have epicardial phenotypes. Both genes have been reported to be pro-mesenchymal and to promote epicardial EMT. Embryos with less (hypomorphic and heterozygous) or no PDGFRA showed quantitative reductions in SEM. In contrast, we found that Wt1 was required for formation of the PE villi, cell transfer to the heart, epithelial differentiation of the epicardium and epicardial EMT. Wt1−/− ventricles had a single external cell layer, which has previously been interpreted as EPI without a SEM. Hence Wt1, like Pdgfra, was considered essential for epicardial EMT and interpreted as a pro-mesenchymal factor (Martinez-Estrada et al., 2010; von Gise et al., 2011). However, our quantitative analysis of Wt1 null hearts showed that this external layer is composed of SEM-like mesenchymal cells and poorly epithelialized EPI cells. Defective formation of the SEM in Pdgfra heterozygous embryos was rescued when Wt1 gene dosage was halved. Our experiments support previous in vitro reports suggesting that WT1 represses rather than promotes EMT (Bax et al., 2011; Takeichi et al., 2013). Wt1 nulls also had a severe reduction in PE villi, another epithelial structure (Hirose et al., 2006; Manner, 1992; Nahirney et al., 2003). Collectively, our data support a pro-epithelial rather than a pro-mesenchymal role for Wt1. EMT has been the focus of many studies as it is a mechanism widely used in developing embryos and in disease. EMT is tightly regulated, often with some donor epithelium retained while sufficient mesenchyme is produced. This is particularly true of epicardial EMT, which must spare epicardial integrity. Control of epicardial EMT reflects this duality, integrating the activity of epicardial maintenance factors such as WT1 with that of EMT inducers such as the PDGF and TGF-beta pathways (von Gise and Pu, 2012) to generate coronary precursors.
Animal models All experiments were approved by the WEHI Animal Ethics Committee and conducted according to the Prevention of Cruelty to Animals Act 1986 (the Act) and the National Health and Medical Research Council Australian Code of Practice for the Care and Use of Animals for Scientific Purposes, 8th edition (the NHMRC Code). Pdgfra-H2BGFP mice (Hamilton et al., 2003) (referenced here as Pdgfra GFP), maintained on a C57BL/6 background, were bred once to FVB mice to generate F1 (C57BL/6 X FVB) females that were used for embryo collection. Survival and phenotype of null embryos were similar whether on a C57BL/6 or a mixed C57BL/6 X FVB background; however, the latter background was used for most analyses due to its greater litter size. Dissection All embryos were scored for morphological criteria and health, and somite-staged (when possible) under the microscope. Proepicardial regions were dissected and cleaned of sinus venosus and pericardium. Depending on the experiment, the heart, outflow tract or ventricles were dissected. When ventricle tissue was processed, endocardial cushions were removed to avoid contamination by valve tissues and neural crest derivatives. Flow cytometry Dissociation protocols were optimized for each stage using survival of Wt1-GFP+ cells as a read-out. Dissected tissues were incubated in collagenase 2 (Worthington, 50 U/ml) in PBS− (without Ca and Mg) at 37°C for 10-30 min, depending on the stage/thickness of the tissue. For late-stage embryos or adult hearts, tissue was minced before enzymatic dissociation. PBS− + 7% FCS was then added to the mix, followed by an optimized regime of gentle mechanical dissociation with a 1 ml pipetman. Samples were washed, filtered and labelled for flow cytometry. Samples were incubated for 30 min to 1 h with primary or secondary antibodies in PBS− + 7% FCS, followed by 2 x 2 ml washes. Fluorogold (Sigma-Aldrich, 1/300) was used as a viability marker. For intracellular FACS (keratin labeling), samples were fixed for 20 min in Cytofix/Cytoperm (BD Biosciences) on ice before staining. Primary antibodies were incubated overnight in wash buffer (BD Biosciences) at 4°C. Cells were then incubated with secondary antibodies for 4 h on ice. For proliferation studies, cells were fixed in ice-cold 80% ethanol for 45 min and incubated with anti-Ki67 (BD Pharmingen kit) overnight. Cells were then stained with DAPI for 30 min at room temperature. For all antibodies, pilot experiments included isotype controls. For Ki67 staining, the BD Pharmingen kit included an isotype control which was used in all experiments to ensure appropriate gating of Ki67+ cells. All FACS samples were run on a Fortessa (BD Biosciences) and analyzed with FlowJo software. Statistics Statistical analysis (unpaired Student's t-test) was performed in Prism; an equivalent test in Python is sketched at the end of this section. (1/400 for PODOPLANIN), and 1/50 for immunofluorescence. Specificity of staining was ascertained by isotype or secondary-only controls. Immunofluorescence Samples were embedded in OCT following sucrose embedding, without prior fixation. 10 μm sections were cut on a cryostat and stored at −80°C. After defrosting, sections were fixed for 3 min in 1% formalin, blocked for 1 h in PBS− + 10% donkey serum and processed for antibody labeling. Antibody incubations ranged from 1 h at room temperature to overnight at 4°C in PBS− + 10% donkey serum. After antibody staining, slides were washed in PBS− and incubated with DAPI (1/1000) before mounting in Immuno-Fluore (ImmunO). Samples were imaged on a LSM780 and processed with ImageJ or Imaris software. Whole-mount imaging Hearts were dissected free from lung tissue and the outflow tract was sometimes removed to ensure proper mounting. Samples were fixed for 30 min in 2% paraformaldehyde, 0.1% Tween 20 in PBS+ (with Ca/Mg) at room temperature, then washed and blocked for 1 h in whole-mount buffer (WMB = PBS− with 10% FCS + 0.6% Triton X-100).
Overnight incubation with primary antibodies was followed by 6 x 1 h washes in WMB, and incubation with secondary antibodies + DAPI when required. After washes, hearts were cleared overnight on a glycerol gradient (Ferkowicz and Yoder, 2011), mounted between two microscope coverslips and imaged on a LSM780. Imaging data were processed with ImageJ or Imaris software. Histology Embryos were fixed for 2 days in ice-cold 4% paraformaldehyde and processed for paraffin embedding. 7 μm sections were counterstained with hematoxylin and eosin.
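The Statistics paragraph above specifies unpaired Student's t-tests run in Prism; the same comparison can be reproduced in a few lines of Python. The sketch below is ours, and the example values are invented for illustration, not measurements from the paper.

from scipy import stats

# Hypothetical SEM fractions (% of epicardial-lineage events) per embryo.
control = [18.2, 20.1, 17.5, 19.8, 21.0]
mutant = [8.9, 10.4, 7.8, 9.5, 11.2]

# Unpaired two-sample Student's t-test, equal variances assumed.
t, p = stats.ttest_ind(control, mutant, equal_var=True)
print(f"t = {t:.2f}, p = {p:.2g}")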
2020-10-10T13:16:45.410Z
2020-10-07T00:00:00.000
{ "year": 2020, "sha1": "a3d37fec657d2c147b05affa391b49e88fbc7655", "oa_license": "CCBYNC", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2020/10/07/2020.10.06.329151.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "a3d37fec657d2c147b05affa391b49e88fbc7655", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Chemistry" ] }
10417549
pes2o/s2orc
v3-fos-license
Counterselection against Dμ is mediated through immunoglobulin (Ig)α-Igβ. The pre-B cell receptor is a key checkpoint regulator in developing B cells. Early events that are controlled by the pre-B cell receptor include positive selection for cells expressing membrane immunoglobulin μ heavy chains and negative selection against cells expressing truncated immunoglobulins that lack a complete variable region (Dμ). Positive selection is known to be mediated by membrane immunoglobulin μ heavy chains through Igα-Igβ, whereas the mechanism for counterselection against Dμ has not been determined. We have examined the role of the Igα-Igβ signal transducers in counterselection against Dμ using mice that lack Igβ. We found that Dμ expression is not selected against in developing B cells in Igβ mutant mice. Thus, the molecular mechanism for counterselection against Dμ in pre-B cells resembles positive selection in that it requires interaction between mDμ and Igα-Igβ. The object of B lymphocyte development is to produce cells with a diverse group of clonally restricted antigen receptors that are not self-reactive (1). Antigen receptor diversification is achieved through regulated genomic rearrangements that result in the random assembly of Ig gene segments into productive transcription units (2, 3). These gene rearrangements are in large part regulated by the pre-B cell receptor (BCR). B cells undergoing Ig heavy chain gene rearrangements (pre-B) can express at least two types of BCRs. One form of the receptor is composed of membrane immunoglobulin μ heavy chain (mIgμ), λ5, V-pre-B, and Igα-Igβ, and is referred to as the pre-BCR (4-6). A second form of the pre-B cell receptor, known as the Dμ pre-BCR (7), is found only in pre-B1 cells (8) and contains truncated mIgμ chains lacking a VH domain (mDμ). mDμ is produced by Igμ genes that have rearranged DJH gene segments in reading frame (RF) 2, producing an in-frame start codon and a truncated transcription unit (7). Like authentic mIgμ, mDμ is a membrane protein that forms a complex with λ5, V-pre-B, and Igα-Igβ, and in tissue culture cell lines the Dμ pre-BCR can activate cellular signaling responses (9-14). But despite its ability to activate nonreceptor tyrosine kinases, Dμ pre-BCR-producing pre-B cells are selected against by a process that is mediated through the transmembrane domain of the mDμ protein (15). In contrast, pre-B cells that express intact mIgμ-containing pre-BCRs are positively selected. Counterselection is reflected in the relative lack of mature B cells with DJH joints in RF2 (15-17). The mechanism by which mDμ activates counterselection has not been defined, but is known to require expression of syk (18). Here we report on experiments showing that Igβ is essential for counterselection against mDμ in vivo. Fluorescence Analysis and Cell Sorting. Single cell suspensions prepared from bone marrow or spleen were stained with PE-labeled anti-B220 and FITC-labeled anti-CD43 (PharMingen, San Diego, CA) or FITC-labeled anti-IgM, and analyzed on a FACScan®. For cell sorting, bone marrow cells from four to six mice were stained with the same reagents and separated on a FACSVantage®. CD43+B220− and CD43+B220+ cells were collected based on gating with RAG-1−/− controls. DNA and PCR. Total bone marrow DNA was prepared for PCR as previously described (22). DNA from sorted cells was prepared for PCR in agarose plugs (23).
Primers for VH-DJH and DH-JH rearrangement were as in reference 22; these primers are mouse-specific and do not detect the human Igμ transgene. All experiments were performed a minimum of three times with two independently derived DNA samples. Nonrearranging Igμ gene intervening sequences were amplified in parallel with other reactions and used as a loading control (22). Amplified DNA was visualized after transfer to nylon membranes by hybridization with a 6-kb EcoRI fragment that spans the mouse JH region. For VDJH joints, amplification was for 0.5 min at 94°C, 1 min at 68°C, and 1.5 min at 72°C. PCR products were purified by agarose gel electrophoresis, subcloned into pBluescript, sequenced using an Applied Biosystems (Foster City, CA) DNA sequencing kit, and analyzed on a genetic analyzer (ABI-310; Applied Biosystems). Results. mIgμ Cannot Induce the Pre-B Cell Transition or Allelic Exclusion in the Absence of Igβ. Expression of Igβ is required for B cells to efficiently complete Igμ VH to DJH gene rearrangements (19). B cells in Igβ−/− mice fail to express normal levels of mIgμ, and B cell development is arrested at the CD43+B220+ pre-B1 stage (19). A similar cell-type-specific developmental arrest is also found in mice that carry a mutation in the transmembrane domain of mIgμ (24), and in mice that fail to complete Ig V(D)J recombination (25-29). In view of the abnormally low levels of mIgμ in Igβ−/− mice, failed pre-B cell development might simply be due to lack of Igμ expression. To determine whether mIgμ could induce the pre-B cell transition in the absence of Igβ, we introduced a productively rearranged immunoglobulin μ gene (20) into the Igβ−/− background (TG.mμ Igβ−/−). We then measured B cell development by staining bone marrow cells with anti-CD43 and anti-B220 monoclonal antibodies (30). We found that expression of a pre-rearranged Igμ transgene was not sufficient to activate the pre-B cell transition in the absence of Igβ (Fig. 1). TG.mμ Igβ−/− B cells did not develop past the CD43+B220+ pre-B cell stage (Fig. 1). In control experiments, the same mIgμ transgene did induce the appearance of more mature CD43−B220+ pre-B cells in a RAG−/− mutant background where B cell development was similarly arrested at the CD43+B220+ stage (20, 25, 26; data not shown). We conclude that in the absence of Igβ, a productively rearranged mIgμ is unable to activate the pre-B cell transition. Allelic exclusion is established as early as the CD43+B220+ stage of B cell development (31-33). This early stage of development is found in the bone marrow of Igβ−/− mice (19). However, we were initially unable to measure allelic exclusion in Igβ mutant mice due to the low efficiency of complete Igμ VH to DJH gene rearrangements and the absence of surface Igμ expression (19). To determine whether expression of mIgμ could activate allelic exclusion in TG.mμ Igβ−/− mice, we measured inhibition of VH to DJH gene rearrangements by PCR (34). In controls, the mIgμ transgene inhibited VH to DJH gene rearrangement (22), but the same transgene had no effect in the Igβ−/− background (Fig. 2). We had previously shown that the cytoplasmic domains of Igα and Igβ are sufficient to activate allelic exclusion (20, 35). The finding that mIgμ is unable to induce allelic exclusion in the absence of Igβ suggests that Igβ is essential for allelic exclusion. Igβ Is Required for RF2 Counterselection. Igμ genes with DH joined to JH in RF2 are rarely found in mature B cells (15-17).
Genetic experiments in mice have shown that counterselection against RF2 requires the transmembrane domain of mIgμ and the syk tyrosine kinase (15, 18). To determine whether counterselection is mediated through Igβ, we sequenced DJH joints amplified from sorted CD43+B220+ pre-B cells from Igβ−/− mice and controls. In control samples, only 10% of the DJH joints were in RF2 (Fig. 3), which is in agreement with similar measurements performed in other laboratories (15-17, 31-33). In contrast, there was no counterselection in the bone marrow cells of Igβ−/− mice; 13 out of 30 DJH joints were in RF2, with the remainder being distributed in RF1 and RF3 (Fig. 3). Thus, in the absence of Igβ, there was no RF2 counterselection at the level of DJH rearrangements in CD43+B220+ cells in the bone marrow. VH to DJH joining and counterselection are normally completed in CD43+B220+ pre-B cells (31-33), but in Igβ−/− mice, VH to DJH joining is inefficient (19). To determine whether RF2 was counterselected in the few Igβ mutant B cells that completed VH to DJH rearrangements, we amplified and sequenced VHJ558L-DJH4 joints from unfractionated bone marrow cells (Fig. 4). As with the DJH joints, we found no evidence for counterselection against RF2 in VDJH joints in Igβ−/− B cells: 10/33 VHJ558L-DJH4 joints sequenced from Igβ−/− mice were in RF2. By contrast, RF2 was found in only 1 of 11 mature Igμ joints in the controls (these counts are revisited in the sketch following the Discussion). The VDJH and DJH Igβ−/− joints otherwise resembled the wild type in the number of N and P nucleotides as well as in the extent of nucleotide deletion (Figs. 3 and 4). We conclude that there was no selection against RF2 in the absence of Igβ, and that the absence of Igβ has no significant impact on the mechanics of recombination as measured by the variability of the joints. Discussion. The transmembrane domain of mIgμ is required to produce the signals that mediate several antigen-independent events in developing B cells, including allelic exclusion and the pre-B cell transition (24, 36-39). However, mIgμ itself is insufficient for signal transduction (40), and it requires the Igα and Igβ signaling proteins to activate B cell responses in vitro and in vivo. The earliest developmental checkpoint regulated by Igα-Igβ appears to involve either activation of cellular competence to complete VH to DJH rearrangements, or positive selection for cells that express mIgμ (19). In the next phase of the B cell pathway, the same transducers are necessary (Fig. 2) and sufficient to produce the signals that activate allelic exclusion and the pre-B cell transition (19, 20, 35, 41). In the present report, we show that in addition to these functions, Igα-Igβ transducers are also necessary for negative selection against Dμ. Two models have been proposed to explain counterselection against mDμ. The first model states that mDμ is toxic, and that cells expressing this protein are deleted by a mechanism that involves inhibition of proliferation (31). A second theory postulates that Dμ proteins produce the signal for heavy chain allelic exclusion and block the completion of productive heavy chain gene rearrangements (15). According to this second model, cells expressing mDμ are then unable to continue along the B cell pathway.
Support for the active signaling model comes from three sets of observations: (a) that there is no counterselection in the absence of the Igμ transmembrane exon (15); (b) that there is no RF counterselection in the absence of syk (18); and (c) that there is no counterselection in early CD43+B220+ B cell precursors in the absence of λ5 (33). These experiments partially define the receptor structure for counterselection as composed of mDμ associated with λ5. Our observation that negative selection against Dμ does not occur in the absence of Igβ supports the signaling model, and identifies Igα-Igβ as the transducers that activate counterselection, possibly by linking mDμ to nonreceptor tyrosine kinases. Why does the expression of the Dμ pre-BCR lead to arrested development, whereas mature mIgμ in the same complex activates positive selection in early B cells? Both signals are produced in CD43+B220+ pre-B cells, both require λ5 (33, 39, 42) and the Igα-Igβ coreceptors (19, 41), and both are transmitted through a cascade that involves syk (18, 43). One way to explain the difference between the cellular responses to mDμ pre-BCR and mIgμ pre-BCR expression might be an inability of Dμ to pair with conventional or surrogate Ig light chains (14). According to this model, cells expressing mDμ should be trapped in the CD43−B220+ pre-B cell compartment, since B cell development can progress to the CD43−B220+ stage in the absence of conventional light chains (44, 45). However, elegant single cell sorting experiments have shown that mDμ-producing cells are selected against before this stage, in CD43+B220+ pre-B cells (33, 42). Thus, the idea that abnormal pairing of mDμ with light chains is responsible for counterselection fails to take into account the observation that counterselection normally occurs independently of light chain gene rearrangements. Two alternative explanations for the disparate cellular responses to the Dμ pre-BCR and the mIgμ pre-BCR are: (a) that there are qualitative differences between signals generated by a mDμ and a mIgμ receptor complex, and (b) that pre-B-I cells that contain DJH rearrangements are in a different stage of differentiation than pre-B-II cells that have completed VDJH and express mIgμ (8). An example of two qualitatively distinct signals resulting in alternative biologic responses has been found in the highly homologous TCR (46, 47). TCR interaction with ligand can produce either anergy or activation, depending on the affinity of the TCR for the peptide-MHC complex (48). High-affinity ligands that produce T cell responses fully activate CD3ζ tyrosine phosphorylation, whereas peptides that induce anergy bind with low affinity and induce a reduced level of CD3ζ phosphorylation. The low level of CD3ζ phosphorylation induced by the anergizing peptides is associated with less than optimal ZAP-70 kinase activation (46, 47). Less is known about the physiologic responses activated by Igα-Igβ in developing B cells, but experiments in transgenic mice have shown that early B cell development requires tyrosine phosphorylation of Igβ (20), and by inference, receptor cross-linking. Although the cytoplasmic domains of Igα and Igβ appear to have redundant functions in allelic exclusion and the pre-B cell transition (20, 35), neither Igα (41) nor Igβ (Papavasiliou, N., and M.C. Nussenzweig, manuscript in preparation) alone is able to fully restore B cell development in the bone marrow, suggesting that there are specific functions for Igα and Igβ, or for the Igα-Igβ heterodimer.
Biochemical support for the idea that individual coreceptors could have unique biologic functions also comes from transfection experiments in B cell lines (49-51) and from the observation that the cytoplasmic domains of Igα and Igβ bind to different sets of nonreceptor tyrosine kinases (52). We would like to propose that positive and negative selection in developing B cells, like activation and anergy in T cells, may be mediated by differential phosphorylation of Igα and Igβ in the pre-BCR. Given the requirement for cross-linking in pre-BCR activation, the mechanism that produces the proposed differential phosphorylation of the mDμ and mIgμ pre-BCRs may be a function of their affinities for the cross-linker.
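The reading-frame counts reported in the Results above (10 of 33 VHJ558L-DJH4 joints in RF2 in Igβ−/− mice versus 1 of 11 in controls) can be compared with a standard contingency test. The sketch below is our illustration, not an analysis from the paper; the choice of Fisher's exact test is ours.

from scipy.stats import fisher_exact

# Rows: Igbeta-null vs control; columns: joints in RF2 vs other reading frames.
table = [[10, 33 - 10],  # Igbeta-/-: 10 of 33 VDJH joints in RF2
         [1, 11 - 1]]    # control:    1 of 11 joints in RF2

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# With counts this small the test is underpowered; the paper's conclusion rests
# on the mutant RF2 fraction matching the ~1/3 expected in the absence of selection.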
2014-10-01T00:00:00.000Z
1996-12-01T00:00:00.000
{ "year": 1996, "sha1": "8cbb61dda21316b7f88b47780c7306e54583e1cc", "oa_license": "CCBYNCSA", "oa_url": "http://jem.rupress.org/content/184/6/2079.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "8cbb61dda21316b7f88b47780c7306e54583e1cc", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
219115992
pes2o/s2orc
v3-fos-license
Cooperative Law Policy: Historical Study of Cooperative Settings in Indonesia The existence of cooperatives has an important meaning for the welfare state of Indonesia. As a nation that was colonized for a long time, Indonesia turned to cooperatives, as one implementation of a people's economy, as a systematic effort to correct the colonial-style economic structure. This study examines the legal policies governing cooperatives across various eras in Indonesia. The research is normative legal research based on secondary data. It shows that the existence and development of cooperatives have experienced ups and downs in their legal policies. Colonial-era regulation merely acknowledged cooperatives and treated them as one more class of business actor. During the independence period, cooperatives were intended as a people's economic movement expected to spread welfare evenly. Unfortunately, under the Old and New Order regimes cooperatives were used as political tools to perpetuate government power. During the reform period, the regulation of cooperatives deteriorated further because it made cooperatives resemble companies pursuing mere profit. Keywords: cooperative; historical study; Indonesia INTRODUCTION The existence of cooperatives has an important meaning for the welfare state of Indonesia. As a nation that was colonized for a long time, Indonesia adopted cooperatives, as one implementation of a people's economy, as a systematic effort to correct the colonial-style economic structure (Swasono, 2005). The economic system practiced under the capitalist colonial pattern produced a bitter life for the people because of the absence of humanity and justice in its practice. The concept of cooperatives, by contrast, is an economic model based on the principles of kinship and cooperation, which are closely related to the values that live in Indonesian society. A cooperative is a partnership that implements principles guiding joint effort toward a common goal, and it aims to advance public welfare by realizing social justice for all Indonesian people. Considering the importance of cooperatives, it is natural that they became a main pillar of the Indonesian economy. Legally, the recognition of cooperatives is contained in the elucidation of Article 33 paragraph (1) of the 1945 Constitution. Article 33 states the basis of economic democracy: production is carried out by all, for all, under the leadership or control of the members of society. It is the prosperity of the people that comes first, not the prosperity of individuals (Chaniago, 1984). Therefore, the economy is structured as a joint effort based on the family principle, and the form of enterprise compatible with this is the cooperative. This elucidation of the constitution follows the thought of Moh. Hatta on cooperatives as the embodiment of the kinship principle underlying the Indonesian economy. From Hatta's thought, the success of cooperatives should be built on two principles, namely solidarity and individuality. The principle of social solidarity emphasizes the desire to achieve mutual prosperity.
Individuality relies on self-esteem and the ability of individuals to advance the cooperative. Individuals who are members of a cooperative must realize that they cannot leave their fate to the cooperative without taking actions that advance it. By adhering to these two principles, cooperatives revive collective life while maintaining individuality. Given this position, cooperatives should be the future of the Indonesian economy alongside State-Owned Enterprises (SOEs/BUMN), becoming a permanent economic support replacing business entities built on the capitalist system. Unfortunately, this has not yet been realized: cooperatives lag far behind other business entities, namely SOEs and conglomerate companies. These problems did not suddenly arise. The history of cooperatives shows dynamics that caused cooperatives to progress at one time and fall backward at another, unstably (Chaniago, 1984). The dynamics of cooperative development are influenced by politics and also by cooperative laws. The life of a cooperative is determined by law, and the cooperative laws that are formed are in turn bound to and influenced by the political and economic-ideological conditions of their time. In this, the government as legislator has an important role in determining the course of legal policy. Legal politics has an extraordinary influence on the existence of cooperatives in Indonesia. Based on the background described previously, the main problem in this study is the historical study of cooperative regulation policy in Indonesia. METHOD This research is normative legal research. The study was conducted using the mechanism of library research, which was carried out to obtain secondary data derived from primary, secondary and tertiary materials. The research materials are documents: books, articles, research results and legislation, as well as related expert opinions regarding the legal politics of cooperatives in Indonesia, in a historical-juridical review of cooperative arrangements. In normative legal research the data are analyzed descriptively and qualitatively, that is, by classifying, comparing and connecting. In other words, a researcher who uses qualitative methods does not merely aim to reveal the truth, but to understand it. The data collected from library research were then analyzed qualitatively to answer the proposed research problems. RESULTS AND DISCUSSION Legal Policies on Cooperative Regulations in the Colonial Period The first cooperative was established in Purwokerto, Central Java in 1898 with the founding of the "hulp en spaarbank" by R. Aria Wiriaatmadja, a regent in Purwokerto, whose purpose was to safeguard the interests of civil servants so that they would be free of debts to usurers. Although not entirely a cooperative bank, the presence of the institution moved the Assistant Resident De Wolff van Westerrode to encourage the construction of credit cooperatives for farmers throughout the Banyumas residency (Harsoyo, 2006). De Wolff van Westerrode then created an agricultural credit organization on the model of the Raiffeisen banks of Germany.
Lending was no longer limited to civil servants but was extended to farmers who had fallen prey to usurers and to bonded laborers who had to be cured of these social ills (Tambunan, 2008). Cooperatives developed further alongside the national movement. Budi Utomo, founded in 1908, advocated the establishment of cooperatives for household needs, while Sarekat Islam developed cooperatives dealing in daily necessities. Before the birth of Budi Utomo there were no popular ideals of establishing cooperatives; the existing cooperatives had been established by Dutch employees and depended on the Dutch East Indies government (Swasono, 2005). The cooperatives established at that time had a national spirit directed at improving the people's lot. The establishment of cooperatives was initially facilitated by the Dutch colonial government to undermine Indonesia's growing young capitalism. Colonial capitalism gave young Indonesian capitalism no opportunity to develop, and thereby opened the way for its opponent, the Indonesian cooperative. Unfortunately, the rapid development of cooperatives worried the colonial government, because the cooperative movement carried political and social enthusiasm with it. The government therefore tried to restrain the growth and development of cooperatives through legislation. The first cooperative regulation was the Verordening op de Coöperatieve Verenigingen (Staatsblad 431 of 1915). This regulation governed cooperatives in general, whether established by Dutch people or by indigenous people, and it contained several provisions that weighed heavily on the growth and development of cooperatives. The first difficulty was the notarial deed of establishment. The second was the bureaucracy involved in registration (Swasono, 2005): to establish a cooperative one had to go to court, which first required permission from the Governor-General of the Dutch East Indies government. The third difficulty was the obligation to publish announcements in newspapers in Malay and Dutch. The fourth difficulty was financing, because the material costs incurred were very large, even before adding the notary's fees and the costs of the newspaper announcements. To take care of its establishment and related formalities, a cooperative had to spend at least 170 gulden, at a time when 50 gulden was equivalent to nine quintals of rice (Chaniago, 1984). Furthermore, strong government intervention was also found in the arrangements for cooperative management, in Articles 11 and 12. Article 11 imposed the obligation to keep an official register book from the government, containing detailed data on each member, manager and employee, including the name, first name or other names, place of residence, and occupation of the board and supervisors. Article 12 regulated a person's entry into membership of an association, which had to be verified by the member parties and third parties by signature and had to be notarized. Through these provisions the government exercised strict supervision over members, managers and supervisors of cooperatives, particularly under Article 11, which applied specifically to indigenous people.
This can be seen from the government's efforts to learn their backgrounds through the rule requiring first names to be recorded; management and supervisors also had to report their residence and occupation to the government (Melianti, 2002). In addition, in the organization of cooperative membership, the government closed society's access to membership. Under these arrangements cooperatives lost their principles of open membership and independence, becoming elitist organizations for certain circles (Nurhayati & Wibowo, 2011). The regulation met widespread resistance in society, especially from the movement. In response to this rejection, a "Cooperative Commission" chaired by J.H. Boeke was formed in 1920. The commission was tasked with examining the extent of the native population's need for cooperatives. The commission reported that cooperatives in Indonesia did indeed need to be developed and to be given ease of establishment. On that basis, the Regeling Inlandsche Coöperatieve Verenigingen was issued in 1927 (Regulation on Bumiputera Cooperative Associations, 1927, State Gazette Number 91). Under this regulation the establishment of cooperatives was facilitated; the easing provisions include Article 3, Article 5, Article 7 paragraphs (1) through (5), and Article 11. According to Article 3, cooperatives established by Bumiputera use the law that applies to Bumiputera. One of the permissions under Bumiputera law is that cooperatives may receive land rights, that is, they can buy and/or take in pledge land and rice fields. This right was more suitable and more beneficial for Indonesian cooperatives, especially agricultural cooperatives. Article 5 of the 1927 Cooperative Regulation, governing deeds of establishment, was simpler and easier than the previous arrangement. In this article a choice is given as to the language used to make the deed: a regional language, Malay, or Dutch. In addition to the language provisions, the process of submitting the deed of establishment to the government to obtain legal status was also made easier: the deed no longer had to come from a notary, and no permission from the Governor-General was required. It was sufficient to submit the deed to the adviser for people's cooperative credit affairs. The establishment of a cooperative also no longer needed to be announced in a newspaper. The most important convenience was that all processes of registration, submission and announcement were free of charge, which greatly helped cooperatives by making their establishment cost-effective (Tambunan, 2008). A further facility found in the regulation concerned the principle of open membership: the entry and exit of members was sufficiently evidenced by recording in the member list. Thus there was no need for a third party, nor for officials or notaries to make official letters for members. Community access to participation in building cooperatives became more open and easier.
Other facilities were provided as a follow-up to the regulation: a Cooperatie Dienst (Cooperative Service) was formed in 1930 under the Departement van Binnenlandsch Bestuur (Ministry of Home Affairs), and in 1932 Government Decree Number 29, contained in Staatsblad Number 634 of 1932, established that cooperatives founded under Staatsblad 1927 Number 91 were tax-free for 10 years from their founding. With this special arrangement for the Bumiputera, Indonesia applied the concept of legal dualism. This concept was proposed by J.H. Boeke, who examined the reasons for the failure of Dutch colonial (economic) policy in Indonesia from the standpoint of economic sociology (Sitio, 2001). The concept was developed from the reality of two different social systems, native and non-native. If the differences were not accommodated, there would be clashes in the community; therefore, if the two could not be united, two different sets of rules had to be provided, one for each. This dualism of cooperative regulation continued when the Verordening op de Coöperatieve Verenigingen, Staatsblad 431 of 1915, was replaced by the Algemeene Regeling op de Coöperatieve Vereenigingen, Staatsblad 103 of 1933. The replacement was an adjustment to the new Dutch Cooperative Law of 1925 (Nurhayati & Wibowo, 2011). During the Japanese occupation, cooperative regulation changed little. Japan made no new rules on cooperatives, stipulating only that all government agencies and the legal and statutory powers of the previous government remained temporarily recognized, provided they did not conflict with Military Government Regulations (Marpaung, 2014). Although Japan recognized the 1927 Regulation, it also issued a regulation that affected the existence of cooperatives, namely Regulation No. 23 of 1942, which governed associations and meetings. Article 2 of that regulation stipulated that to establish an association (including a cooperative), and to hold hearings or meetings of the association, the founders or their management had to obtain prior permission from the Resident. Permission was granted on condition that the association or meeting concerned was in no way a political movement. Observed closely, this regulation was intended to police associations, including cooperatives. The arrangement ultimately limited and even killed the space for cooperatives in Indonesia, because cooperatives were among the most important causes fought for by the movement organizations of the time. Under the Japanese regulation many cooperatives stopped their businesses because they did not obtain a permit, coinciding with the banning of movement organizations during the occupation. The Political Law of Post-Independence Cooperative Arrangements Cooperatives have had a special place since the founding of Indonesia. This is indicated in the Indonesian constitution, especially Article 33 paragraph (1), which states: "The economy is organized as a joint effort based on the principle of kinship." The form of enterprise compatible with this principle is the cooperative. The question is, why were cooperatives chosen? There are two answers to this question. First, cooperatives were considered a concept that could fight oppression by capitalism.
Second, cooperatives were the most suitable and most appropriate concept for creating the people's welfare. Both answers can be found, for example, in the thoughts of Sukarno and Hatta, the proclaimers and leaders of Indonesia at the time. Sukarno once wrote: "we are moving because we are not willing to accept capitalism and imperialism, and the first condition for abolishing capitalism and imperialism is independence. We must be independent so that we can freely pull the rope that brings down the system of capitalism-imperialism. We must be free so that we can freely establish a new society without capitalism and imperialism" (Tambunan, 2008).

In that article, Soekarno stated that the purpose of the struggle was not merely to obtain independence; more than that, independence was only a means to fight capitalism and imperialism. Why? Because Indonesia had felt the pain caused by colonialism bearing the character of capitalism and imperialism: the Indonesian people could not enjoy the natural wealth generated from their own soil. Economic democracy, in the elucidation of the 1945 Constitution as described in Sukarno's writings, is a concept of democracy that provides welfare to the people; the economy operates from, by, and for the people. Soekarno explicitly meant that capitalism and imperialism would disappear if the economy were ruled not by a handful of people but by all the people. The right vehicle, following the principle Sukarno recommended, was the cooperative (Marpaung, 2014). Cooperatives exist not only at the micro level, as one form of business entity, but also at the macro level: the course of the Indonesian economy should be based on cooperative policies (Nurhayati & Wibowo, 2011). The state's obligation is not merely to acknowledge cooperatives and leave them to compete with the giants of capitalism; the state should give cooperatives priority so that they grow and develop well. On the other hand, the state must not intervene so far that it eliminates the identity of the cooperative itself.

Regulations on Cooperatives in the Old Order
The statutory regulation that initially governed cooperatives was the Regulation on Cooperative Societies 1949 (State Gazette 1949 No. 179). Judging from its material, it was no different from the Regulation of Inlandsce Cooperative Vereningen, Statsblad 91 of 1927. This is stated in Article 1, which reads: "Regulations on Indonesian Cooperative Societies (Bumiputera) as referred to in the March 19, 1927 ordinance are re-established as follows: Regulations on Cooperative Societies in 1949." As in the 1927 cooperative regulation, under the 1949 regulation the role of the government was only to facilitate administration: it regulated only how cooperatives were established and endorsed (Swasono, 2005). The state acted passively toward the growth and development of cooperatives in Indonesia. Such an arrangement certainly did not match the wishes of the nation's leaders at the time, who expected cooperatives to contribute more to developing the people's economy. The government had to make efforts so that cooperatives developed well without erasing the character of the cooperative itself. As a result of this mismatch, Law No. 79 of 1958 concerning Cooperative Societies was finally enacted. It was the first cooperative law made by the Indonesian people themselves.
Under Law Number 79 of 1958 concerning cooperative societies, the government was given an active role in promoting cooperatives. The arrangement contains several important points (Swasono, 2005). First, it regulated a more active cooperative character, including: 1) requiring and encouraging members to save regularly; 2) educating members toward cooperative awareness; and 3) organizing one or several lines of business in the cooperative field (Harsoyo, 2006). Second, the government took an active role in promoting cooperatives, holding guidelines to guide the people toward cooperative life in implementing the law and providing assistance and concessions to the cooperative movement (Abdullah, 2014). Guidance of the cooperative movement rested with the government, with the intention of ensuring that Indonesian people's businesses could control the economy through cooperatives. Cooperatives also received government assistance, including legal protection, education, subsidies, and facilities to run their businesses. One manifestation of this assistance was the ease of establishing cooperatives.

Beyond the positive arrangements above, some provisions were seen as excessive government intervention. These include restrictions on the type and level of cooperatives in each cooperative working area and the state's far-reaching involvement in coaching and supervising cooperatives (Masngudi, 1990). Although there was no absolute limit on the number of cooperatives in each working area, meaning that similar cooperatives could still be established, the regulation made it harder to propose the establishment of a cooperative to the government. As for mentoring, the government's right to speak at, and even to convene, member meetings and management meetings eliminated the independence of the cooperative itself (Sitio, 2001).

The good spirit of promoting cooperatives as they deserved did not last long. One year after the formation of the Law a quo, an implementing regulation was issued in the form of Government Regulation Number 60 of 1959 concerning the Development of the Cooperative Movement. This rule distorted the law on cooperative societies, as is evident in the main points of thought in the Government Regulation's considerations, which essentially placed cooperatives under state intervention: cooperatives had to adjust to government policies, and the government had to take an active attitude in fostering the cooperative movement based on the principles of guided democracy (Marpaung, 2014).

Several provisions of Government Regulation No. 60 of 1959 concerning the Development of the Cooperative Movement are considered to have betrayed the spirit of promoting cooperatives. First, cooperative management was regulated uniformly for all types of cooperatives. Only people who share the same interests, or have a direct interest in the relevant type of cooperative, may become members; in a farmers' cooperative, for example, only farmers are allowed to become members. In addition, the cooperative structure was made uniform in villages, first-level regions, second-level regions, and at the central level.
Second, if more than one cooperative of the same type existed in one working area, they would be united (merged) by the government in the shortest possible time. Third, cooperatives were spoiled by the state, which eliminated or shielded them as far as possible from competition with private businesses. These regulations left cooperatives with limited room to move and stripped them of the initiative to develop their businesses.

Beyond these arrangements, cooperatives during the guided democracy period were not cooperatives born of and developed by the community. They were institutions set up top-down, meaning that their establishment was mostly carried out by the state. This is reflected in the general elucidation, which provided that the government would still schedule the formation of cooperatives in business fields that control the lives of many people. Cooperatives became an extension of the government not only in their structure and establishment. Under MPRS Decree Number II/MPRS/1960 concerning the Outline of the First Stage of National Development Planning 1961-1969 and Government Regulation Number 140 of 1961 concerning the Distribution of Goods and Materials for the Basic Needs of the People, cooperatives served as the hands of the state. Cooperatives were chosen by the state to become its liaison institution with the people and were given business by the state as distributors of staple goods. In the MPRS Decree a quo, the role of cooperatives was even given priority over national private companies. Such priority is acceptable as long as it does not kill the creativity of cooperatives, but in practice cooperatives became dependent on the government (Masngudi, 1990).

Government intervention in cooperatives did not stop there. As the peak of the politicization of cooperatives in the old order, Law No. 14 of 1965 concerning Cooperatives was enacted. This law clearly and firmly placed cooperatives under state intervention. Cooperatives were defined as economic organizations and tools of the Revolution, functioning as a nursery for the people and a vehicle for Indonesian Socialism based on Pancasila. Article 5 further stipulated that cooperatives, their structure, the activities of their coaching apparatus, and the equipment of cooperative organizations had to reflect the progressive national cooperation of the revolutionary NASAKOM (Suryani, 2015).

Regulations on Cooperatives in the New Order
The new order was the antithesis of the old order; the directions of state policy adopted by the two regimes contradicted one another. Where the old order followed socialism in economic development, under the new order liberalism guided the economy. This was marked by the enactment of Law No. 1 of 1967 concerning Foreign Investment (PMA). With this Law, the domination of foreign capital over national capital became inevitable, overwhelming the work done by Indonesians, especially cooperatives (Suryani, 2015). Alongside the birth of the Foreign Investment Law, almost all policies taken by the old order were annulled by the new order (Marpaung, 2014), including its cooperative arrangements. With the establishment of the new order regime, cooperative regulation changed as well.
Law Number 14 of 1965 concerning Cooperatives was revoked and replaced by Law Number 12 of 1967 concerning Cooperative Principles. The reason for the revocation was that Law No. 14 of 1965 contained thinking that sought to make cooperatives political servants, as indicated by government intervention that had gone too far. The elucidation of the new law states: "The role of the Government that goes too far in regulating the affairs of Indonesian cooperatives, as reflected in the past, is essentially non-protective and even severely limits movement and the implementation of basic economic strategies, contrary to the soul and meaning of Article 33 of the 1945 Constitution. It hinders progress and limits the characteristics of self-help, self-reliance, and participation, which are the main elements of the principle of self-confidence, and will in turn be detrimental to the community itself."

The old order policies that had pulled cooperatives into the vortex of political conflict were dismantled by the new order. Cooperatives were returned to their functions as stated in the Act: 1) tools of economic struggle to enhance the people's welfare; 2) tools for democratizing the national economy; 3) one of the arteries of the Indonesian economy; and 4) tools of community development to strengthen the economic position of the Indonesian nation and to unite in organizing the management of the people's economy. With the Law a quo, it seemed as though government intervention was over and the cooperative had become an independent people's economic movement. Unfortunately, some arrangements contradicted this spirit, namely the regulation of cooperative types, especially Articles 15 and 17.

Article 15 provides: 1) according to need and for purposes of efficiency, cooperatives may centre themselves on higher-level cooperatives; 2) cooperatives from the lowest level up to the top level, in the centralized relations referred to in paragraph (1), constitute an inseparable unit; 3) higher-level cooperatives are obliged and authorized to guide and inspect lower-level cooperatives; and 4) relationships between levels of similar cooperatives are regulated in the Articles of Association of each similar cooperative. Article 17 provides: 1) the type of a cooperative is based on the needs of, and efficiency for, a group within a homogeneous society united by common economic activities or interests, in order to achieve the common goals of its members; and 2) for efficiency and order, and in the interests of the development of the Indonesian cooperative, only cooperatives of the same type and level may exist in each working area.

Under these arrangements, government intervention remained very strong even though the government did not directly interfere in the daily management of cooperatives. The design of cooperatives in Article 15 is top-down: there are superior cooperatives and subordinate cooperatives. Centralization occurred within cooperative institutions: primary cooperatives centre on central cooperatives, central cooperatives sit under joint cooperatives, and joint cooperatives sit under the parent cooperative.
The relationships do not run from the bottom up, with member cooperatives forming and authorizing their superiors to inspect and supervise; on the contrary, it is the cooperatives at the top level that guide and inspect the cooperatives beneath them (Budiyono & S, 2015). In this centralized arrangement, the government, through the parent cooperatives, can intervene in all the cooperatives below them. This is reinforced by Article 17, which regulates the organization of cooperative types: cooperatives are classified by the homogeneity of the community, based on the common activities or economic interests of their members, meaning that a cooperative is established by and consists of people of the same profession (Sitio, 2001). It is therefore not surprising that, under the Law a quo, civil servant cooperatives and ABRI cooperatives emerged. Cooperatives housed within state institutions are easily intervened in and commanded by the government, because they must comply with all existing policies.

As the new order regime developed, Law Number 12 of 1967 concerning Cooperative Fundamentals was replaced in 1992 by Law Number 25 of 1992 concerning Cooperatives. In terms of legal ideas, this Law continued the laws formed before it, but the definition of a cooperative underwent a fundamental paradigm change. A cooperative was now defined as a business entity consisting of individuals or cooperative legal entities, basing its activities on cooperative principles, as well as a people's economic movement based on the principle of kinship. This definition highlights the cooperative's form as a business entity, meaning that the cooperative was released to conduct economic business and allowed to seek profit. This differs from the previous definitions, which characterized cooperatives as associations or as economic organizations (Suryani, 2015).

In Law Number 25 of 1992 concerning Cooperatives, the provisions that further restricted the space for cooperatives concern the Cooperative Movement Institution. Under Article 57, cooperatives were directed to jointly establish a single organization functioning as a forum to fight for their interests and to carry cooperative aspirations. Under this arrangement, all cooperatives would be joined in one single organization; a cooperative that disagreed with the organization's policies could do nothing, because the organization was the only legitimate cooperative forum. As a single forum, the organization was vulnerable to politicization through its internal management. The organization has the following tasks: 1) fighting for and channeling cooperative aspirations; 2) increasing awareness of cooperatives in the community; 3) conducting cooperative education for members and communities; and 4) developing cooperation between cooperatives, and between members of cooperatives and other business entities, at both national and international levels. Task (1) derogates from the existence of cooperatives as autonomous organizations: once the organization was founded, the voice of each cooperative in channeling its aspirations was taken over by the organization, and cooperatives no longer had the right to fight for and channel their own aspirations. In this new order period, cooperatives again became an extension of the government.
Their existence was engineered in such a way that it abandoned the basic principles of cooperatives.

Cooperative Arrangements in the Reformation Period
More than a decade after the reform, the cooperative arrangements were replaced anew: Law Number 17 of 2012 concerning Cooperatives replaced Law Number 25 of 1992. Unfortunately, this Law was far from the spirit of advancing cooperatives; what it contained weakened cooperatives and contradicted the Constitution. The law was in force for only two years before being annulled by the Constitutional Court in its entirety. The Cooperative Law was deemed to have lost its spirit because it was no longer based on cooperative principles. Cooperatives were designed as capitalist companies solely looking for profit, not aimed at the welfare of their members; by definition, cooperatives even became ground for profit-seeking by individuals. In its review, the Court granted the petition and annulled the entirety of Law Number 17 of 2012 on several considerations (Sulaiman, Irwansyah, & Maryono, 2014).

First, the definition of cooperatives in Article 1 number 1 of Law Number 17 of 2012: "Cooperatives are legal entities established by individuals or Cooperative legal entities, with the separation of the wealth of their members as capital to run a business, which fulfills common aspirations and needs in the economic, social and cultural fields in accordance with Cooperative values and principles." The Court held that this formulation of the cooperative as a legal entity does not contain the substantive understanding of cooperatives referred to in Article 33 paragraph (1) of the 1945 Constitution and its Elucidation, which point to the cooperative as the characteristic form of enterprise to be built. According to the Court, the definition should express who the cooperative is; in other words, the formulation should place the cooperative in the perspective of the subject, as an economic actor that is part of the economic system. For that purpose, it should be formulated with words or phrases such as association, economic organization, or people's economic organization, or at the very least as a "business entity." The formulation in Law Number 17 of 2012, which does not indicate an economic agent, is therefore contrary to the Constitution because it embodies individualism (Rochmadi, 2011).

Second, the appointment of non-member management. These provisions hinder or even negate the rights of cooperative members to express opinions, to vote, and to be elected, as well as the values of kinship, responsibility, democracy, and equality that form the basis of cooperatives. A cooperative is an organization built and developed on an association of people whose goal is the welfare of its members; if management is decided by persons outside the membership, that is a deviation from the cooperative's fundamental principles (Sulaiman & Maryono, 2014). Cooperatives are expected to grow better as the capacity of their members to manage them improves (Susilo, 2013). To make the cooperative a professional organization, its members must be developed into professionals.
To realize this, the principle of education and training for cooperative members certainly plays an important role.

Third, cooperative capital. The Law a quo regulated the Principal Deposit and the Cooperative Capital Certificate as initial capital, which could also be supplemented by grants, investment capital, loan capital from members, and so on. The provision stating that the Principal Deposit is paid by a member at the time of applying for membership and cannot be returned cannot be justified: the principal deposit must be seen as the expression of a person's decision to join the cooperative voluntarily, so if the member decides to leave or resigns for some reason, the principal savings should naturally be withdrawable. The requirement that members buy capital certificates likewise does not follow the principles of voluntariness and openness. It means the orientation of the cooperative has shifted toward a pool of capital, thereby denying the identity of the cooperative as a gathering of people whose joint effort is its main capital. With this arrangement, it is feared the cooperative will lose its distinctiveness in making important decisions. Meanwhile, under the capital certificate arrangement, certificates that are relinquished cannot be sold outside the cooperative but must be purchased by other members or by the cooperative itself. As with the principal deposit arrangement, there is an element of coercion in this: what if fellow members do not want to buy, or the cooperative's money is insufficient? This is detrimental to members of the cooperative (Sulaiman et al., 2014). The last issue of cooperative capital is investment. This must be avoided because it opens the way to intervention by outside parties, including the government and foreign parties, through unlimited capital. A cooperative as a gathering of people would then be no different from a limited liability company as a pool of capital, or even a publicly listed company raising as much capital as possible, with no limit on the risk of opening opportunities for intervention from parties outside the cooperative (Swasono, 2005).

Fourth, the prohibition on distributing the business surplus derived from transactions with non-members. There is an injustice here between rights and obligations: when the cooperative enjoys an operating surplus, the member is not entitled to it, but when the cooperative suffers an operating deficit, whether caused by transactions with members or with non-members, members are required to deposit cooperative capital certificates as additional capital. Under this arrangement, the cooperative seems to place itself as an entity separate from its members, when in fact what the cooperative owns should be used for the welfare of its members, because that is the cooperative's purpose.

Fifth, the types of cooperatives. Under this arrangement, a cooperative is forced to choose one of the types specified in the law. This does not match the facts on the ground concerning the development of cooperatives.
Limiting the types of cooperative business activity constrains the creativity of cooperatives in determining their own lines of business, which may well develop, along with developments in science, technology, culture, and the economy, into new types of activity that meet human economic needs. The arrangement would dwarf the existence of cooperatives (Rachman, 2016). If a corporation (PT) is permitted to form a conglomerate, why should a cooperative be limited in its fields of business? This is unjust to the actors and movers of cooperatives, and many multi-purpose cooperatives have in fact succeeded. On all of the above considerations, it was appropriate for the Constitutional Court to annul the entire Cooperative Law: arrangements touching its fundamental principles were contrary to the identity of the cooperative itself. The Constitutional Court thereby rescued the direction of the legal politics of cooperative regulation in Indonesia.

CONCLUSION
One piece of shared homework arising from the Constitutional Court's decision is the dictum of the ruling stating that Law Number 25 of 1992 concerning Cooperatives remains valid temporarily until a new Law is formed. As discussed above, Law Number 25 of 1992 itself violates many cooperative principles and is no longer fit for enforcement. Indeed, some of its stipulations regulate the same content as the annulled Act: on capital, for example, Law Number 25 of 1992 also recognizes investment capital, and in the distribution of the business surplus (SHU), members are likewise given only a share proportional to the business services each member performs. What if the transaction is conducted with non-members? The Court should also have provided an interpretation of some of the provisions of Law Number 25 of 1992. The decision also raises a constitutional problem of its own: the Constitutional Court set no time limit on how long the making of the new cooperative law may take.
Another arrangement that provides the last facility that can be found in the regulation of cooperative membership openness principles. The entry and exit of members are sufficiently evidenced by the recording in the list. Thus, there is no need for a third party, nor do officials or notaries make official letters as members. have been tax free for 10 years since they were founded. With the special arrangement of the child's earth, Indonesia applies the concept of dualism in law. This concept was proposed by J.H Boeke who examined the reasons for the failure of Dutch colonial (economic) policy in Indonesia from the standpoint of economic sociology (Sitio, 2001). The concept was developed based on the reality of two different social systems between native and non-native. If differences are not accommodated, there will be a clash in the community. Therefore, by arrangement, if the two cannot be united then two different rules must be given for each. This dualism of cooperative arrangement continued when Verordening op De Cooperative Verenigingen Statsblad 431 of 1915 was replaced by Algemeene Regeling op de Cooperative Vereeniiging Staatsblad 103 of 1933. The replacement was adjusted to the new Dutch Cooperative Law, which was formed in 1925 (Nurhayati & Wibowo, 2011). During the Japanese occupation, the conditions of cooperative arrangements did not change much. Japan does not make new rules in cooperatives. Japan only stipulates that all Government Agencies and legal and statutory powers from the previous Government remain to be recognized temporarily, provided that they do not conflict with Military Government Regulations (Marpaung, 2014). Although Japan recognized the 1927 Regulation, Japan also issued a regulation that had an impact on the existence of cooperatives, namely regulation No. 23 of 1942 which governs associations and trials. In Article 2 the regulation stipulates that to establish an association (including cooperatives), as well as to hold hearings or meetings of the association, the founders or their management must obtain prior permission from the Resident. Permission is granted on condition that the association or trial concerned is in no way a political movement. If it is observed that this regulation intends to supervise associations, including cooperatives in terms of the police. This arrangement ultimately limits and even kills the space for cooperatives in Indonesia. That is because cooperatives are one of the most important things that the movement organization fought for at the time. With the Japanese regulation, many cooperatives stopped their businesses because they did not get a permit. This coincided with the banning of movement organizations during the Japanese occupation. The Political Law of Post-Independence Cooperative Arrangements Cooperatives have gained a special place since the founding of Indonesia. This is indicated in the Indonesian constitution, especially Article 33 paragraph (1) which states: "The economy is organized as a joint effort based on the principle of kinship". On the principle of kinship. Build a company that is compatible with it is a cooperative. The question is, why are cooperatives chosen? There are two answers to respond to that question. First, cooperatives are considered concepts that can fight oppression by capitalism. Second, cooperatives are the most suitable and most appropriate concept for creating people's welfare. 
For these two answers, for example, it can be found in the thoughts of Sukarno and Hatta as the proclamation and leader of Indonesia at that time. In his thoughts Sukarno once wrote: "we are moving because we are not willing to set capitalism and imperialism, and the first condition for aborting capitalism and imperialism must be independence. We must be independent so that we can freely hook the rope to abort the system of capitalismimperialism. We must be them so that we can freely establish a new society without capitalism, imperialism" (Tambunan, 2008). In the article, Soekarno stated that the purpose of the struggle was not merely to obtain independence, more than that independence was only a means to fight capitalism and imperialism. Why? Because Indonesia has felt the pain caused by colonialism with the character of capitalism and imperialism. The Indonesian people cannot enjoy the natural wealth generated from their earth. Economic democracy in the explanation of the 1945 Constitution by Sukarno in his writings was described as a concept of democracy that provides welfare to its people. The economy operates from, by, and for the people. Soekarno explicitly wanted to say that capitalism and imperialism would be lost if the economy was not ruled by a handful of people but by all the people. The right container and following the principle recommended by Sukarno was a cooperative (Marpaung, 2014). The existence of cooperatives is not only as a micro-economy which is one of the business entities but also as a macro-economy. The path of the Indonesian economy should be based on cooperative policies (Nurhayati & Wibowo, 2011). The state's obligation is not merely to acknowledge cooperatives by allowing them to compete with the giants of capitalism, but the state should give priority to cooperatives for good growth and development. But on the other hand, the state is also not allowed to intervene because it will eliminate the identity of the cooperative itself. Regulations on Cooperatives in the Old Order The statutory regulations that initially governed cooperatives were Cooperative Societies 1949State Institutions 1949. Judging from the material contained in the Act, it was no different from the Regulations of Inlandsce Cooperative Vereningen Statsblad 91 of 1927. Hal as stated in Article 1 which reads: "Regulations on Indonesian Cooperative Societies (Bumiputera) as referred to in the March 19, 1927, ordinance, are reestablished as follows: Regulations on Cooperative Societies in 1949". As was the 1927 cooperative regulations, In the 1949 regulation, the role of the government was only to facilitate the administration. The government only regulates how cooperatives are established and how they are endorsed (Swasono, 2005). The state acts passively towards the growth and development of cooperatives in Indonesia. Such an arrangement was certainly not following the wishes of the nation's leaders at the time who expected cooperatives to have more contribution in developing the people's economy. The government must make efforts so that the cooperative develop well without removing the character of the cooperative itself. As a result of the mismatch of these arrangements finally in 1958, Law No. 79 of 1958 concerning Cooperative Societies was made. The regulation is a cooperative law that was first made by the Indonesian people themselves. In-Law Number 79 of 1958 concerning cooperative associations, the government has an active role in promoting cooperatives. 
There are several important points in the arrangement (Swasono, 2005). First, the regulation of cooperative characters that are more active include: 1). requiring and activating its members to save regularly; 2). educate members towards cooperative awareness; and 3) organize one or several business fields in the field of cooperatives (Harsoyo, 2006). Secondly, there is an active role of the government in promoting cooperatives which hold guidelines to guide the people to live cooperatively towards the implementation of the law and provide assistance and concessions to the cooperative movement (Abdullah, 2014). Guidance on the cooperative movement lies with the government intending to ensure the operation of Indonesian people's businesses to control the economy through the cooperative. The cooperatives also receive government assistance including legal protection, education, subsidies and facilities to run their businesses. One manifestation of this assistance is the ease of establishing cooperatives. Beyond positive arrangements above, there are some arrangements which are seen as too much government intervention. Some of these arrangements include restrictions on the type and level of cooperatives in each cooperative work area and the existence of too far involvement in coaching and supervising cooperatives by the state (Masngudi, 1990). Although there is no absolute limit on the amount of work in each area, it means that similar cooperatives or similar cooperatives can be established, but this regulation will have a difficult effect on proposing the establishment of cooperatives to the government. In terms of mentoring, the involvement of the government to have the right to speak and even be allowed to hold member meetings and executive meetings will eliminate the independence of the cooperative itself (Sitio, 2001). The good spirit in promoting cooperatives as they should have not lasted long. One year after the formation of the quo Law, an implementing regulation was formed in the form of Government Regulation Number 60 of 1959 concerning the Development of the Cooperative Movement. This rule distorts the laws on cooperative collections. This is evident in the main points of thought contained in the consideration of the Government Regulation which essentially places cooperatives in state intervention. Cooperatives must adjust to government policies and the government must take an active attitude in fostering cooperative movements based on the principles of guided democracy (Marpaung, 2014). Some provisions of Government Regulation No. 60 of 1959 concerning the development of the Cooperative Movement are considered to have denied the spirit to promote cooperatives. First, the provisions regarding cooperative management are developed uniformly for all types of cooperatives. May be members of cooperatives in each type of cooperative are people who have the same interests or have direct interests in the type of cooperative. For example in farmer cooperatives, only farmers and those who are allowed to become members of cooperatives. Besides that, in villages, first-level regions, second-level regions and at the central level, the cooperative structure has been uniformed. Second, if there is more than one type of cooperative in one working area, in the shortest possible time it will be united (merged) by the government. Third, cooperatives are spoiled by the state by eliminating or avoiding as far as possible from the competition with private businesses. 
The existence of these regulations caused the cooperative to become limited in space and lose the initiative to develop its business. In addition to these arrangements, cooperatives during the guided democracy period were not cooperatives that were born and developed from the community. Cooperatives are institutions that are set up top-down, meaning that most of the establishment is carried out by the state. This is reflected in the general explanation that the government is still scheduled for the formation of cooperatives by the government in business fields that control the lives of many people. Cooperatives as an extension of the government are not only in their structure and establishment. In MPRS Tap Number II / MPRS / 1960 concerning the Outline of the First Stage of National Development Planning in 1961-1969 and Government Regulation Number 140 of 1961 concerning Distribution of Goods and Materials for Basic Needs of the People, the Cooperative stands for hands of the country. The cooperatives chosen by the state to become a liaison institution with the people. The cooperative is given business by the state as a distributor of staples for the people. Even in the TAP MPRS a quo the role of cooperatives has more priority than national private companies. This is valid as long as it does not kill the creativity of the cooperative, but the facts on the field of the cooperative become dependent on the government (Masngudi, 1990). Government intervention on cooperatives does not stop there. As the peak of the politicization of cooperatives in the old order, in 1965 Law No. 14 of 1965 concerning Cooperatives was made. This law clearly and firmly places cooperatives under state intervention. Cooperatives are economic organizations and tools of the Revolution that function as a nursery for people and vehicles for Indonesian Socialism based on Pancasila. Article 5 also emphasizes Cooperatives, the structure, activities of the coaching tools and equipment of cooperative organizations, reflecting the progressive national cooperation of the NASAKOM revolutionary (Suryani, 2015) Regulations on Cooperatives in the New Order The new order is the antithesis of the old order. The direction of state policy adopted by these two regimes is contradictory to one another. If during the old order Indonesia followed socialism in economic development, during the new order liberalism was a guide in economics. This is marked by the inclusion of Law No. 1 of 1967 concerning Foreign Investment (PMA). With the existence of this Law, the domination of foreign capital over national capital is inevitable and defeats the work done by Indonesians, especially cooperatives (Suryani, 2015). At the same time as the birth of the Foreign Investment Law, almost all policies taken by the old order, in the new order were annulled (Marpaung, 2014). Including those annulled by the new order are cooperative arrangements. With the establishment of the new order regime, cooperative arrangements have also changed. Law Number 14 of 1965 Cooperatives are revoked and replaced with Law Number 12 of 1967 concerning Cooperative Principles. The reason for the revocation is because Law No. 14 of 1965 concerning Cooperatives contains thoughts that want to place cooperatives as political servants, this is indicated because of government intervention that has gone too far. 
In connection with this matter in the explanation of this law which states: "The role of the Government that is too far in regulating the problems of Indonesian cooperatives as has been reflected in the past is essentially non-protective, even very limiting the movement and implementation of basic economic strategies that are not following the soul and meaning of Article 33 of the 1945 Constitution. it will hinder the steps and limit the characteristics of self-reliance, self-reliance, and participation which are the main elements of the principles of trust in oneself, which in turn will be detrimental to the community itself". The old order policies that pulled cooperatives into the vortex of political conflict were broken by the new order. Cooperatives are returned to their functions as stated in the Act, the functions of cooperatives are 1). economic struggle tools to enhance people's welfare; 2). tools for democratizing the national economy; 3). as one of the arteries of the Indonesian economy; and 4). the tools of community development to strengthen the economic position of the Indonesian nation and be united in regulating the management of the people's economy. With the quo law, it is as if the intervention of the government is over, and the cooperative becomes an independent people's economic movement. Unfortunately, some arrangements contradict this spirit. This is stated in the regulation of cooperative type, especially Article 15 and Article 17. In Article 15 regulates: 1). following needs and for efficiency purposes, cooperatives can focus on higher-level coordination; 2). The lowest level cooperatives up to the top level in centralizing relations as referred to in paragraph (1) of this Article constitute an inseparable unit; 3). Higher-level cooperatives are obliged and authorized to carry out guidance and inspections of lower-level cooperatives, and 4). Relationships between levels of similar cooperatives are regulated in the Articles of Association of each similar Cooperative. Whereas Article 17 regulates: 1). The type of cooperative is based on the needs of oneself and for the efficiency of a group in a homogenous society because of the similarity of economic activities/interests to achieve the common goals of its members; and 2). For efficiency and order, in the interests and development of the Indonesian Cooperative, in each working area, there are only similar and equivalent cooperatives. In the above arrangements, government intervention is very strong even though the government does not directly interfere in the daily management of cooperatives. The design of cooperatives regulated in the above regulation in Article 15 is top-down, there are superior cooperatives and subordinate cooperatives. Centralization occurred in cooperative institutions. Primary cooperatives focus on central cooperatives, central cooperatives under joint cooperatives and joint cooperatives under the parent cooperative. Relationships that occur are not from the bottom as elements that form superiors' cooperatives that are authorized to inspect and supervise, the opposite of cooperatives at the top level who guide and inspect cooperatives underneath (Budiyono & S, 2015). In the centralized cooperative arrangement, the government through the Parent cooperatives can intervene to all cooperatives under it. The above is increasingly emphasized by the regulation of Article 17 which regulates the organization of cooperatives. 
Cooperatives are classified based on the homogeneity of the community based on the common activities / economic interests of their members. This means that a cooperative is established and consists of people who have the same profession (Sitio, 2001). So it is not surprising if based on the Law a quo, civil servant cooperatives, and ABRI cooperatives are raised. Cooperatives under the state institutions will be easily intervened and commanded by the government because they must comply with all existing policies. In the development of the new order regime, Law Number 12 of 1967 concerning Cooperative Fundamentals in 1992 was changed to Law Number 25 of 1992 concerning Cooperatives. In terms of legal ideas, this Law continues the Laws that were previously formed. In Cooperative Law, the definition of Cooperative undergoes a fundamental paradigm change. Cooperatives are given the meaning of cooperatives as a business entity consisting of individuals or legal entities of cooperatives by basing their activities based on cooperative principles as well as a people's economic movement based on the principle of kinship. in this definition the cooperative has highlighted its form as a business entity, meaning that the cooperative has been released to conduct economic business. The cooperative has been allowed to seek business profits. This is different from the previous definition that defines cooperatives as associations or as economic organizations (Suryani, 2015). In-Law Number 25 Year 1992 concerning Cooperatives, Regulations that further restrict and limit the space for cooperatives are those concerning Cooperative Movement Institutions. In Article 57 concerning this institution, the Cooperative regulated together to establish a single organization that functions as a forum to fight for interests and act as carriers of Cooperative aspirations. In this arrangement, all cooperatives will later be joined in one single organization. If there is a cooperative that does not agree with organizational policies, the cooperative cannot do anything because the organization is the only legitimate cooperative forum. As a single forum, the organization is vulnerable to being politicized from its internal management. This organization has the following tasks: 1). Fighting and channeling Cooperative aspirations; 2). Increase awareness of cooperatives in the community; 3). Conducting cooperative education for members and communities; and 4). Develop cooperation between cooperatives and members of cooperatives with other business entities, both at national and international levels. Task point (1) above derogates the existence of cooperatives as autonomous organizations. After the founding of the organization, the voice of each cooperative to channel their aspirations was taken by the organization. Cooperatives no longer have the right to fight for and channel their aspirations. In this new order period, cooperatives became an extension of the government. Its existence is engineered in such a way that it abandons the basic principles of cooperatives. Cooperative Arrangements in the Reformation Period More than a decade after the reform, new cooperative arrangements were replaced. It is Law Number 17 of 2012 concerning Cooperatives which replaces Law Number 25 of 1992. Unfortunately, this Law is far from the fire to advance cooperatives. What exists weakens cooperatives and contradicts the Basic Law. This law has only been carried out for two years and was then canceled by the Constitutional Court in its entirety. 
The Cooperative Law was deemed to have lost its spirit because it was no longer based on cooperative principles. Cooperatives were designed as capitalist companies solely looking for profit, not aimed at the welfare of their members; by definition, cooperatives even became ground for profit-seeking by individuals. In this examination, the Court granted the petition and canceled the entire Law Number 17 of 2012, with several considerations (Sulaiman, Irwansyah, & Maryono, 2014). First, the Court considered the definition of cooperatives set out in Article 1 number 1 of Law Number 17 of 2012: "Cooperatives are legal entities established by individuals or Cooperative legal entities, with the separation of the wealth of their members as capital to run a business, which fulfills common aspirations and needs in the economic, social and cultural fields in accordance with Cooperative values and principles". The Court responded that this formulation of the cooperative as a legal entity does not contain the substantive understanding of cooperatives referred to in Article 33 paragraph (1) of the 1945 Constitution and its Explanation, which refers to building a distinctive type of enterprise. According to the Court, the question is who the cooperative is; in other words, the formulation should prioritize the cooperative from the perspective of the subject, as an economic actor that is part of the economic system. For this purpose, it should be formulated with words or phrases such as association, economic organization, or people's economic organization, or at the very least as a "business entity". The formulation of cooperatives in Law Number 17 of 2012, which does not indicate an economic actor, is therefore contrary to the Constitution because it contains individualism (Rochmadi, 2011). Second, the Court considered the appointment of non-member management. These provisions hinder or even negate the rights of cooperative members to express opinions, to vote, and to be elected, as well as the values of kinship, responsibility, democracy, and equality that form the basis of cooperatives. A cooperative is an organization built and developed on the association of people, whose goal is the welfare of individuals. If those who decide on management come from outside the membership, this is a deviation from the fundamental principles of the cooperative (Sulaiman & Maryono, 2014). Cooperatives are expected to grow better along with the capacity of their members to manage them well (Susilo, 2013). To make the cooperative a professional organization, its members must be developed into professionals; to realize this, the principle of education and training for cooperative members certainly plays an important role. Third, the Court considered cooperative capital. Under the Law a quo, cooperative capital consists of the Principal Deposit and Cooperative Capital Certificates as initial capital, and may also come from grants, investment capital, loan capital from members, and so on. The provision stating that the Principal Deposit is paid by a Member at the time the membership application is filed and cannot be returned cannot be justified. The principal deposit in a cooperative must be seen as the expression of a person's decision to join voluntarily as a member of the cooperative, so if a member decides to leave or quit for some reason, the principal deposit should naturally be withdrawable.
Concerning the capital certificates that members are required to buy, this does not follow the principles of voluntariness and openness. It means that the orientation of the cooperative has shifted towards a pool of capital, thereby denying the identity of the cooperative as a gathering of people whose joint effort is its main capital. With this arrangement, it is feared that the cooperative will lose its distinctiveness in making important decisions. Moreover, under the arrangement for capital certificates, a certificate that is released cannot be sold outside but must be purchased by other members or by the cooperative. As with the principal deposit arrangement, there is an element of coercion in this: what if fellow members do not want to buy, or the cooperative's funds are insufficient? This is detrimental to the members of the cooperative (Sulaiman et al., 2014). The last issue related to cooperative capital is investment. This must be avoided because it opens the door to intervention by outside parties, including the government and foreign parties, through unlimited capital. A cooperative, as a group of people, would then be no different from a limited liability company as a collection of capital, or even from a publicly listed company that raises as much capital as possible, with the attendant risk of opening opportunities for intervention from parties outside the cooperative (Swasono, 2005). Fourth, the Court considered the prohibition on distributing the surplus of business results derived from transactions with non-members. There is an injustice here in the balance of rights and obligations: when a cooperative achieves an operating surplus, the member is not entitled to it, yet when the cooperative suffers an operating deficit, whether caused by transactions with members or with non-members, members are required to deposit cooperative capital certificates as additional capital. Under this arrangement, the cooperative seems to place itself as an entity separate from its members. In fact, what the cooperative owns should be used for the welfare of its members, because that is the purpose of a cooperative. Fifth, the Court considered the types of cooperatives. Under this arrangement, a cooperative is forced to choose one of the types specified in the law. This does not match the facts in the field regarding the development of cooperatives. Limiting the types of cooperative business activities constrains the creativity of cooperatives in determining their own business activities, which may well develop, along with advances in science, technology, culture, and the economy, into new types of activities that meet human economic needs. This arrangement seeks to dwarf the existence of cooperatives (Rachman, 2016). If a corporation (PT) is permitted to form a conglomerate, why must a cooperative be limited in its fields of business? This creates injustice for the actors and movers of cooperatives, and many multi-purpose cooperatives have in fact succeeded. With all of the above considerations, it was appropriate for the Constitutional Court to annul the entire Cooperative Law. Arrangements touching on fundamental principles were contrary to the identity of the cooperative itself. The Constitutional Court has thereby saved the political direction of the legal regulation of cooperatives in Indonesia.
CONCLUSION One piece of shared homework arising from the decision of the Constitutional Court is the dictum of the ruling stating that Law Number 25 of 1992 concerning Cooperatives remains valid temporarily until a new Law is formed. As discussed above, Law Number 25 of 1992 itself violates many cooperative principles and is no longer fit for enforcement. Indeed, some stipulations of Law Number 25 of 1992 regulate the same content as the Law that was canceled: for example, on capital, Law Number 25 of 1992 also recognizes capital investment. In addition, in the distribution of the surplus of business results (SHU), members are likewise only given a profit share proportional to the business services performed by each member; what if the transaction is done with non-members? The Court should also have provided an interpretation of some of the provisions of Law Number 25 of 1992. Of course, the decision also raises constitutional problems, especially because the Constitutional Court did not set a time limit on how long the making of the new cooperative law may take.
Nontoxic and Naturally Occurring Active Compounds as Potential Inhibitors of Biological Targets in Liriomyza trifolii In recent years, novel strategies to control insects have been based on protease inhibitors (PIs). In this regard, molecular docking and molecular dynamics simulations have been extensively used to investigate insect gut proteases and the interactions of PIs for the development of resistance against insects. We, herein, report an in silico study of (disodium 5′-inosinate and petunidin 3-glucoside), (calcium 5′-guanylate and chlorogenic acid), chlorogenic acid alone, (kaempferol-3,7-di-O-glucoside with hyperoside and delphinidin 3-glucoside), and (myricetin 3′-glucoside and hyperoside) as potential inhibitors of acetylcholinesterase receptors, actin, α-tubulin, arginine kinase, and histone receptor III subtypes, respectively. The study demonstrated that the inhibitors are capable of forming stable complexes with the corresponding proteins while also showing great potential for inhibitory activity in the proposed protein-inhibitor combinations. Introduction A concerning problem that threatens food security around the world is the emergence of insects capable of developing resistance to insecticides [1]. Excessive use of many of these insecticides is associated with various health and environmental issues [2][3][4]. Liriomyza trifolii is a highly polyphagous pest in crop fields and greenhouses that has detrimental economic impacts [5]. Both larvae and adults selectively eat only the layers with the least amount of plant cellulose [6]. Stippling is one example of the damage in crop plants caused by the sap-sucking female fly; the internal mining caused by larvae is another such example. These various types of damage allow pathogenic fungi to enter the plant. Binding Site Prediction and Protein-Ligand Interaction The putative ligand binding sites (both major and minor) of the predicted proteins were identified with Discovery Studio software and visualized (Figure 1). All target proteins (acetylcholinesterase, α-tubulin, actin, arginine kinase, histone subunit III, Hsp90, elongation factor 1-alpha, and carbamoyl phosphate synthase) were docked with the ligands, most of which were phytochemicals derived from the leaves of Phaseolus vulgaris [16,17] and from the yeast extract. We evaluated the protein-ligand interactions through SAMSON software [18]. It was found that the tool shows discrepancies in accurate pose prediction among the various putative docking poses. Molecular Docking and Binding Free Energy Calculation The prepared protein structures (acetylcholinesterase, α-tubulin, actin, arginine kinase, histone subunit III, Hsp90, elongation factor 1-alpha, and carbamoyl phosphate synthase) were docked using SAMSON software with the phytochemical compounds and yeast-extract compounds listed in the supplementary data. The results of the docking studies are provided in Table 2, which reveals that the phytochemical compounds were superior to the yeast extract compounds based on the docking score. All docking results were monitored by scoring functions that predict how well the ligand binds in a particular docked pose. This scoring function yields a ranking of the ligands. In the present study, the docking score was taken into consideration for the selection of the best ligands. This allowed us to explain the mechanisms of insect death.
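As an illustration of the ranking step just described, the following minimal Python sketch orders candidate ligands by a Vina-style docking score, where a more negative predicted affinity indicates tighter binding. The ligand names are taken from this study, but the scores are illustrative placeholders, not values reported here.

```python
# Minimal sketch of docking-score ranking. The scores below are illustrative
# placeholders (kcal/mol), not results from this study.
vina_scores = {
    "petunidin 3-glucoside": -8.4,
    "chlorogenic acid": -7.9,
    "hyperoside": -7.6,
    "disodium 5'-inosinate": -6.8,
}

# A more negative predicted affinity means a better pose, so ascending order
# puts the best-scoring ligand first.
ranked = sorted(vina_scores.items(), key=lambda kv: kv[1])

for rank, (ligand, score) in enumerate(ranked, start=1):
    print(f"{rank}. {ligand}: {score:.1f} kcal/mol")
```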
A mathematical empirical scoring function was used to approximate the binding affinity between the two molecules after docking by estimating the ligand's binding free energy [20]. It includes various force field interactions, such as electrostatic and van der Waals contributions, which influence ligand binding. Subsequently, the docked structures were queried for binding free energy calculation. The results of the binding free energy calculations are provided in Table 2. It was found that the binding energy values supported the docking results well. Hesperidin, Naringin, and Rosmarinic acid have higher binding energies than the other compounds. All of the other values contribute to the ΔG values, which reflect the binding energy of the protein-ligand complex. Figure 1. The large red sphere represents the cavities surrounding the active sites, visualized using the visualization module of Discovery Studio 3.0 [19]. Binding Pose Analysis The binding modes of the compounds with the proteins (acetylcholinesterase, α-tubulin, actin, arginine kinase, histone subunit III, Hsp90, elongation factor 1-alpha, and carbamoyl phosphate synthase) showed the different interactions between the proteins and ligands listed in Table 3. The interactions between the inhibitors and their target proteins, as well as their binding modes and orientations, are shown in Figures 2-9.
Root Mean Square Deviation (RMSD) Analysis Calculations of the RMSD for the ligand-enzyme complexes were used to determine the dynamic stability and conformational perturbation occurring in each of the simulated systems over the simulation time scale. The RMSD values were calculated for the following protein-inhibitor combinations: acetylcholinesterase with disodium 5′-inosinate and petunidin 3-glucoside; actin with calcium 5′-guanylate D and chlorogenic acid; α-tubulin with chlorogenic acid alone; arginine kinase with kaempferol-3,7-di-O-glucoside I, hyperoside D, and delphinidin 3-glucoside ID; and histone subunit III complexes with myricetin 3′-glucoside ID and hyperoside D. All the trajectories reached an equilibrium state after 20 ns, as shown in Figure 10. The RMSD values for all complexes were observed to be stable during the 50 ns simulation. Radius of Gyration (Rg) Analysis The Rg factor best describes the stability of receptor-ligand complexes during molecular dynamics simulations. The results demonstrate that the Rg values at different time points for the acetylcholinesterase, actin, α-tubulin, arginine kinase, and histone subunit III complexes with their respective ligands are constant during the 50 ns simulation, which indicates the compactness of all of the proteins (Figure 11).
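RMSD and Rg traces of this kind can be computed directly from the simulation trajectories. The following is a minimal sketch using the MDAnalysis library; the topology and trajectory file names are hypothetical placeholders, and the study itself used GROMACS analysis tools rather than this exact route.

```python
import MDAnalysis as mda
from MDAnalysis.analysis import rms

# Hypothetical file names; substitute the actual simulation outputs.
u = mda.Universe("complex.gro", "complex_50ns.xtc")

# Backbone RMSD against the first frame, after optimal superposition.
rmsd_run = rms.RMSD(u, u, select="backbone", ref_frame=0).run()
# Columns of rmsd_run.results.rmsd: frame index, time (ps), RMSD (Angstrom).
for frame, time_ps, rmsd in rmsd_run.results.rmsd[:5]:
    print(f"t = {time_ps:8.1f} ps  RMSD = {rmsd:.2f} A")

# Radius of gyration of the protein, frame by frame.
protein = u.select_atoms("protein")
rg_trace = [protein.radius_of_gyration() for ts in u.trajectory]
print(f"mean Rg = {sum(rg_trace) / len(rg_trace):.2f} A")
```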
Hydrogen Bond Analysis The numbers of hydrogen bonds for the ligand-enzyme complexes are plotted over the 50-ns MD simulation interval (Figure 12). Since hydrogen bonds constitute a transient connection that provides stability to the receptor-ligand complex, they are an important factor to consider when discussing receptor-ligand stability, and they determine the specificity of the binding mode. In this study, we calculated all of the hydrogen bonds for all of the complexes; the numbers of hydrogen bonds at different time points are shown in Figure 12. The ranges of the numbers of hydrogen bonds calculated for the inhibitors (disodium 5′-inosinate and petunidin 3-glucoside), (calcium 5′-guanylate D and chlorogenic acid), chlorogenic acid, (kaempferol-3,7-di-O-glucoside I, hyperoside D, delphinidin 3-glucoside ID), and (myricetin 3′-glucoside ID, hyperoside D) are (0-6, 0-5), (0-7, 0-9), 0-9, and (0-8, 0-9, 0-10), respectively. All of the predicted ligands showed continuous hydrogen bonding during the 50 ns simulation, which demonstrates the stability of the complexes. The only exception was chlorogenic acid, which shows stable hydrogen bonding only over a span of 35 ns.
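A geometric criterion of this kind can be sketched in a few lines. The function below counts donor-acceptor pairs within the common 3.5 Å heavy-atom cutoff for a single frame; a full analysis such as the one above would typically also apply an angle criterion and iterate over all frames. The coordinates here are synthetic placeholders.

```python
import numpy as np

def count_hbonds(donors, acceptors, cutoff=3.5):
    """Count donor-acceptor pairs within a distance cutoff (Angstrom).

    A complete analysis would also apply a donor-hydrogen-acceptor angle
    criterion; this sketch uses the common 3.5 A heavy-atom cutoff only.
    """
    # Pairwise distances between all donor and acceptor heavy atoms.
    diff = donors[:, None, :] - acceptors[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    return int((dist < cutoff).sum())

# Synthetic placeholder coordinates standing in for one trajectory frame.
rng = np.random.default_rng(0)
ligand_donors = rng.uniform(0.0, 20.0, size=(8, 3))
protein_acceptors = rng.uniform(0.0, 20.0, size=(40, 3))

print("H-bond count this frame:", count_hbonds(ligand_donors, protein_acceptors))
```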
Root Mean Square Fluctuation Analysis (RMSF) The RMSF value reflects the flexibility and mobility of a structure: a higher RMSF value indicates a loosely bonded structure with twists, curves, and coils, while a lower RMSF value indicates a stable secondary structure, including α-helices and β-sheets. Our RMSF analysis demonstrates that all of the ligands showed little conformational variation during binding and can act as stable complexes (Figure 13). Molecular Mechanics Poisson-Boltzmann Surface Area Free Energy Calculations The binding capacity of the ligand towards the receptor is quantitatively estimated by binding free energy analysis. The binding free energy is the summation of all non-bonded interaction energies. The binding free energy of the interactions between acetylcholinesterase, actin, α-tubulin, arginine kinase, and histone subunit III and the docked ligands was estimated using the molecular mechanics Poisson-Boltzmann surface area tool (g_mmpbsa) [21]. This useful tool allows for efficient and reliable free energy simulation to model protein-ligand interactions. Our binding energy analysis spanning the 50 ns MD simulation trajectories shows that all ligands have a binding affinity towards enzyme inhibition and form stable complexes. Other kinds of interaction energies, such as van der Waals energy, electrostatic energy, polar solvation energy, and solvent accessible surface area (SASA) energy, were also calculated for all of the complexes (Figure 14). The results indicate that the van der Waals, electrostatic, and SASA energies contribute negatively to the total interaction energy, while only the polar solvation energy contributes positively to the total binding free energy. In particular, the contribution of the van der Waals interactions is much greater than that of the electrostatic interactions in all cases except the complexes arginine kinase-delphinidin 3-glucoside and histone subunit III-myricetin 3′-glucoside. Furthermore, the contribution of the SASA energy is relatively small when compared to the total binding energy. The negative value of the van der Waals energy also points to a significant hydrophobic interaction between the ligands and the enzymes [22]. Figure 14. Representation of the van der Waals, electrostatic, polar solvation, SASA, and binding energy for the compounds docked into acetylcholinesterase, actin, α-tubulin, arginine kinase, and histone subunit III, with the inhibitors (disodium 5′-inosinate and petunidin 3-glucoside), (calcium 5′-guanylate D and chlorogenic acid), chlorogenic acid, (kaempferol-3,7-di-O-glucoside I, hyperoside D, delphinidin 3-glucoside ID), and (myricetin 3′-glucoside ID, hyperoside D) complexes.
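The bookkeeping behind these totals is a plain sum of the per-term energies. The sketch below illustrates it; the numbers are illustrative placeholders (in kJ/mol), not values reported in this study.

```python
# Sketch of the MM-PBSA summation: the binding free energy is the sum of the
# non-bonded interaction terms. Placeholder values, not results of this study.
complexes = {
    "arginine kinase / kaempferol-3,7-di-O-glucoside": {
        "vdW": -120.3, "electrostatic": -45.7, "polar_solvation": 95.2, "SASA": -14.1,
    },
    "histone subunit III / myricetin 3'-glucoside": {
        "vdW": -85.6, "electrostatic": -110.4, "polar_solvation": 140.8, "SASA": -11.9,
    },
}

for name, terms in complexes.items():
    # dG_bind = E_vdW + E_elec + G_polar + G_SASA
    dg_bind = sum(terms.values())
    print(f"{name}: dG_bind = {dg_bind:+.1f} kJ/mol "
          f"(vdW {terms['vdW']:+.1f}, elec {terms['electrostatic']:+.1f}, "
          f"polar {terms['polar_solvation']:+.1f}, SASA {terms['SASA']:+.1f})")
```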
Principal Component Analysis (PCA) Principal component analysis is a method that utilizes linear combinations of measured variables, which allows for a reduction of the dimensionality of the data and helps identify the principal sources of variation. In molecular dynamics simulations, PCA is a popular method to account for the essential dynamics of the system on a low-dimensional free energy landscape [23]. To analyze the collective motion of all complexes, a PCA based on the Cα atoms was performed. It was observed that the first few eigenvectors of the principal components (PCs) of the structures play an important role and describe the overall motions of the entire system. These data suggest that kaempferol-3,7-di-O-glucoside ID formed very stable complexes with arginine kinase, and myricetin 3′-glucoside ID with histone subunit III, so that these can be considered lead compounds (Figure 15). Figure 15. For the PCA analysis, the plot of eigenvalues vs. eigenvectors has been considered. Since it was previously found that the first five eigenvectors constitute the majority of the total dynamics of the whole system, we plotted only the first two eigenvectors against each other, where each dot represents correlated motions (Figure 16). Well-clustered dots signify a more stable structure, while loosely clustered dots represent a less stable structure.
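The essential-dynamics computation described above reduces, in outline, to diagonalizing the covariance matrix of the Cα coordinates. The sketch below shows this with NumPy on a randomly generated stand-in for the trajectory; it is not the authors' analysis code.

```python
import numpy as np

# `coords` holds one flattened C-alpha coordinate vector per trajectory frame.
# A random array stands in for real trajectory data in this sketch.
rng = np.random.default_rng(1)
n_frames, n_calpha = 500, 120
coords = rng.normal(size=(n_frames, n_calpha * 3))

# Remove the mean structure, then diagonalize the covariance matrix.
centered = coords - coords.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# eigh returns ascending eigenvalues; reverse so PC1 comes first.
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
explained = eigvals[:5] / eigvals.sum()
print("fraction of motion captured by the first five PCs:", explained.round(3))

# Projecting frames onto PC1/PC2 gives the dots plotted against each other.
projection = centered @ eigvecs[:, :2]
print("projection shape (frames x 2):", projection.shape)
```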
Database Search, Structural Modeling, and Model Validation All protein sequences were obtained from NCBI (https://www.ncbi.nlm.nih.gov/, accessed on 10 August 2022) in FASTA format and are listed by their GenBank accession numbers in Table 2. The Liriomyza trifolii (NCBI taxonomy ID: 32264) proteins were selected by searching all of the sequential homologs and orthologs using the NCBI BLAST server [24] with the default values, against the nonredundant protein sequences. The sequences were retrieved in the FASTA format as amino-acid sequences. The initial atomic structures of the proteins, based on homology modeling, were built using the Swissmodel server (https://Swissmodel.expasy.org/, accessed on 10 August 2022). In this study, a sequence of BLAST-P similarities for the recognition of closely related structural homologs in Liriomyza trifolii was queried against the PDB database [18]. The first hit of the BLAST-P annotation was used to identify the templates based on the PDB template ID. The PDB files of the templates were collected from the Protein Data Bank and aligned using BLAST.
The Swissmodel server used the target sequence file, the alignment file, the PDB file of the prototype, and all the template proteins to build the homology models. Homology models with a score of <−4 were chosen. The optimized models (acetylcholinesterase, α-tubulin, actin, arginine kinase, histone subunit III, heat shock protein 90 (Hsp90), and elongation factor 1-alpha) were found to be suitable based on several qualitative background checks, including PROCHECK (PDBSum) and the SAVES server (https://saves.mbi.ucla.edu/, accessed on 10 August 2022). The Ramachandran plots showed that the predicted models were close to the templates, with 99.1%, 92.6%, 86.7%, 84.4%, 88.6%, and 88.4% of residues lying in the favored regions. The ERRAT score values of 99.1304, 89.7527, 96.4539, 82.0707, 96.5217, and 90.9774, together with the QMEAN scores, indicated that the predicted models were reliable and satisfactory, as they are better than the reference values of a QMEAN score of <−4 and an ERRAT score of around 95% for a model with a satisfactory resolution [24]. Preparation of Proteins and Ligands The sequences of the Liriomyza trifolii proteins (acetylcholinesterase, actin, α-tubulin, arginine kinase, elongation factor 1-alpha, Hsp90, and histone subunit III), with GenBank accession numbers (CAI30732.1, ARQ84036.1, ARQ84030.1, ARQ84038.1, ARQ84034.1, AGI19327.1, ARQ84032.1, ABL07756.1, respectively), were obtained from NCBI. The protein sequences were retrieved in the FASTA format. The 3-D structures of the proteins were built using the Swissmodel server (https://Swissmodel.expasy.org/, accessed on 10 August 2022). These proteins were selected as target receptor proteins and were imported into the 3Drefine server to perform energy minimization (http://sysbio.rnet.missouri.edu/3Drefine/, accessed on 10 August 2022). For the docking studies, all water molecules and ligands were removed, and hydrogen atoms were added to the target proteins. The docking system was built using SAMSON software 2020 (https://www.samson-connect.net/, accessed on 10 August 2022). The structures were prepared using the protein preparation wizard of the Autodock Vina extension of the SAMSON 2020 software. The X, Y, and Z dimensions of the receptor grid, used for the blind docking of the ligands to the proteins, are reported in Table 3. The ligands were retrieved from the PubChem database in SDF format. Subsequently, each ligand was converted into MOL2 format using OpenBabel software (http://openbabel.org/wiki/Main_Page, accessed on 10 August 2022), followed by an energy minimization at pH 7.0 ± 2.0 in SAMSON software. Binding Site Prediction and Protein-Ligand Docking Discovery Studio software and SAMSON software were used for binding site prediction. SAMSON uses Autodock Vina as an extension to maximize the accuracy of these predictions while minimizing computer run-time [25]. It uses the interaction energy between the protein and a simple van der Waals probe to locate energetically favorable binding sites. The program predicts the potential affinity, the molecular structure, the geometry optimization of the structure, the vibration frequencies, the coordinates of atoms, bond lengths, and bond angles. Following an exhaustive search, 100 poses were analyzed, and the best-scoring poses were used to calculate the binding affinity of the ligands. The ligands that tightly bind to a target protein with high scores are listed in Table 3.
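The SDF-to-MOL2 conversion step mentioned above can also be scripted with Open Babel's Python interface. The sketch below is one plausible route under that assumption; the file name is hypothetical, and the study performed the pH-dependent minimization within SAMSON rather than through this exact call sequence.

```python
from openbabel import pybel

# Read a ligand downloaded from PubChem in SDF format (hypothetical file name).
ligand = next(pybel.readfile("sdf", "ligand.sdf"))

ligand.addh()                                     # add hydrogen atoms
ligand.localopt(forcefield="mmff94", steps=500)   # short force-field minimization
ligand.write("mol2", "ligand.mol2", overwrite=True)
print("converted:", ligand.title or "ligand")
```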
The proteins were docked against a variety of bioactive compounds, namely phytochemical components identified by HPLC from the leaves of Phaseolus vulgaris (ref) and yeast extract, using SAMSON software [21]. The 2-D interaction analysis was carried out to find favorable binding geometries of the ligands with the proteins using Discovery Studio software. Thus, 2-D interaction images of the docked protein-ligand complexes with high scores at the predicted active sites were obtained. Protein Ligand Interaction Using SAMSON and Discovery Studio Software The ligands were docked with the target proteins (acetylcholinesterase, actin, α-tubulin, arginine kinase, elongation factor 1-alpha, Hsp90, and histone subunit III), and the best docking poses were identified. Figures 1 and 2 show the 2-D and 3-D structures of the binding poses of the compounds, respectively. Protein-Protein Interaction Network The Liriomyza trifolii proteins were submitted to the STRING server (Search Tool for the Retrieval of Interacting Genes/Proteins, version 10.0) to derive the functional interaction network between partners [24]. The interactions were examined at medium and high confidence. Molecular Dynamics Simulation The molecular dynamics approach is widely used to assess the behavior and structural stability of atoms and to study conformational changes at the atomic level. Understanding the stability of a protein upon ligand binding is significantly improved by molecular dynamics simulation studies. Gromacs 4.6.2 [26] with the GROMOS96 54a7 force field [27] was used for the MD simulation studies of the two systems, at 50 ns each. The PRODRG2 server was used to generate the topologies of the enzyme-ligand complexes. Each system was placed in the center of a cubic box, with a distance of 1.0 nm between the enzyme and the edge of the simulation box. Each system was solvated with explicit water molecules. Before energy minimization, all systems were neutralized by adding Na+ and Cl− ions accordingly. The steepest descent method was used for the energy minimization of each system. MD simulations with NVT (isochoric-isothermal) and NPT (isobaric-isothermal) ensembles (N = constant particle number, V = volume, P = pressure, T = temperature) were performed for 1 ns each to equilibrate the enzyme-ligand systems at constant volume, pressure (1 atm), and temperature (300 K). To calculate the electrostatic interactions, the Particle Mesh Ewald (PME) algorithm [25] was used with a grid spacing of 1.6 Å and a cutoff of 10 Å, and the LINCS method was used to constrain the bond lengths. Finally, the trajectories were saved at every 2-fs time step, and the production MD simulation of each enzyme-ligand complex was performed for 50 ns [28]. Conclusions This study presented an array of naturally occurring, nontoxic, easily extractable, low-cost ligands that show great potential as inhibitors of a variety of proteins found in the gut of the polyphagous pest L. trifolii, which is known to attack a myriad of crops. The target proteins are acetylcholinesterase, actin, α-tubulin, arginine kinase, and histone receptor III subtypes. The proposed inhibitors or inhibitor combinations are (disodium 5′-inosinate and petunidin 3-glucoside), (calcium 5′-guanylate and chlorogenic acid), chlorogenic acid alone, (kaempferol-3,7-di-O-glucoside with hyperoside and delphinidin 3-glucoside), and (myricetin 3′-glucoside and hyperoside), respectively.
In the absence of experimentally determined structures of the target proteins, the initial models of the proteins of L. trifolii origin were constructed using homology modeling. The analyses used in this investigation included structural modeling, binding site interaction prediction, molecular docking free energy calculations, binding pose analysis, dynamic stability and conformational perturbation analysis, radius of gyration analysis, hydrogen bond analysis, and molecular mechanics PBSA free energy calculations. The results demonstrated that the proposed inhibitors formed stable complexes with their target proteins while also having great potential for inhibitory activity. All five ligand-protein complexes have favorable parameter values in RMSD, RMSF, RoG, intermolecular hydrogen bonding, and binding free energy over 50 ns. Trajectory analysis showed that the studied complexes displayed structural stability during the MD runs.
Detection method of space-time conflict for multi route planning In order to solve the problem of space-time conflict in multi route planning, this paper proposes a space-time conflict detection method based on bounding boxes. The main research contents include: dividing each route into a finite number of road sections, with each road section using a bounding box to represent the space occupied by the target during its period of time, and then using the OBB collision detection method to judge whether different bounding boxes overlap in the same space during the same period of time, so as to judge whether a space-time conflict exists. In addition, the distances among road sections are calculated before detection to exclude pairs of road sections where a conflict cannot occur. The feasibility of the conflict detection method is verified by experiments on a multi route planning case. Introduction Before an aircraft performs a mission, it is necessary to formulate a flight route plan for it. The flight route plan includes the time and position of the aircraft. When formulating a route plan with a number of aircraft, it is necessary to consider whether a conflict is possible among those aircraft during flight, to ensure the safety of the aircraft. When formulating a route plan, we should avoid different aircraft being in the same position at the same time. This is the problem of space-time conflict when formulating the multi route plan. In order to solve this problem, it is necessary to detect space-time conflicts, that is, to judge which aircraft in the flight route plan may collide, at what time, and at what position. The operator who formulates the flight plan can then know how to adjust it. Many scholars have carried out research on space-time conflict detection [1][2]. Most research focuses on space-time conflict during plan execution [3][4] or on spatial conflict without time [5], but little research addresses space-time conflict at the plan formulation stage. At present, space-time conflict detection methods are still not mature, so a new space-time conflict detection method for multi route planning is presented in this paper. To detect space-time conflicts, time and space must be modeled first. In this paper, we consider using periods of time to represent time, and using the bounding box occupied during each period of time to represent space. Given a route L in three-dimensional space, L contains many waypoints, and the information of a waypoint includes the aircraft's position and time, so L can be represented by a two-tuple <P, T>. For the i-th road section, the variables are: x_i and x_{i+1}, the x-axis coordinates of the i-th and (i+1)-th waypoints; y_i and y_{i+1}, the y-axis coordinates of the i-th and (i+1)-th waypoints; z_i and z_{i+1}, the z-axis coordinates of the i-th and (i+1)-th waypoints; and t_i and t_{i+1}, the times of the i-th and (i+1)-th waypoints. Assuming uniform straight-line motion within a road section, the position of the aircraft at any time t in [t_i, t_{i+1}] can be obtained by linear interpolation between the two waypoints; in the same way, the position of the aircraft at t_e can be derived.
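The interpolation just described is straightforward to implement. The sketch below assumes uniform straight-line motion between adjacent waypoints, as the paper does; the waypoint values are hypothetical.

```python
import numpy as np

def position_at(p_i, p_next, t_i, t_next, t):
    """Linearly interpolate the aircraft position at time t within [t_i, t_next].

    Assumes uniform straight-line motion along the road section, as in the paper.
    Times are in seconds; positions are (x, y, z) triples.
    """
    if not (t_i <= t <= t_next):
        raise ValueError("t lies outside this road section's period of time")
    alpha = (t - t_i) / (t_next - t_i)
    return (1.0 - alpha) * np.asarray(p_i, float) + alpha * np.asarray(p_next, float)

# Hypothetical adjacent waypoints of one road section.
p1, t1 = (100.0, 120.0, 75.0), 0.0
p2, t2 = (200.0, 160.0, 75.0), 60.0
print(position_at(p1, p2, t1, t2, 30.0))  # midpoint of the section: [150. 140. 75.]
```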
The space occupied by an object in the period of time [t_s, t_e] can be represented by several bounding boxes. The bounding box belonging to a road section is represented as a cuboid, as Figure 1 shows. In this figure, P1 and P2 are two adjacent waypoints, and the connection between the two waypoints constitutes the road section P1P2. A1B1C1D1A2B2C2D2 is the bounding box of the road section P1P2. O1 and O2 are the centers of the squares A1B1C1D1 and A2B2C2D2, respectively. A1B1C1D1 and A2B2C2D2 have side lengths of 2·d_safe, where d_safe is the safe distance. A1A2, B1B2, C1C2, and D1D2 are parallel to P1P2, while A1B1C1D1 and A2B2C2D2 are perpendicular to P1P2. From the above, given any period of time, the space occupied by the aircraft during that period can be determined. So, the flight time can be divided into finite periods of time, and each period of time has a space which the aircraft occupies during it. Space-time conflict detection By the above analysis, time is represented by periods of time and space is represented by bounding boxes. So, to judge whether a space-time conflict occurs, we only need to judge whether the spaces occupied by different aircraft during the same period of time overlap. Space-time conflict detection based on OBB The OBB (Oriented Bounding Box) detection method was proposed by Gottschalk and is used for conflict detection of oriented bounding boxes. Since a cuboid is used to represent the space, and the orientation of the cuboid is uncertain, the cuboid can be regarded as an oriented bounding box, and the OBB conflict detection method is used to judge whether different bounding boxes overlap. Generally, the separating axis theorem is used: if an axis can be found such that the projections of two convex objects onto this axis do not overlap, then the two objects do not intersect; if no such axis can be found, then the two objects intersect. For the OBB conflict detection of two cuboids, one only needs to search for a separating axis to determine whether the two cuboids intersect. There are 15 candidate separating axes: the 3 local coordinate axes of one cuboid, the 3 local coordinate axes of the other cuboid, and the 9 axes obtained as cross products of one axis from each cuboid. One only needs to judge whether the projections onto these 15 axes overlap to determine whether the two objects intersect. Optimization of space-time conflict detection based on distance judgment OBB conflict detection is very time-consuming, so it is necessary to minimize the number of OBB conflict detections to improve the method's performance. By analysis, OBB conflict detection is not necessary in the following case: when the distance between the two bounding boxes is too large, that is, when the distance between the centers of the two bounding boxes is greater than the sum of the radii of the circumscribed spheres of the two bounding boxes. We can easily calculate the radii of the circumscribed spheres of the bounding boxes and the distance between their centers, so we just need to make sure that d_c > ε; in this case, there is no need to use OBB conflict detection. Here, ε is the sum of the radii of the circumscribed spheres of the two bounding boxes, and d_c is the distance between the centers of the two bounding boxes. Algorithm flow The algorithm flow is shown in Figure 2. In this figure, OBB(C_ik, C_jm) = 1 means that the OBB detection method judges that there is a conflict between the bounding boxes C_ik and C_jm.
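A compact implementation of the two steps above (circumscribed-sphere pruning followed by the 15-axis separating axis test) might look as follows. This is a sketch, not the authors' code; the box parameters in the usage example are hypothetical.

```python
import numpy as np

def spheres_disjoint(c1, e1, c2, e2):
    # Circumscribed-sphere radius of a box = norm of its half-extent vector.
    eps = np.linalg.norm(e1) + np.linalg.norm(e2)
    return np.linalg.norm(c2 - c1) > eps  # d_c > eps: no OBB test needed

def obb_conflict(c1, A1, e1, c2, A2, e2):
    """Separating axis test for two OBBs.

    c: center (3,); A: 3x3 matrix whose columns are the box's unit axes;
    e: half-extents (3,) along those axes.
    """
    if spheres_disjoint(c1, e1, c2, e2):
        return False
    axes = [A1[:, i] for i in range(3)] + [A2[:, j] for j in range(3)]
    for i in range(3):                      # the 9 cross-product axes
        for j in range(3):
            cp = np.cross(A1[:, i], A2[:, j])
            n = np.linalg.norm(cp)
            if n > 1e-9:                    # skip near-parallel edge pairs
                axes.append(cp / n)
    d = c2 - c1
    for L in axes:                          # up to 15 candidate separating axes
        r1 = np.sum(e1 * np.abs(A1.T @ L))  # projection radius of box 1 onto L
        r2 = np.sum(e2 * np.abs(A2.T @ L))
        if abs(np.dot(d, L)) > r1 + r2:
            return False                    # separating axis found: no conflict
    return True                             # no separating axis: conflict

# Hypothetical axis-aligned boxes that overlap.
I = np.eye(3)
print(obb_conflict(np.zeros(3), I, np.array([2.0, 2.0, 1.0]),
                   np.array([1.0, 1.0, 0.0]), I, np.array([2.0, 2.0, 1.0])))  # True
```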
Case analysis Two routes L_1 and L_2 are given in a Cartesian coordinate system. After the distance-based exclusion, the OBB detection method is applied to the remaining candidate road-section pairs (among them C_15 with C_24, and C_15 with C_23), and the conclusion is obtained that there is a conflict between C_15 and C_24; that is, there will be a space-time conflict between the road section r_15 in L_1 and the road section r_24 in L_2. To verify this result, assuming that the aircraft travels in a straight line at a constant speed within a road section, when t = 10:47:30 the waypoint position on route L_1 can be calculated to be (152, 136, 75), the waypoint position on route L_2 can be calculated to be (155, 134, 75), and the distance between the two waypoints at this time is 3.6, which is less than the safe distance. So there is a risk of conflict; that is, a space-time conflict will occur. Conclusion This paper studies space-time conflict detection in multi route planning. In the past, most research on space-time conflict detection concerned the flight process. In this paper, the time in the flight route plan is regarded as a set of periods of time, and space is regarded as the bounding box occupied by each aircraft in each period of time. By judging whether different bounding boxes overlap in the same space during the same period of time, it can be determined whether there is a space-time conflict in the flight route plan. Finally, the effectiveness of the method is verified by experiments. However, the method has certain requirements on the interval of the periods of time: if a period of time is too long, the method may judge a route that would not actually conflict as a conflicting route.
Association of Gestational Hypertension with Anemia under 5 Years Old: Two Large Longitudinal Chinese Birth Cohorts Gestational hypertension may interfere with placental iron metabolism, thus probably increasing the risk of childhood anemia. We aim to examine the association between gestational hypertension and childhood anemia at different ages in two large Chinese birth cohorts. Cohort 1 was conducted in 5 counties in northern China and comprised 17,264 mother-child pairs (97.3%) during 2006-2009, whereas cohort 2 was conducted in 21 counties in southern China and comprised 185,093 mother-child pairs (93.8%) during 1993-1996. All pregnant women were registered in a monitoring system and followed up until the termination of their pregnancies. Childhood anemia was diagnosed at 6 and 12 months in cohort 1 and at 55 months in cohort 2. The overall incidence of childhood anemia was 6.78% and 5.28% at 6 and 12 months, respectively, in cohort 1, and 13.18% at 55 months in cohort 2. Gestational hypertension was associated with an increased risk of anemia at 6 months (adjusted Odds Ratio (OR): 1.31; 95% confidence interval (CI): 1.05, 1.63) and at 12 months (adjusted OR: 1.50; 95% CI: 1.18, 1.90) in cohort 1, and at 55 months (adjusted OR: 1.06; 95% CI: 1.01, 1.12) in cohort 2. The hemoglobin values of children at different ages were lower in the gestational hypertension group in the linear models, which was consistent with the results of the binary regression analysis. Our study found that gestational hypertension may be associated with an increased risk of childhood anemia. This suggests a possible need to explore changes in prenatal care that might prevent childhood anemia. Introduction Anemia is one of the most common nutritional disorders among preschool children worldwide, particularly in developing countries. The estimated global prevalence of anemia under 5 years of age is 47.4%, affecting 293 million children [1]. Anemia during childhood may lead to an increased risk of morbidity and mortality at a young age [2]. It is also associated with impaired neurological development and may cause cognitive damage in the long run [3,4]. Several factors may relate to childhood anemia, such as genetic factors and the nutritional status of mother and child [5]. However, the underlying etiology of anemia still remains unclear. It is estimated that iron deficiency is the leading cause of childhood anemia [6]. Impaired placental function may limit iron storage and transfer, hence increasing the risk of anemia [7][8][9]. Gestational hypertension, defined as de novo hypertension, complicates 5-10% of all pregnancies [10]. Previous studies indicated that women with hypertensive disorders of pregnancy might show placental structural changes that affect the maternal-fetal transport of nutrients, including iron [11,12]. Both the ferritin and the calculated iron storage values of infants, markers used to evaluate iron storage status, were lower in the gestational hypertension group than in the normal group. Yet, some studies indicated that infants might not be affected by anemia even though their iron storage was abnormal [13]. As former studies had conflicting results and suffered from sample sizes too small to detect the effects, more evidence is needed to specifically examine the relationship between gestational hypertension and childhood anemia. During the first 4-6 months of postnatal life, the endogenous iron stores of healthy infants cover their demands for growth [14].
For infants whose mothers have gestational hypertension, iron storage may be affected, and these influences may persist thereafter. However, it still remains unclear whether an association between maternal hypertension during pregnancy and childhood anemia exists. In recent years, the theory of "developmental origins of health and disease" (DOHaD) has underscored the role of maternal health status in the trajectory of health development [15]. Although China has implemented a series of nutrition improvement projects for children over the past 30 years, maternal factors should be paid more attention. Therefore, we used two large Chinese birth cohort studies to investigate the association of gestational hypertension with childhood anemia at different ages. Background and Subjects for Current Study Cohort 1 originated from a randomized controlled trial aiming to evaluate the effect of micronutrient supplementation during pregnancy on pregnant women and fetuses. This study was implemented in 5 rural counties in Hebei province of northern China. Briefly, 18,775 primiparous women were enrolled before 20 weeks of gestation from 2006 to 2009 and were randomly assigned to 3 groups (folic acid, folic acid plus iron, and multiple micronutrients) [16]. Of the 18,775 women, all had hemoglobin concentrations ≥ 100 g/L, and 17,748 had live singleton births with available data registered in the monitoring system for both prenatal information and offspring examination records. We further excluded 43 (0.24%) women with multi-fetal gestations and 450 (2.54%) whose infants had no hemoglobin records during 5-7 months or 11-13 months. After these exclusions, 17,264 women (97.3% of the targeted population) from cohort 1 were finally included in the analysis. Cohort 2 was based on a large prospective cohort study aiming to study the effect of preconceptional use of folic acid on neural tube defects, as well as on child growth and development. It was implemented in 21 cities or counties in Zhejiang and Jiangsu provinces of southern China. Briefly, 215,871 women who prepared for marriage or became pregnant were registered in a perinatal health care surveillance system from 1993 to 1996 [17], and 197,333 of their children were followed up in 2000 to obtain information from physical examinations. Of these women, we excluded 3784 (1.92%) whose hypertensive-disorders-of-pregnancy records were unknown or outliers (systolic blood pressure < 60 mm Hg or > 200 mm Hg; diastolic blood pressure < 40 mm Hg or > 164 mm Hg), 110 (0.06%) with multi-fetal gestations, and 8687 (4.40%) whose children did not have hemoglobin records during 40-79 months. After these exclusions, 185,093 women (93.8% of the targeted population) from cohort 2 were finally included in the analysis (Figure 1). The original study was approved by the Peking University Health Science Center Institutional Review Board. The secondary analyses of already collected data were deemed exempt by the institutional review board. Exposure and Covariates We collected data through the same perinatal health care surveillance systems in both birth cohorts. At each visit, blood pressure in the right arm was measured by trained health workers.
They measured with a mercury sphygmomanometer and recorded on two or more consecutive occasions with an interval of ≥6 h. We defined gestational hypertension as an absolute blood pressure of ≥140/90 mm Hg after 20 weeks of gestation, or as a blood pressure increment of ≥30/15 mm Hg after 20 weeks of gestation as compared with the first trimester [18].
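This two-pronged definition translates directly into a decision rule. The sketch below encodes it, reading the "/" in "≥140/90" and "≥30/15" as an either-or criterion, which is an interpretive assumption on our part.

```python
def gestational_hypertension(sbp, dbp, sbp_first_tri, dbp_first_tri):
    """Classify gestational hypertension from blood pressure after 20 weeks.

    sbp/dbp: systolic and diastolic pressure (mm Hg) measured after 20 weeks;
    *_first_tri: the first-trimester baseline. The '/' thresholds are treated
    as either-or criteria, which is our interpretive assumption.
    """
    absolute = sbp >= 140 or dbp >= 90
    increment = (sbp - sbp_first_tri) >= 30 or (dbp - dbp_first_tri) >= 15
    return absolute or increment

print(gestational_hypertension(142, 88, 118, 76))  # True: absolute criterion
print(gestational_hypertension(132, 84, 100, 68))  # True: increment criterion
print(gestational_hypertension(124, 78, 112, 72))  # False
```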
The covariates drawn from the surveillance systems included continuous covariates: maternal age (years), body mass index (BMI, kg/m²) at first prenatal visit, and age at offspring hemoglobin measurement (months); and categorical covariates: education (high school or higher, junior high school, primary school or lower, or unknown), occupation (farmer or other), ethnicity (Han or other), feeding practices at 42 days postpartum (exclusive breastfeeding or other), anemia during pregnancy (maternal hemoglobin concentration < 110 g/L at any time during pregnancy; yes or no), micronutrient supplementation (folic acid, iron-folic acid, and multiple micronutrients) for cohort 1, and parity (primigravida or multigravida) and periconceptional folic acid consumption status (yes or no) for cohort 2. Hemoglobin Measurement Before study initiation, all doctors were trained to measure hemoglobin according to standard procedures. A step-by-step instruction leaflet was pasted on the wall of the doctors' rooms to ensure compliance. To minimize testing bias, the doctors' rooms were equipped with heating devices in winter to maintain the temperature above 18 °C. In cohort 1, infants' hemoglobin was measured using capillary blood via the HemoCue system (HemoCue AB, Angelholm, Sweden). In cohort 2, children's hemoglobin was measured with a standard cyanmethemoglobin method using capillary blood via devices available at each hospital; the two commonly used devices were the visible spectrophotometer and hemoglobinometer (model 721, China). All project hospitals were provided with standard hemoglobin solutions (50, 100, 150 and 200 g/L) and with a step-by-step procedure for calibrating the hemoglobinometer and for preparing the standard curve for the 721 visible spectrophotometer. We defined anemia as hemoglobin concentrations < 110 g/L for infants and children aged < 60 months, and concentrations < 150 g/L for children aged ≥ 60 months, based on WHO recommendations [19]. Statistical Analysis We compared the characteristics of women between gestational hypertension and non-gestational hypertension subjects in terms of mean age and BMI, ethnicity, education, occupation, anemia during pregnancy, feeding practices, and age at follow-up visit. We used Student's t-test for quantitative variables and the χ² test for categorical variables. Logistic regression models were used to estimate crude and adjusted odds ratios (ORs) after adjusting for the main underlying confounders such as maternal age, BMI, education, occupation, ethnicity, anemia during pregnancy, and week of gestation at hemoglobin measurement. One additional confounder for cohort 1 was micronutrient supplementation; additional confounders for cohort 2 were parity and folic acid intake. We further used linear regression models to estimate the crude and adjusted mean differences in hemoglobin for the different groups. Modification of the effect of gestational hypertension by covariates was examined by adding an interaction term to the multivariable logistic regression model. The effect of gestational hypertension was subsequently estimated by multiple logistic regression in strata of maternal characteristics with p-values < 0.1 for interaction. All data were analyzed using SPSS for Windows (ver. 20.0; SPSS Inc., Chicago, IL, USA). Results Of the 17,264 women included in cohort 1, 1105 (6.40%) had gestational hypertension, whereas of the 185,093 women in cohort 2, 17,677 (9.55%) had gestational hypertension.
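Before turning to the tables, here is a minimal sketch of the adjusted odds-ratio estimation described in the Statistical Analysis above, written in Python with pandas/statsmodels rather than the SPSS workflow the authors used; the file name and all column names are illustrative assumptions, not the study's codebook.

# Illustrative sketch of the adjusted logistic regression described above.
# "cohort1.csv" and the column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort1.csv")

# Anemia at 6 months vs. gestational hypertension, adjusted for the common
# confounders plus the cohort-1-specific supplementation group.
fit = smf.logit(
    "anemia_6m ~ gest_htn + mat_age + bmi + C(education) + C(occupation)"
    " + C(ethnicity) + C(feeding) + anemia_preg + gest_week_hb + C(supplement)",
    data=df,
).fit()

# Adjusted OR with 95% CI for gestational hypertension.
or_table = np.exp(fit.conf_int().assign(OR=fit.params))
print(or_table.loc["gest_htn"])

# Effect modification: add an interaction term; strata with interaction
# p < 0.1 (occupation, in the paper) are then analysed separately.
inter = smf.logit("anemia_6m ~ gest_htn * C(occupation) + mat_age + bmi",
                  data=df).fit()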
Table 1 shows maternal and child characteristics according to gestational hypertension. In both cohorts, women with gestational hypertension were more likely to be older, to have a non-farmer occupation, and to have a higher BMI, and they had a lower proportion of exclusive breastfeeding. Most women were educated to junior high school level or higher in both cohorts. The percentage of women with anemia during pregnancy in the gestational hypertension group was similar to that in the non-gestational hypertension group in cohort 1, whereas the percentage was higher in the gestational hypertension group in cohort 2. Mean age at follow-up visit was lower for children of mothers with gestational hypertension than for those without in cohort 1, whereas the age of children was similar across the two groups in cohort 2. The total incidence of anemia in cohort 1 was 6.78% at 6 months (n = 1171) and 5.28% at 12 months (n = 911), and 13.18% at 55 months (n = 24,403) in cohort 2. The associations between gestational hypertension and children's anemia at different ages are shown in Table 2. In both cohorts, the incidence of childhood anemia was higher among women with gestational hypertension. All crude analyses revealed that gestational hypertension was associated with an increased risk of childhood anemia. After adjustment for confounders, the associations of gestational hypertension with anemia at 6, 12, and 55 months remained almost unchanged (ORs (95% CI): 1.31 (1.05, 1.63), 1.50 (1.18, 1.90), and 1.06 (1.01, 1.12), respectively). OR, odds ratio; CI, confidence interval. * Common confounders adjusted for in the multiple logistic regression for both cohorts included maternal age, BMI, education, occupation, ethnicity, feeding practices, anemia during pregnancy and week of gestation at hemoglobin measurement. One additional confounder for cohort 1 was micronutrient supplementation; additional confounders for cohort 2 were parity and folic acid intake. We further compared the mean hemoglobin concentration of children according to their mothers' gestational hypertension status in Table 3. Mean (±SD) hemoglobin concentration was 121.71 ± 8.67 g/L and 122.07 ± 8.18 g/L at 6 months and 12 months, respectively, in cohort 1, and 119.52 ± 10.19 g/L at 55 months in cohort 2. Gestational hypertension was associated with a significant reduction of hemoglobin concentrations in both cohorts, consistent with the results for childhood anemia. The adjusted changes in hemoglobin at 6, 12 and 55 months were −1.12 g/L (95% CI: −1.65, −0.59 g/L), −1.48 g/L (95% CI: −1.98, −1.00 g/L) and −0.20 g/L (95% CI: −0.37, −0.03 g/L) among children born to mothers with gestational hypertension relative to those without. Table 3. Crude and adjusted mean difference in hemoglobin (g/L) for the gestational hypertension group compared with the non-gestational hypertension group (table body not reproduced; columns: group, mean age at follow-up, mean ± SD (g/L)). CI, confidence interval. * Common confounders adjusted for in the multiple logistic regression for both cohorts included maternal age, BMI, education, occupation, ethnicity, feeding practices, anemia during pregnancy and week of gestation at hemoglobin measurement. One additional confounder for cohort 1 was micronutrient supplementation; additional confounders for cohort 2 were parity and folic acid intake. In the analysis of effect modification, maternal occupation was a significant interaction term.
When the analysis was stratified by occupation, we observed that gestational hypertension might increase the risk of anemia among children of mothers with a farming occupation, but not among those of non-farmers, in both cohorts (Table 4). OR, odds ratio; CI, confidence interval. * Common confounders adjusted for in the multiple logistic regression for both cohorts included maternal age, BMI, education, ethnicity, feeding practices, anemia during pregnancy and week of gestation at hemoglobin measurement. One additional confounder for cohort 1 was micronutrient supplementation; additional confounders for cohort 2 were parity and folic acid intake. Discussion In these two large longitudinal Chinese birth cohorts, we found that gestational hypertension was positively associated with childhood anemia at 6, 12, and 55 months. Children's hemoglobin in the gestational hypertension group was significantly lower than in the normal group across the different ages. These findings improve our understanding of the effect of hypertension management during pregnancy on the prevention of childhood anemia. Several studies have investigated the association between hypertensive disorders of pregnancy and the occurrence of infant anemia [11][12][13]20,21]. Some studies demonstrated that infants born to mothers with maternal hypertension had lower ferritin levels and iron stores, reflecting the offspring's potential risk of anemia [11][12][13]. One Korean study [12] compared the iron status of newborn infants and found that serum ferritin of appropriate-for-gestational-age infants in the hypertensive disorders of pregnancy group was significantly lower than in the normal group (median (interquartile range, IQR): 108.5 (46.8-184.8) ng/mL versus 143.0 (88.0-235.0) ng/mL). Similar results were found in the comparison of the infants' total body iron stores. However, this study used retrospective data and enrolled only specific neonates, which may lead to selection and recall bias. Another cohort study [13] also found that ferritin values in children aged 0.5 to 1 year were inversely associated with adverse maternal factors after adjustment for some covariates (β: −0.330, p-value: 0.01). However, neither iron deficiency (ferritin < 12 µg/L) nor iron deficiency anemia (iron deficiency and hemoglobin < 110 g/L) differed according to the presence or absence of hypertension. Null associations were also reported in other studies [20,21]. Yet, these studies were all conducted in developed countries where children had better nutrition status. Meanwhile, these studies had small sample sizes (number of anemia cases < 50), which might not provide the capacity to detect the effects. Our study combined two large birth cohorts to compare the effect of gestational hypertension on childhood anemia at different ages. In cohort 1, we measured the children's hemoglobin twice, at 6 months and 12 months. We found that the effect of gestational hypertension existed consistently in children aged 6 months, 12 months and even 55 months. Uijterschout et al. [13] used data from a single examination across different ages and indicated that the difference in children's ferritin levels between the hypertension and normal groups was no longer observed after the age of 12 months. The conflicting results may be due to the heterogeneity of populations and study designs. The effect of gestational hypertension significantly decreased at 55 months (adjusted OR = 1.06 (95% CI: 1.01, 1.12)).
One possible explanation is that older children absorb nutrition through more pathways, such as formula and cow's milk. A richer dietary pattern might attenuate the adverse effect of gestational hypertension [22]. Interestingly, we noted that the risk ratio associated with hypertension was not lower at 12 months than at 6 months. Although the number of children with anemia decreased, the number of anemic children born to mothers without gestational hypertension decreased more. This suggests that gestational hypertension might have a greater effect at early ages despite nutritional supplementation. More attention should be paid to the health condition of these children, and further studies are needed to provide more evidence. Few studies have investigated the interaction effects between gestational hypertension and maternal characteristics. Previous studies revealed that the mother's occupation may have an effect on childhood anemia [23,24]. A study conducted in Ghana [24] showed that children whose mothers were farmers were less likely to have anemia (adjusted OR = 0.17 (95% CI: 0.05-0.60)). However, the possible joint effect of occupation has been less discussed. Our findings indicate that the mother's occupation modified the association between gestational hypertension and childhood anemia: the adverse associations were stronger in the farmer group across different ages. The exact mechanisms of the association between gestational hypertension and childhood anemia remain unknown. However, the existing evidence suggests that placental dysfunction might be involved. Several recent studies showed that higher blood pressure during pregnancy was associated with modifications of placental DNA methylation [25,26] and hence might affect placental functions. The placenta plays an important role in storing and transporting iron and nutrients [27,28]. Maternal gestational hypertension might be related to impaired placental function and decreased placental perfusion [29,30]. These conditions restrict the amount of iron available to the fetus and may result in infant anemia. This study had several strengths. First, we used two large birth cohorts to investigate the association. The populations in these cohorts represented different regions of China, cohort 1 for northern China and cohort 2 for southern China. They had different living conditions and various maternal characteristics; nevertheless, we reached the same conclusions, which makes our results more reliable. Meanwhile, we used uniform procedures to collect and manage data. Blood pressure and hemoglobin were all measured by trained health workers and diagnosed by specialists, which minimized the risk of misclassification bias. The use of a surveillance system guaranteed that over 90% of the mother-child pairs could be followed, which subsequently enabled us to detect the effects with a large sample size. Furthermore, our study conducted repeated measurements of hemoglobin at different ages, which provided the chance to explore the long-term effect of gestational hypertension. Some limitations should be considered when the results are interpreted. Limited by local differences in health resources, we used different devices to measure children's hemoglobin values in the two cohorts. As the difference between the two measurement methods was not evaluated, this may have led to misclassification of anemia outcomes. Some potential confounding information, such as maternal smoking and alcohol drinking, was not collected.
However, smoking and alcohol use were both rare among women in rural China, especially among pregnant women at the time of our study. We did not measure iron status indicators such as serum ferritin and transferrin in this study; therefore, we could not draw conclusions about the contribution of different types of anemia to childhood anemia. Further studies are needed to explore the possible mechanisms in which iron plays a part. The gestational age at which anemia during pregnancy and gestational hypertension were diagnosed was unavailable in this study. The gestational age at diagnosis might affect subsequent medical intervention and hence influence the association. Additionally, the participants in our study were Han (China's predominant ethnic group), so our results may not be generalizable to other populations. Conclusions In conclusion, we used two large birth cohort studies in China to investigate the association of gestational hypertension with childhood anemia. After adjusting for potential confounders, we found that gestational hypertension could increase the risk of anemia at 6, 12 and 55 months. Considering the possible long-term effect of gestational hypertension, more attention should be paid to the management of blood pressure during pregnancy to prevent anemia in children under 5 years of age. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper. Data Availability Statement: The data are available in the main text, or can be obtained by contacting the corresponding author (Nan Li).
Determination of key residues in MRGPRX2 to enhance pseudo-allergic reactions induced by fluoroquinolones MAS-related G protein-coupled receptor X2 (MRGPRX2), expressed in human mast cells, is associated with drug-induced pseudo-allergic reactions. Dogs are highly sensitive to the anaphylactoid reactions induced by certain drugs, including fluoroquinolones. Recently, dog MRGPRX2 was identified as a functional ortholog of human MRGPRX2, with dog MRGPRX2 being particularly sensitive to fluoroquinolones. The aim of this study was to determine the key residues responsible for the enhanced activity of fluoroquinolone-induced histamine release associated with MRGPRX2. Firstly, a structure model of human and dog MRGPRX2 was built by homology modeling, and docking simulations with fluoroquinolones were conducted. This model indicated that E164 and D184, conserved between human and dog, are essential for binding to fluoroquinolones. In contrast, F78 (dog: Y) and M109 (dog: W) are unconserved residues, to which the species difference in fluoroquinolone sensitivity is attributable. An intracellular calcium mobilisation assay with human MRGPRX2 mutants, in which the residues at positions 78 and 109 were substituted with those of dog MRGPRX2, revealed that M109 and F78 of human MRGPRX2 are crucial residues for enhancing fluoroquinolone-induced histamine release. In conclusion, these key residues have important clinical implications for revealing the mechanisms and predicting the risks of fluoroquinolone-mediated pseudo-allergic reactions in humans. Approaches are needed to predict the risks or investigate the mechanisms of drug-induced anaphylactoid reactions. Recently, we identified dog MRGPRX2 as a functional ortholog of human MRGPRX2, with dog MRGPRX2 being highly sensitive to fluoroquinolones 20. However, the mechanism underlying its enhanced activity in fluoroquinolone-induced histamine release has not been clarified. The present study was designed to identify the key residues associated with fluoroquinolone-induced histamine release that underlie the above-mentioned species difference in fluoroquinolone sensitivity. Firstly, a docking simulation with fluoroquinolones was conducted using a structure model of human and dog MRGPRX2 established by homology modeling. Comparison of the amino acids around the predicted ligand-binding pocket between human and dog indicated that two residues, F78 and M109, might play an important role in fluoroquinolone sensitivity. Then, HEK293 cells transfected with human MRGPRX2 mutants, in which the amino acids at positions 78 and 109 were replaced by those in dog MRGPRX2 (F78Y, M109W, and F78Y/M109W), were treated with fluoroquinolones [ciprofloxacin (CPFX), gatifloxacin (GFLX), levofloxacin (LVFX), and pazufloxacin (PZFX)] and subjected to an intracellular calcium mobilisation assay to assess Gq-coupled receptor activity. Determination of the key residues associated with fluoroquinolone-induced histamine release would be clinically valuable for elucidating the mechanisms and predicting the risks of drug-induced pseudo-allergic reactions to fluoroquinolones in humans. Results Homology modeling of human MRGPRX2 and docking of fluoroquinolones. To perform docking simulations with fluoroquinolones, a homology model of human MRGPRX2 was built because no crystal structure information was available. As one of the closest homologs in the Protein Data Bank (https://www.rcsb.org), human KOR (PDB accession id: 6B73, agonist-bound form) was used as a template for the modeling (Fig. 1a).
Next, a docking simulation with fluoroquinolones incorporating the induced-fit effect was performed. In this simulation, fluoroquinolones binding to human MRGPRX2 were docked at the same site as the agonist in PDB 6B73. The top-ranked pose is shown in Fig. 1b. In the docking model with CPFX, a fluoroquinolone that induces histamine release by mast cells 14,21, characteristic salt bridges were found between a basic substituent at the 7 position of CPFX and the acidic side chains of E164 and D184 in human MRGPRX2. On the other hand, in the model with PZFX, a fluoroquinolone that does not induce histamine release by mast cells 22, the side chain at the 7 position was located farther from E164 and D184 than in CPFX because of the difference in the position of the terminal nitrogen between these two fluoroquinolones. Thus, E164 and D184 in human MRGPRX2 were considered essential residues involved in the molecular interactions associated with the activation of MRGPRX2 by fluoroquinolones. Amino acid sequence alignment and comparison of human and dog MRGPRX2. The result of aligning human and dog MRGPRX2 is shown in Fig. 2a. Typical motifs of class A GPCRs, such as the DRY motif and the cysteine in transmembrane (TM)-3, were conserved in KOR but not in human or dog MRGPRX2. E164 (dog: 270) and D184 (dog: 290), which were thought to be involved in key molecular interactions with fluoroquinolones as mentioned above, were shared between human and dog MRGPRX2. Next, a dog MRGPRX2 structure model was built by swapping amino acids of the human MRGPRX2 model constructed as described above. To identify key residues associated with the species difference, residues around the predicted binding pocket of fluoroquinolones were compared between human and dog MRGPRX2. Among the 21 residues within 5 Å of CPFX docked in human and dog MRGPRX2, only three amino acids (F78, M109, and A189 in human MRGPRX2) were unconserved between human and dog (Fig. 2b); the remaining residues (86%) were conserved. Among these three residues, F78 (dog: Y185) and M109 (dog: W215) in human MRGPRX2 were predicted to play a role in the interaction with ligands because the side chains of these two residues are oriented towards CPFX and are likely to affect its binding. Therefore, we selected the residues F78 and M109, located in TM2 and TM3, and constructed human MRGPRX2 mutants in which "dog-type" mutations were introduced (Fig. 2c). Functional assay of human MRGPRX2 mutants. To determine whether the dog-type human MRGPRX2 mutants were associated with increased responses, intracellular calcium mobilisation against fluoroquinolones was evaluated. For this evaluation, three types of expression vector, F78Y and M109W (single mutations) and F78Y/M109W (double mutation), were constructed. HEK293 cells were transiently transfected with these human MRGPRX2 mutants and evaluated for reactivity to compound 48/80, a typical histamine-releasing agent, and fluoroquinolones (CPFX, GFLX, LVFX, and PZFX). In the mutant F78Y, slightly increased reactivity was observed against treatment with CPFX and LVFX compared with the case for wild-type (WT) human MRGPRX2 (Fig. 3a). On the other hand, in the mutants M109W and F78Y/M109W, significantly increased responses to compound 48/80, CPFX, GFLX, and LVFX were observed from lower concentrations than for WT (Fig. 3a,b). In particular, the reactivity of the double mutant F78Y/M109W against CPFX was equal to or greater than that of dog MRGPRX2.
The EC50 of CPFX in F78Y/M109W was ca. one-eighth that of the WT (Table 1). The order of the EC50 values was compound 48/80 < CPFX < LVFX < GFLX, which was the same for all of the mutants. PZFX, which does not induce histamine release by mast cells 22, did not induce intracellular calcium mobilisation in any of the cells. Discussion In the present study, we identified M109 and F78 in human MRGPRX2 as key residues associated with fluoroquinolone-induced histamine release that underlie the difference in fluoroquinolone sensitivity between human and dog. Introducing dog-type mutations into human MRGPRX2 markedly enhanced the responses to fluoroquinolones. Our results provide important insights into the mechanism behind fluoroquinolone-induced histamine release in dogs, which are highly sensitive to these drugs. Moreover, the results suggest mechanisms behind MRGPRX2-related hypersensitivity in humans. Reddy et al. and Lansu et al. identified E164 and D184 of human MRGPRX2 as essential residues for the activation of MRGPRX2 by substance P and opioids 5,6. These two amino acids are negatively charged residues that were found to interact with cationic opioid ligands 6. Additionally, it was demonstrated that replacing E164 with a positively charged residue resulted in the loss of responses to substance P 6. In contrast, the response to LL-37 or dynorphin was not lost even in the same mutant, suggesting that different ligands might interact with different amino acids around the predicted ligand-binding pocket 6. In this study, our docking model suggested that E164 and D184 are essential for MRGPRX2 activation by fluoroquinolones because of the characteristic interactions between CPFX and E164/D184. We previously reported that a basic substituent at the 7 position of the fluoroquinolone ring may be associated with histamine release 14. Consistent with that report, the terminal basic nitrogen of the piperazine at the 7 position of the fluoroquinolone ring, which is present in the three fluoroquinolones used in this study (CPFX, GFLX, and LVFX), was predicted to form interactions such as a salt bridge or a hydrogen bond with the acidic side chains of E164 and D184. On the other hand, in PZFX, which does not induce histamine release by mast cells 22, no salt bridge or hydrogen bond was formed because of the greater distance between the nitrogen and E164/D184. The difference between CPFX and PZFX in our MRGPRX2 docking model was consistent with their distinct histamine release potential in vivo/in vitro 17,22. However, these residues were not considered a key factor contributing to the enhanced activity towards fluoroquinolones because they are conserved between human and dog. Human MRGPRX2 is considered "an atypical opioid receptor" 5; many of the motifs conserved in class A GPCRs are not found in human and dog MRGPRX2. While the homology of the amino acid sequences between human and dog MRGPRX2 is 62% 20, the residues around the predicted ligand-binding site were found to be well conserved; only three amino acids differed. In the functional assay with "dog-type" human MRGPRX2 mutants, M109W and F78Y/M109W showed markedly increased responses compared with the WT. In particular, the double mutant F78Y/M109W appeared to demonstrate a greater response than dog MRGPRX2 against CPFX.
These results clearly suggest that M109 (dog: W215) and F78 (dog: Y185) of human MRGPRX2 are key residues contributing to the enhanced responses to fluoroquinolones. Introducing a mutation into a GPCR often affects its basal activity or ligand sensitivity 23,24. In this study, the human MRGPRX2 mutants did not exhibit changes in basal activity compared with WT in the calcium mobilisation assay, suggesting that these mutations did not affect the constitutive activity of MRGPRX2. In contrast, the mutants M109W and F78Y/M109W demonstrated a left shift of the dose-response curves and decreased EC50 values. In contrast, the EC50 of LVFX was similar to or lower than that of GFLX in dog MRGPRX2 WT, suggesting that the ligand selectivity for these fluoroquinolones differs between human and dog MRGPRX2. Because the order did not change even in the mutants constructed in this study, the difference in ligand selectivity may arise from regions other than F78 and M109. Because the residues around the ligand-binding pocket of MRGPRX2 are highly conserved between human and dog, a comparison between human and dog MRGPRX2 including the sites responsible for G-protein coupling, such as the intracellular loops, may be needed to clarify the mechanism behind the different ligand selectivity between human and dog. Non-synonymous single-nucleotide polymorphisms (SNPs) can affect the responses to ligands and are associated with hypersensitivity to drugs in some patients 30,31. Although a number of naturally occurring missense variants of human MRGPRX2 have been reported, most of the variants showed similar or decreased responses compared with WT 32. On the other hand, Chompunud Na Ayudhya et al. have reported that human MRGPRX2 mutants at the carboxyl terminus, which is associated with receptor phosphorylation and desensitisation, showed enhanced responses to substance P 33. In the present study, we constructed human MRGPRX2 mutants in the TM domain, which is associated with ligand binding, and these mutants demonstrated enhanced responses to fluoroquinolones compared with WT. Our results indicate that missense mutations in F78 or M109 of human MRGPRX2 could induce enhanced responses to certain fluoroquinolones, including CPFX, in humans. With regard to human MRGPRX2 variants, F78L and V108A are found in gnomAD as naturally occurring missense mutations located at or near positions F78 and M109, respectively. The mutant F78L was reported not to alter the activity against hemokinin-1, substance P, icatibant, and human β-defensin-3 7, generally consistent with the results of this study. However, it should be considered that a mutation at F78 may induce a greatly enhanced response when it is accompanied by a mutation at M109. No reports have been published with regard to the function of V108A. Further analysis of SNPs located around F78 or M109 would provide important information for investigating MRGPRX2-related hypersensitivity. In summary, we focused on dog MRGPRX2, which is more sensitive to fluoroquinolones than human MRGPRX2, and identified key residues associated with fluoroquinolone-induced histamine release that explain this species difference. Our results have important clinical implications for revealing the mechanism behind fluoroquinolone-mediated pseudo-allergic reactions and assessing their risk in humans. Thermo Fisher Scientific Inc.) supplemented with 20 mM hydroxyethylpiperazine-N′-2 ethanesulfonic acid (HEPES; Sigma-Aldrich Co. LLC) and 0.05 vol% bovine serum albumin (BSA; Sigma-Aldrich Co. LLC).
The highest concentration of the fluoroquinolones was set at 1000 µg/mL based on previous reports, at which the test substances induced marked intracellular calcium mobilisation in MRGPRX2-expressing HEK293 cells 10,20 or caused histamine release in rat or human mast cells 14,17,21. Intracellular calcium levels were analysed using Calcium Kit II-iCellux (Dojindo Molecular Technologies, Inc., Kumamoto, Japan), in accordance with the manufacturer's instructions. HEK293 cells (1.75 × 10⁴ cells/well) were loaded with 1.25 mM probenecid and calcium probe for 45 min at 25 °C. Changes in fluorescence intensity before and after addition of the test articles were measured over time using FLIPR Tetra (Molecular Devices, LLC, Sunnyvale, CA) with excitation at 470-495 nm and emission at 515-575 nm. The test articles were added 10 s after beginning the measurements. The data were analysed using ScreenWorks (Molecular Devices, LLC, Version 3.2.0.14) to determine the difference between maximal and minimal fluorescence intensity (max-min). As CPFX at 333 and 1000 µg/mL induced nonspecific increases in intracellular calcium levels in untransfected cells, these data were excluded from the analysis. All experiments were performed in quadruplicate. Statistical analysis. Data are presented as the mean ± S.D. for calcium mobilisation. The half-maximal effective concentration (EC50) of each test article used in the calcium mobilisation assay was calculated from the individual Emax and E0 for each variant using the four-parameter sigmoidal model. Statistical significance (P < 0.05) was determined by unpaired t-test. These analyses were performed using GraphPad Prism 7.03 (GraphPad Software, La Jolla, CA). Data availability All data generated or analysed during this study are included in this published article.
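For readers who want to reproduce the EC50 step outside GraphPad Prism, here is a minimal sketch of a four-parameter sigmoidal fit in Python with SciPy; the concentration-response numbers are invented for illustration and are not data from the study.

# Hedged sketch of the four-parameter sigmoidal (4PL) EC50 fit described above.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, e0, emax, ec50, hill):
    """Response rising from E0 to Emax with midpoint EC50 and slope `hill`."""
    return e0 + (emax - e0) / (1.0 + (ec50 / x) ** hill)

conc = np.array([1.0, 3.0, 10.0, 33.0, 100.0, 333.0, 1000.0])  # µg/mL, hypothetical
resp = np.array([4.0, 9.0, 25.0, 70.0, 160.0, 235.0, 255.0])   # max-min units, hypothetical

p0 = [resp.min(), resp.max(), 50.0, 1.0]  # starting guesses for E0, Emax, EC50, Hill
popt, _ = curve_fit(four_pl, conc, resp, p0=p0, maxfev=10000)
print(f"EC50 = {popt[2]:.1f} µg/mL, Hill = {popt[3]:.2f}")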
Primbon: Representation of Kraton Yogyakarta Primbon, as Javanese local knowledge, has been a guide for Javanese everyday and ritual life, including buildings, for decades. This paper intends to investigate the use of primbon in the Kraton Yogyakarta (Palace of Yogyakarta) as the representation of the sultan (king). The investigation was conducted through interpretive criticism to reveal the degree of conformity with the rules and principles in primbon, as an attempt to form a new perspective in understanding the primbon. The analysis focuses on verses 172 and 194-196 of Primbon Betaljemur Adammakna, which deal with the arrangement of buildings. By transliterating the verses into English and interpreting the application of the verses in Kraton Yogyakarta, the study demonstrates the manifestation of the primbon verses in the kraton's building arrangement. The study of primbon reveals the role of the kraton as the representation of the earth in the universe, while the representation displays the hierarchical arrangement of building facilities, from sacred to non-sacred or from private to public. Introduction Primbon acts as the direction for organising life for Javanese people and as a source of reference for finding solutions to most everyday problems. This paper investigates the representation of culture through spatial formation. Primbon contains principles related to building practice, which can be either followed or disregarded. It is believed that when the principles are followed, becik (rewards) will emerge, and when they are ignored, ala (consequences) will arise (Prijotomo, 2001). The term primbon was derived from the word rimbu (Hidayat, 2001), which means to keep. This meaning suggests that primbon is a container that keeps various forms of knowledge regarding events, both daily and ritual, in Javanese people's lives. These events include events associated with positive meanings (e.g., birth, life, and marriage) or events associated with negative meanings (e.g., death, accident, illness, and divorce) that can trigger conflicts because of the diversity among the Javanese people. Such diversity results in various forms of behaviour in responding to an event, so there are various kinds of knowledge regarding each event. These kinds of knowledge are kept in primbon for future reference if a conflict needs to be solved. The knowledge becomes guidance on behaving or acting according to the situation so as to avoid conflict. Primbon does not force people always to follow the guidance or rules. However, there are consequences when the principles in primbon are not obeyed. Primbon is thus a synopsis of knowledge about Javanese people's behaviours and acts in life, passed down as an ancestral legacy through a process of trial and error. There are several types of primbon written by royal members of the kraton (palace), for example those written by Kanjeng Pangeran Harya Tjakraningrat of Kraton Yogyakarta or Raden Tanojo of Solo (Kraton Surakarta). Some of the primbon are listed in Table 1. This paper focuses on investigating Primbon Betaljemur Adammakna, which is the legacy of Kanjeng Pangeran Harya Tjakraningrat (1983c), a noble of Kraton Yogyakarta.
There is another primbon, Primbon Pandhita Sabda Nata (Tanojo, 1976), which also discusses the principles of buildings; however, it concerns buildings within the territory of Kraton Surakarta and therefore will not be investigated in this paper, as we focus only on Kraton Yogyakarta. In understanding the formation of Kraton Yogyakarta, we consider only the primbon verses specifically related to buildings, and among these only the verses relevant to understanding the formation of representation in Kraton Yogyakarta. A study of primbon that addresses the formation of building masses in Javanese houses indicates that primbon was intended for the upper or aristocratic class of society, not for common people (Arfianti, 2005). In particular, in this study, we discuss the verses from Primbon Betaljemur Adammakna (Tjakraningrat, 1983c) that specifically relate to buildings. In this primbon, there are verses regarding the choice of land (verse 151), the time to reside (verse 155), the time to plant (verse 162), the time to harvest (verse 165), orientation (verse 172), and the placement of gates (verses 194 to 196). Only the verses related to orientation and the placement of gates will be discussed here. Investigating Primbon as A Form of Representation This paper intends to explore the architectural representation of primbon in kraton architecture. Representation is a way of communicating visually in architecture. The communication process involves two aspects: the one representing and the one being represented. Building becomes one of the media of representation in the architecture domain. In studying the primbon as architectural representation, the one representing is Kraton Yogyakarta, and the one being represented is the rules in primbon. This study takes on the idea of representation as an image, based on the understanding that architecture is a form of culture (Mangunwijaya, 2013). This paper attempts to view the extent to which the rules in primbon are enforced in the representation of Kraton Yogyakarta, as manifested in the spatial formation of its buildings. The degree of conformity with the rules is assessed qualitatively, to determine the extent to which the primbon is followed or disregarded in the kraton architecture. The investigation of primbon in this study was conducted through interpretive criticism. Criticism is the comprehension of a work of architecture through description, direction, and valuation (Attoe, 1978). The authors chose interpretive criticism as the approach in this study to form an alternative perspective, based on the authors' understanding, in viewing an object. This approach was adopted to display the authors' interpretation and comprehension of the findings discovered from the dialogue between the subject and the object. The analysis in this study attempts to describe the facts about the principles in primbon (which relate to the display of representation) as precisely as possible and to describe the facts concerning spatial formation in Kraton Yogyakarta. Due to personal influence and experience, this criticism is not entirely neutral; however, with an orderly methodological framework, it is expected that the investigation process will yield objective and logical findings. Mangunwijaya, in his book Wastu Citra (2013), suggests that the architectural process comprises two aspects, namely the function domain and the imagery domain.
Imagery indicates the representation of a building or the impression the building offers to the viewing person. According to Mangunwijaya (2013), imagery and function reflect culture and aptitude, respectively. Therefore, as the manifestation of culture, buildings manifest the essence of humans. Buildings reflect the soul's splendour, the heart's beauty, or the simplicity of human thinking. The aspect of imagery formation in architecture is not mythical or merely based on belief or religious aspects. The study will focus on three main aspects of imagery formation. The first is culture, which can vary depending on the society where that culture is practised and spread. Hence buildings in different places can have similarities due to similarities of culture. Similarities can also occur when the culture is not originally from that place but the result of cultural deployment brought from the country of origin. Additionally, buildings in different locations may have similar functions but different appearances due to cultural differences. The second aspect of imagery formation is the orientation or central point. The central point in this regard does not refer to the contemporary understanding of the centre of gravity of geometrical forms; it refers to the axis of cultural belief embraced by the society. The axis here could be considered as two points serving as the beginning and the end, rather than as a single point in geometry. The third aspect of imagery formation is hierarchy. Just as the life process has to go step by step, each step with its own value, buildings as a reflection of life should hold values and steps following a system of hierarchy. These three aspects of imagery formation will be discussed in our investigation of the primbon, especially the Primbon Betaljemur Adammakna (Tjakraningrat, 1983c), concerning the spatial formation of Kraton Yogyakarta. Understanding the Architectural Representation in Primbon Betaljemur Adammakna Architecture as a form of culture within the Javanese context is very much related to primbon. Every aspect of living for Javanese people is already determined in primbon. Likewise, for buildings, there are principles or rules written in primbon. The following analysis discusses several verses in Primbon Betaljemur Adammakna (Tjakraningrat, 1983c) that demonstrate the representation of Kraton Yogyakarta as part of the culture. Orientation Verse 172 of the primbon sets the principles for defining the house orientation. The following is the content of the verse (Tjakraningrat, 1983c, p. 161): No. 172. The direction of the house. According to the sum of the numbers of the day and the birth (pasaran) day of the owner, if the result is:
7 - the good direction must face north or east
8 - the good direction must face north or east
9 - the good direction must face south or east
10 - the good direction must face south or west
11 - the good direction must face west
12 - the good direction must face north or west
13 - the good direction must face north or east
14 - the good direction must face south or east
15 - the good direction must face west
16 - the good direction must face west
17 - the good direction must face north or west
18 - the good direction must face north or east
Notes: If the owner was born on Saturday Pahing, the number of Saturday is 9, the number of Pahing is 9, and the sum is 18; the owner would best stay in a house facing north or east.
The rules on orientation in these verses are applied in relation to the numbers associated with time. In Javanese culture, there exists a belief that a particular day carries a particular weight, which differs for each day. The Javanese people have their own understanding of wektu (time), as shown in Table 2. Although there are many terms occasionally heard and discussed in society, only wektu pitu (time of seven) and wektu limo (time of five) are used daily. Wektu pitu indicates the common days of the week, while wektu limo, which Javanese people commonly refer to as pasaran, indicates birth days. Each of the days in wektu pitu and wektu limo is associated with a particular neptu (number), as shown in Table 3. (Table 2: Wektu (time) in Javanese culture; wektu pitu (7) covers the days we know as common days. Table 3: Neptu values of wektu pitu and wektu limo; table bodies not reproduced.) Similar to days, sasi/wulan (months) are also associated with a Javanese neptu (number), as shown in Table 4. The Javanese calendar was adopted from the Islamic calendar after Islam arrived on the island of Java. The sasi in the Islamic calendar has the same number of months, and a similar number of days in a month, as the common calendar adopted from the Gregorian calendar. Another important concept in Javanese societal belief is that of sedulur papat limo pancer, which carries a deep meaning. The idea of sedulur papat limo pancer concerns the presence of siblings accompanying the unborn baby while still in the womb. The terms kakang kawah (the older brother of the crater) and adhi ari-ari (the younger sibling of the placenta) are the two terms most commonly known by the Javanese people, referring to two of the five siblings. The sedulur papat names are Watman, Wahman, Rahman, and Ariman, and the term limo pancer refers to the recently born baby itself. Watman means wat, the condition of a mother undergoing the first feeling of giving birth during delivery. Wahman means the crater, the birth course, or the opening of the delivery path. Rahman means the blood that comes out during delivery. Ariman means the placenta that comes out after the delivery. These names are usually called out if the unborn baby needs help from its sedulur (siblings). After Islam arrived in Java, this concept persisted; however, the names were changed into the names of the angels: Jibril, Mikail, Isroil, and Israfil. The concept of sedulur papat limo pancer in Javanese society is applied in relation to the days of pasaran, namely Legi, Pahing, Pon, Wage, and Kliwon, the days in wektu limo or time of five, which are associated with orientation. According to the ancient beliefs, the east side is considered the oldest side; this is why Legi stands in the east position. Meanwhile, Kliwon indicates the middle position, the highest position, which represents the unborn baby's position in the centre or the core. The idea of the centre point in sedulur papat limo pancer reflects the Javanese belief that individuals can meet these relatives and communicate with them. The relatives' appearance is like the unborn baby, and they would guard it until its titi-wanci (the due date). The verses of the Primbon Betaljemur Adammakna (Tjakraningrat, 1983c), particularly verses 172, 176, 177, 178, 179, 180, and 181, demonstrate how primbon follows the counting of neptu, whether the neptu of the syllables in a name (of either the house owner or the land location), the neptu of the day, or the neptu of the pasaran day.
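To make the counting rule of verse 172 concrete, the sketch below implements it in Python. The neptu values for the days and pasaran are the commonly cited Javanese ones (the bodies of the paper's Tables 2-4 did not survive extraction), so treat them as assumptions; the orientation table is taken directly from verse 172 above, and the values are consistent with the verse's own worked example.

# Hedged sketch of the verse-172 orientation rule. Day and pasaran neptu
# values are the commonly cited ones, assumed here in place of the paper's
# Table 3; they match the verse's example (Saturday 9 + Pahing 9 = 18).
NEPTU_DAY = {"Sunday": 5, "Monday": 4, "Tuesday": 3, "Wednesday": 7,
             "Thursday": 8, "Friday": 6, "Saturday": 9}
NEPTU_PASARAN = {"Legi": 5, "Pahing": 9, "Pon": 7, "Wage": 4, "Kliwon": 8}

# Good house orientations per neptu sum, as listed in verse 172.
ORIENTATION = {7: "north or east", 8: "north or east", 9: "south or east",
               10: "south or west", 11: "west", 12: "north or west",
               13: "north or east", 14: "south or east", 15: "west",
               16: "west", 17: "north or west", 18: "north or east"}

def good_direction(day: str, pasaran: str) -> str:
    return ORIENTATION[NEPTU_DAY[day] + NEPTU_PASARAN[pasaran]]

# The verse's worked example: an owner born on Saturday Pahing.
print(good_direction("Saturday", "Pahing"))  # "north or east"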
However, the discussion regarding the search for neptu in the kraton setting, and whether the kraton's neptu falls into the category of becik (good) or ala (bad), is beyond the scope of this study. We only attempted to comprehend how neptu affected the formation of the kraton. Further analysis was conducted to see if the setting of a kraton follows the philosophy of sedulur papat limo pancer. Assuming that the philosophy is followed, the most protected facility should be placed in the middle, surrounded by other facilities which protect it. As shown in Figure 1, the dulur papat, namely Legi, Pahing, Pon, and Wage, surround Kliwon as the limo pancer, the one that should be protected. In the setting of a kraton, prabayeksa, the main living quarter of the sultan (king), is the most sacred place in the palace and the most protected. This explains the location of prabayeksa in the centre of the kraton's arrangement, surrounded by other facilities. This placement also becomes the foundation for understanding the formation of representation under the sultan's rule of the kraton, resulting in the formation that exists until now. Prabayeksa functions as the sultan's home and his family's residence. The layout of prabayeksa is comparable to the layout of Javanese houses, which reflects the manner of Primbon Pandhita Sabda Nata. The layout consists of sentong tengen (right-side space), sentong tengah (middle space), and sentong kiwa (left-side space), which are arranged in a line from west to east (Purwani, 2001). The appearance of prabayeksa is simpler compared with other facilities. While the main essence of the kraton is the representation of the sultan's power, the appearance of prabayeksa offers a more modest display. Purwani (2001) describes that such a display includes less ornamentation in the hall, unembellished main columns, and plain brown outer walls, whilst the doors and windows are adorned with carvings of sulur motifs and gold shrubs over a red background. The analysis of prabayeksa in relation to the other facilities within the kraton building complex, such as the exhibition ward and the sitihinggil ward, reveals several key characteristics of the prabayeksa ward. The setting of the kraton suggests that the facility with the highest value is always placed in the middle of the area. In this regard, prabayeksa is placed in the centre of the area and protected by other facilities, thus becoming the core of the kraton. However, the imagery of prabayeksa is more modest compared with other facilities, apart from some minor ornamentation. The layout of the buildings within the kraton complex also indicates that the closer the buildings are to the core of the kraton, the more modest they are. The setting and appearance of buildings in the kraton suggest the representation of the sultan's rule, starting from the inside (the core) with the path moving outward. The representation of spatial formation under the philosophy of sedulur papat limo pancer indicates that the private function must be located in the middle of the area (not at the side, the back, or the front), thus protected by other areas. The public functions are located in the outermost circle, followed by the second circle consisting of semi-public or semi-private functions, and finally the innermost circle consisting of the most private function. In general, the building imagery represents the verses of primbon on a macro scale.
The verses should not be considered based on the visual impression of human scale or building appearance; rather, they should be understood at the imaginative scale, where the kraton represents the whole universe. The gate(s) The verses of Primbon Betaljemur Adammakna also contain principles regarding the establishment of the gate, in particular as written in verses 194 to 196. No. 195. Making the gate of the yard: in making the gates, the calculation and the preparation are the same as in the previous verse, but the length of the site where the gate would be erected is divided into 9 (the accompanying diagram is not reproduced here). No. 196. Making the gate of the yard (Gawe lawang pakarangan): making this yard gate differs in procedure from what is described in No. 194 and No. 195. The rule in verse 196 points out that the middle placement on the bumi (earth) is considered becik (good). The understanding of earth here suggests the meaning of the world and its content. The placement of the regol (gates) thus points to the earth, symbolising Kraton Yogyakarta as the representation of the harmony and prosperity of nature and its content as divine creation. Therefore, the placement of the regol becomes the symbol of the earth, or nature and its content, as well as of the duty of the sultan (king) to oversee its harmony and prosperity. In the reading of the primbon verses, there are conflicting rules, such as between the rules in verses 194 and 195; however, such conflicts are not discussed in this paper. The analysis of the rules of primbon and their implementation in the kraton setting suggests the presence of knowledge that demonstrates Kraton Yogyakarta as the imagery of the universe, as represented through its spatial formation. Conclusion This study investigated the verses in primbon as an attempt to describe the representation of Kraton Yogyakarta as the residence of Yogyakarta's sultan. The analysis reveals that the layout of the kraton conforms to the rules in Primbon Betaljemur Adammakna, especially the rules regarding orientation and the making of the gates. The kraton displays the representation through the hierarchical arrangement of its building facilities. The innermost circle of the kraton consists of the private area (difficult to access), while the outermost circle consists of the public area (easy to access). There are circles in between, serving as semi-public or semi-private areas that support the existence of buildings in the innermost and outermost circles. The placement of the regol (gates) also indicates the existence of a tiered or hierarchical space formation. Every circle includes a gate as the access to the buildings in that circle's area. Additionally, the gate placement in Kraton Yogyakarta indicates compliance with the rules in the primbon. The representation is displayed in Kraton Yogyakarta not at the scale of the building (or at the human scale of viewing buildings), but rather as a representation of imagery, in the form of images of Kraton Yogyakarta as the universe. Acknowledgement This article is part of the doctoral dissertation on power representation in Kraton Yogyakarta conducted by the first author at Institut Teknologi Sepuluh Nopember.
Future of Nanoscience in Technology for Prosperity: A Policy Paper In the process of miniaturization, nanotechnology has unleashed enormous prospects for the development of new products and applications for a wide range of industrial and consumer sectors. Currently the most commonly investigated nanomaterials are variants of nanorobots, nanocrystals, dendrimers, nanopore sensors, quantum dots, and carbon-based materials (e.g., fullerenes, nanotubes). While the source elements are often the same as the ions already used in commercial products, nanomaterials are highly reactive and often differ in many physical and chemical characteristics from their ionic counterparts. These different characteristics make them suitable for the improvement or replacement of commercial products and applications. The current and projected applications of engineered nanomaterials span a wide range of sectors. These include cosmetics and personal care products; pesticides and fungicides; lubricants and fuel additives; paints and coatings; agrochemicals, plant protection products, and veterinary medicines; plastics; and weapons and explosives. More than 140 companies worldwide have already engaged in the manufacture of nanomaterials. The concerns arising from the emergence of nanotechnology include health and safety, environmental, analytical, ethical, policy and regulatory issues. While it is often difficult to predict the future, some things seem inevitable. Just as a ball thrown into the air can be expected to fall to the ground, so we can expect our technology to reach the molecular scale. Introduction The term "nanotechnology" was defined by Tokyo Science University Professor Norio Taniguchi in 1974 as follows: "Nanotechnology mainly consists of the processing of, separation, consolidation, and deformation of materials by one atom or by one molecule" [1]. Engineered nanomaterials, developed using nanotechnology tools, can be defined as materials with dimensions between 1 and 100 nanometers in at least one dimension and possessing unique properties (magnetic, photonic, electronic, etc.). Imagine a supercomputer a billion times more powerful than today's and yet so small it would be barely visible under a light microscope [2]. Clean factories manufacturing without having to worry about pollution choking up the environment. Cheap and abundant solar energy replacing conventional fossil fuels like oil, coal and gas. Building materials that are stronger, lighter and cheaper than the ones used in today's rockets, making lunar vacations no more expensive than, say, a trip to the South Pole. A world where material abundance for all people becomes a reality. Sounds too good to be true? Not for the new breed of scientists who believe that the 21st century could see all these science-fiction dreams come true thanks to nanotechnology, a hybrid of chemistry and engineering that has opened up a whole new world of possibilities which, if taken to their logical conclusion, would completely change us and the world as we know it today [3]. Indeed, so exciting are the prospects of this revolutionary science that countries all over the world are investing in the research and development of nanotechnology. Clearly nanotechnology is slowly but surely capturing the attention of the scientific community, the media and now the public [4]. But just what exactly is nanotechnology, and why is everyone talking about it?
Scanning Electron Microscopy (SEM) SEM is an electron microscope that images the sample surface by scanning it with a high-energy beam of electrons. Conventional light microscopes use a series of glass lenses to bend light waves and create a magnified image, while the SEM creates magnified images by using electrons instead of light waves. Environmental Scanning Electron Microscopy (ESEM) When using the environmental scanning electron microscope (ESEM), it is not necessary to make nonconductive samples conductive. Material samples do not need to be desiccated and coated with gold-palladium, for example, and thus their original characteristics may be preserved for further testing or manipulation. Transmission Electron Microscopy (TEM) TEM is a microscopy technique whereby a beam of electrons is transmitted through an ultra-thin specimen and interacts with it as it passes through the sample. An image is formed from the electrons transmitted through the specimen, magnified and focused by an objective lens, and appears on an imaging screen. The contrast in a TEM image is not like the contrast in a light microscope image: in TEM, a crystalline sample interacts with the electron beam mostly by diffraction rather than by absorption. Atomic Force Microscopy (AFM) AFM is ideal both for qualitatively and quantitatively measuring nanometer-scale surface roughness and for visualizing the surface nano-texture on many types of material surfaces, including polymer nanocomposites and nano-coated materials. The advantages of the AFM for such applications derive from the fact that the AFM is a non-destructive technique with very high 3D spatial resolution. Scanning Tunneling Microscopy (STM) STM is an instrument for producing surface images with atomic-scale lateral resolution, in which a fine probe tip is scanned over the surface of a conducting specimen, with the help of a piezoelectric crystal, at a distance of 0.5-1 nm, and the resulting tunneling current, or the position of the tip required to maintain a constant tunneling current, is monitored. Assemblers A nanoscale device which can be programmed to build more complex nanomachines. It is a tiny machine that is capable of self-replication, and it is inexpensive [5]. By ensuring that each atom is properly placed, assemblers will manufacture products of high quality and reliability. Atomic Force Microscopy Measures the interaction force between the tip and the surface. An atomically sharp tip is scanned over a surface with feedback mechanisms that enable the piezoelectric scanners to maintain the tip at a constant force (to obtain height information) or constant height (to obtain force information) above the sample surface [6]. Tips are made from Si3N4 or Si and extend down from the end of a cantilever. Scanning Tunneling Microscope Provides a three-dimensional profile of the surface to obtain atomic-scale images of metal surfaces; it uses the quantum tunneling effect to view and manipulate nanoscale particles, atoms and small molecules and to map surfaces [7]. It measures a weak electrical current flowing between tip and sample as they are held a very small distance apart. Near-Field Scanning Microscope Scans a very small light source very close to the sample; detection of this light energy forms the image. NSOM can provide resolution below that of the conventional light microscope.
Nano Factory A proposed system in which hypothetical nanomachines combine reactive molecules via mechanosynthesis to build larger, atomically precise parts. These in turn are assembled by positioning mechanisms of assorted sizes to build macroscopic (visible) but still atomically precise products [8]. A nanofactory would be the end result of a convergence between nanotechnology (molecular-scale engineering), rapid prototyping, and automated assembly.

Nano Particle A spherical or capsule-shaped structure. Most are hollow, which provides a central reservoir that can be filled with anticancer drugs, detection agents, and chemicals, known as reporters, which can signal if a drug is having a therapeutic effect [9]. The surface of a nanoparticle can also be adorned with various targeting agents, such as antibodies, drugs, imaging agents and reporters. Ammonium citrate (aqueous) and imidazoline or oleyl alcohol (non-aqueous) are additives for deagglomeration.

Bucky Ball or Fullerene or C 60 A spherical shape with a hollow interior made of 20 hexagons and 12 pentagons. Fullerenes are molecules with an exact number of carbon atoms (60, 70, 120, 180). They are called buckyballs because they resemble the geodesic domes built by architect Buckminster Fuller. They were discovered in 1985 among the byproducts of laser vaporization of graphite, in which the carbon atoms are arranged in sheets [10]. Robert F. Curl Jr. and Richard E. Smalley, both of Rice University in Houston, Texas, and Harold W. Kroto of the University of Sussex in England won the 1996 Nobel Prize for Chemistry for their discovery of buckminsterfullerene, the scientific name for buckyballs [11]. Fullerenes may inhibit HIV by attaching to the virus and thus preventing its replication.

Carbon Nano Tubes Carbon atoms can form extended hollow (cylindrical) tubes instead of closed, hollow spheres. Carbon nanotubes can also form as a series of nested, concentric tubes. Carbon nanotubes can be used as nanometer-scale syringe needles for injecting molecules into cells and as nanoscale probes for making fine-scale measurements [12]. Carbon nanotubes can be filled and capped, forming nanoscale test tubes or potential drug delivery devices. Carbon nanotubes can also be "doped," or modified with small amounts of other elements, giving them electrical properties that range from fully insulating through semiconducting to fully conducting [13]. They are chemically much less reactive than isolated carbon atoms, have the electrical conductivity of graphite but conduct electricity along one axis rather than in all directions, and approach the strength of diamond [14].

Nano Capsules Used as smart drugs or for nano-encapsulation; they have specific chemical receptors and only bind to specific cells. Benefits include higher dose loading with smaller dose volumes, longer site-specific dose retention, more rapid absorption of active drug substances, increased bioavailability of the drug, and higher safety and efficacy.

Nano Crystals Nano crystals are aggregates of a few hundred to tens of thousands of atoms that combine into a crystalline form of matter known as a "cluster". The first atomic-scale images of nanocrystals that help to reduce pollution show a surprising triangular, rather than hexagonal, shape [15]. They are used to make super-strong and long-lasting metal parts. The crystals also might be added to plastics and other metals to make new types of composite structures.
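The face counts quoted above for the buckyball (20 hexagons and 12 pentagons) can be checked with elementary polyhedron counting. The short sketch below recovers the 60 carbon atoms and verifies Euler's formula; the function name is ours.

```python
def fullerene_counts(hexagons, pentagons):
    """Vertex/edge counts for a trivalent carbon cage: every atom
    (vertex) touches 3 faces, every edge is shared by 2 faces."""
    boundary = 6 * hexagons + 5 * pentagons  # face-edge incidences
    vertices = boundary // 3
    edges = boundary // 2
    faces = hexagons + pentagons
    assert vertices - edges + faces == 2, "Euler's formula V - E + F = 2"
    return vertices, edges

atoms, bonds = fullerene_counts(hexagons=20, pentagons=12)
print(atoms, bonds)  # 60 atoms, 90 bonds -> the C60 buckyball
```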
Nano Wires Solid, "one-dimensional" structures. They can be conducting, semiconducting or insulating, and can be crystalline with low defect densities. A nanotube is defined as a long cylinder with inner and outer nm-sized diameters; a nanowire is defined as a long, solid wire with nm diameter [16].

Nanopore Sensor Rapid DNA sequencing; separation of single-stranded and double-stranded DNA in solution; determination of the length of polymers and separation of polymers by length. Potential applications for NASA missions include astronaut health, life detection and the decoding of various genomes [17]. An engineered DNA strand between metal atom contacts could function as a molecular electronics device. Understanding the complex quantum physics via simulation guides design.

Nano Composites A nanocomposite consists of two or more synthesized materials, of which at least one has nanoscale dimensions. There are multiple material possibilities (organic + organic, organic + inorganic, inorganic + inorganic), combining nanoparticles, nanowires or nanotubes with a matrix material [18].

Nano Wire Nano wires are built atom by atom on a solid surface. They can be coated with molecules, e.g. antibodies, that will bind to proteins and other substances of interest to researchers and clinicians [19]. Nano wires are incredibly sensitive to such binding events and respond by altering the electrical current flowing through them, and thus form the basis of ultrasensitive molecular detectors [20]. Crossed semiconductor nanowires can be used to create electrical circuits, a possible future of digital computing.

Application of Nanotechnology Aerospace Structural materials (e.g. saving weight and energy by using light-weight, ultra-rigid materials). Information and communications technology, e.g. more efficient design of data transfer between space vehicles and terrestrial information networks using electronic and optoelectronic nanotechnology components [21]. Sensorics (e.g. improving medical monitoring of astronauts with sensors based on nanostructured materials) and thermal protection and control (e.g. improving thermal control systems through nanostructured diamond-like carbon coatings).

Security and Defence Nanoscale electronics, improved sensory capabilities, enhanced computing power, storage capacity and electromechanical components could make the control and steering of vehicles more effective and robust. Unmanned and autonomous systems in air, sea and space could be further reinforced [21]. The development of nanoscale powders for use in propellants and explosives is enhancing the energy yield and speed of explosion.

Catalysis, Chemistry and Materials Synthesis Chemical industry: catalysis (gold nanoparticles), pigments, coatings and lubricants, micro/nanoreaction technology, and pharmaceuticals and cosmetics. Supramolecular host-guest structures are opening up synthesis routes in organic chemistry. The regioselectivity and stereoselectivity of catalysts can be increased. The higher surface-to-volume ratio means that much more of the catalyst is actively participating in the reaction (a quick numeric illustration follows below) [21]. Surface-active membranes, nanoporous (bio)filters and adsorption agents can be optimized, e.g. for sewage treatment, pollutant removal and byproduct separation.
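To make the surface-to-volume argument above concrete, the sketch below estimates how the surface-to-volume ratio and the fraction of surface atoms grow as a particle shrinks. The spherical-shell model and the atom diameter are crude assumptions of ours, chosen purely for illustration.

```python
def surface_fraction(diameter_nm, atom_diameter_nm=0.25):
    """Rough fraction of atoms sitting on the surface of a spherical
    particle, modeling the surface as a shell one atom thick."""
    r = diameter_nm / 2
    shell = atom_diameter_nm
    if r <= shell:
        return 1.0
    core = (r - shell) ** 3 / r ** 3  # volume fraction of interior atoms
    return 1.0 - core

for d in (100.0, 10.0, 2.0):
    sv = 6.0 / d  # surface-to-volume ratio of a sphere, in 1/nm
    print(f"d = {d:5.1f} nm  S/V = {sv:5.2f} nm^-1  "
          f"surface atoms ~ {surface_fraction(d):.0%}")
```

Under these assumptions, a 2 nm particle has more than half of its atoms on the surface, while a 100 nm particle has only a percent or so, which is why nanoscale catalysts use their material so much more efficiently.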
Food Security Cleaner agriculture and more targeted, preventative treatment. Networks of wireless nanosensors to monitor, e.g., the presence of plant viruses or the level of soil nutrients. Near real-time pathogen detection and location reporting by integrating nano-electromechanical systems (NEMS) with new chip designs, without the need to resort to the widespread use of pesticides. A Smart Field System that automatically detects, locates, reports and applies water, fertilizers (nanoparticles) and pesticides, going beyond sensing to automatic application, harvest timing, and water quality measurement and control [22].

Biomedical Applications of Nanotechnology The premise is that relatively small size, ease of transport within tissues/organs, the ability to cross plasma membranes and the potential to target biologically active molecules will facilitate biomedical applications of nanoparticles in health and disease. About half of all pharmaceutical production will be dependent on nanotechnology, affecting over $180 billion in revenues in 10-15 years [22]. Nanotechnology will expand life spans, improve quality of health and enhance human physical capabilities.

Dendrimer A dendrimer is a tree-like, highly branched polymer molecule (Greek dendra = tree), also called an artificial protein. Dendrimers are synthesized from monomers with new branches added in discrete steps ("generations") to form a tree-like architecture. A high level of synthetic control is achieved through step-wise reactions and purifications at each step to control the size, architecture, functionality and monodispersity [23]. Dendrimers are used for cancer- or tumor-targeting agents, imaging contrast agents to pinpoint tumors, drug molecules for delivery to a tumor, and reporter molecules that might detect whether an anticancer drug is working.

Liposome A type of nanoparticle made of lipids, or fat molecules, surrounding a water core. Liposomes, several of which are widely used to treat infectious diseases and cancer, were the first type of nanoparticle to be used to create therapeutic agents with novel characteristics.

Information Technology The information technology revolution was brought about by the miniaturization of silicon transistors. Further miniaturization is possible: the fundamental physical limit of transistor dimensions, 10-20 nm, is expected to be reached in 10 to 15 years. Future breakthroughs will likely come from nanotechnology: carbon nanotube transistors (smaller and faster) and single-electron transistors (quantum computers). Molecular-scale devices or nanoscale transistors, based on chemical self-assembly, will lead future advances in information technology.

Nano-Optics and Information Storage Photonic crystals for multiplexing and all-optical switching in optical networks. Atomically thin layers of nanostructured material used to substantially increase information storage density.

Consumer Products Nanoscale powders, in their free form, without consolidation or blending, are used by cosmetics manufacturers: titanium dioxide and zinc oxide powders for facial base creams and sunscreen lotions; iron oxide powders as base material for rouge and lipstick. Improved wear and corrosion resistance. Nanocomposite materials, with increased impact strength, for automobiles.

Textile Industry Integration of nanorobots to give an article of clothing the capability to mend itself. Improved characteristics and functions such as crease resistance, breathing properties, wear resistance, spot and water repellence, antistatic properties, active ingredient storage or fire protection.
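The generation-by-generation growth described for dendrimers above is easy to quantify. The sketch below counts terminal (surface) groups of an idealized dendrimer; the core and branch multiplicities are hypothetical values of ours, since the text does not specify a particular chemistry.

```python
def dendrimer_surface_groups(generation, core_multiplicity=3,
                             branch_multiplicity=2):
    """Terminal-group count of an idealized dendrimer: the core starts
    with c branches, and each branch splits into b new branches at
    every generation step."""
    return core_multiplicity * branch_multiplicity ** generation

for g in range(5):
    print(f"generation {g}: {dendrimer_surface_groups(g)} surface groups")
# Exponential growth in surface groups is what makes dendrimers useful
# carriers: each terminal group is an attachment point for a drug,
# imaging agent or targeting ligand.
```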
Quantum Dot (Qdots) A semiconductor crystal with a diameter of a few nanometers. Being quasi-zero-dimensional, quantum dots have a sharper density of states than higher-dimensional structures. They have superior transport and optical properties, and are being researched for use in diode lasers and detectors, and for solid-state quantum computation. By applying a small voltage to the leads, one can control the flow of electrons through the quantum dot and thereby make precise measurements of the spin and other properties therein. Because of their small size, quantum dots can function as cell- and even molecule-specific markers.

Future Lies in Nanotechnology Nanotechnology would give us an opportunity, if we take appropriate and timely action, to become one of the important technological nations in the world. The world market in 2005 for nanomaterials, nanotools, nanodevices and nanobiotechnology, put together, was expected to be over a hundred billion dollars. Nanotechnology is a new technology that is knocking at our doors [23].

Conclusions Due to their highly reactive surface area, nanoparticles have very high surface adsorption properties, which can be exploited for different purposes. Due to the high and irreversible adsorption of toxic metals on nanoparticles, they can be used for the remediation of metal-contaminated soils and water. Nanomaterials can be used for the production of cementing or coating agents for controlled or slow-release fertilizers; they have proven superior to other coating agents. As sizes decrease, computers will compute faster, materials will become stronger, and medicine will cure more diseases.
Parameter-Efficient Sparse Retrievers and Rerankers using Adapters

Parameter-efficient transfer learning with adapters has been studied in Natural Language Processing (NLP) as an alternative to full fine-tuning. Adapters are memory-efficient and scale well with downstream tasks by training small bottle-neck layers added between transformer layers while keeping the large pretrained language model (PLM) frozen. In spite of showing promising results in NLP, these methods are under-explored in Information Retrieval. While previous studies have only experimented with dense retrievers or in a cross-lingual retrieval scenario, in this paper we aim to complete the picture on the use of adapters in IR. First, we study adapters for SPLADE, a sparse retriever, for which adapters not only retain the efficiency and effectiveness otherwise achieved by fine-tuning, but are memory-efficient and orders of magnitude lighter to train. We observe that Adapters-SPLADE not only optimizes just 2% of training parameters, but outperforms its fully fine-tuned counterpart and existing parameter-efficient dense IR models on IR benchmark datasets. Secondly, we address domain adaptation of neural retrieval thanks to adapters on cross-domain BEIR datasets and TripClick. Finally, we also consider knowledge sharing between rerankers and first-stage rankers. Overall, our study completes the examination of adapters for neural IR.

Introduction Information Retrieval (IR) systems often aim to return a ranked list of documents ordered with respect to their relevance to a user query. In modern web search engines there is, in fact, not a single retrieval model but several ones specialized in diverse information needs, such as different search verticals. To add to this complexity, multi-stage retrieval considers an effectiveness-efficiency trade-off where first-stage retrievers are essential for fast retrieval of potentially relevant candidate documents from a large corpus. Further down the pipeline, rerankers are added, focusing on effectiveness. With the advent of large Pretrained Language Models (PLM), recent neural retrieval models have millions of parameters. Training, updating and adapting such models implies significant computing and storage costs, calling for efficient methods. Moreover, generalizability across out-of-domain datasets is critical, and even when effectively adapted to new domains, full fine-tuning often comes at the expense of large storage and catastrophic forgetting. Fortunately, such research questions have already been studied in the NLP literature [1,2,9,10] with parameter-efficient tuning. In spite of very recent work exploring parameter-efficient techniques for neural retrieval, the use of adapters in IR has been overlooked. Previous work on dense retrievers had mixed results [11], and successful adaptation was achieved for cross-lingual retrieval [17]. Our study aims to complete the examination of adapters for neural IR and investigates them with neural sparse retrievers. We study the ablation of adapter layers to analyze whether all layers contribute equally. We examine how the adapter-tuned neural sparse retriever SPLADE [5] fares on the benchmark IR datasets MS MARCO [21], TREC DL 2019 and 2020 [3] and the out-of-domain BEIR datasets [30]. We explore whether the generalizability of SPLADE can be further improved with adapter-tuning on BEIR and on out-of-domain datasets such as TripClick [26]. In addition, we examine knowledge transfer between first-stage retrievers and rerankers with full fine-tuning and adapter-tuning.
To the best of our knowledge, this is the first work that studies adapters on sparse retrievers, focuses on sparse models' generalizability and explores knowledge transfer between retrievers in different stages of the retrieval pipeline. In summary, we address the following research questions:
1. RQ1: What is the efficiency-accuracy trade-off of parameter-efficient fine-tuning with adapters on the sparse retriever model SPLADE?
2. RQ2: How does each adapter layer ablation affect retrieval effectiveness?
3. RQ3: Are adapters effective for adapting neural sparse retrieval to a new domain?
4. RQ4: Could adapters be used to share knowledge between rerankers and first-stage rankers?

Background and Related Work Parameter-efficient transfer learning techniques aim to adapt large pretrained models to downstream tasks using a fraction of the training parameters, achieving effectiveness comparable to full fine-tuning. Such methods [9,15,10,25,28] are memory-efficient and scale well to numerous downstream tasks due to the massive reduction in task-specific trainable parameters. This makes them an attractive solution for efficient storage and deployment compared to fully fine-tuned instances. Such methods have been successfully applied to language translation [25], natural language generation [16], tabular question answering [22], and the GLUE benchmark [7,28]. In spite of all their advantages and a large research footprint in NLP, parameter-efficient methods remain under-explored in IR. A recent comprehensive study [4] categorises parameter-efficient transfer learning into three categories: 1) addition-based, 2) specification-based, and 3) reparameterization-based. Addition-based methods insert intermediate modules into the pretrained model. The newly added modules are adapted to the downstream task while keeping the rest of the pretrained model frozen. The modules can be added vertically, increasing the model depth, as observed in Houlsby adapters [9] and Pfeiffer adapters [25]. Houlsby adapters insert small bottle-neck layers after both the multi-head attention and the feed-forward layer of each transformer layer, which are optimized for NLP tasks on the GLUE benchmark. The Pfeiffer adapter inserts the bottle-neck layer after only the feed-forward layer and has shown effectiveness comparable to fine-tuning on various NLP tasks. Prompt-based adapter methods such as prefix-tuning [15] prepend continuous task-specific vectors to the input sequence, which are optimized as free parameters. Compacter [20] hypothesizes that the model can be optimized by learning transformations of the bottle-neck layer in a low-rank subspace, leading to fewer parameters. Specification-based methods fine-tune only a subset of the pretrained model parameters to the task at hand while keeping the rest of the model frozen. The fine-tuned model parameters can be only the bias terms, as observed in BitFit [2], or only the cross-attention weights, as in the case of Seq2Seq models with X-Attention [6]. Reparameterization methods transform the pretrained weights into a parameter-efficient form during training. This is observed in LoRA [10], which optimises rank decomposition matrices of a pretrained layer while keeping the original layer frozen. Recent studies exploring parameter-efficient transfer learning for Information Retrieval show promising results of such techniques for dense retrieval models [11,17,19,29]. [11] studies parameter-efficient prefix-tuning [15] and LoRA [10] on bi-encoder and cross-encoder dense models.
Additionally, they combine the two methods by sequentially optimizing one method for m epochs, freezing it and optimizing the other for n epochs. Their studies show that while cross-encoders with LoRA and LoRA+ (50% more parameters compared to LoRA) outperform fine-tuning with TwinBERT [18] and ColBERT [13], parameter-efficient methods do not outperform fine-tuning for bi-encoders across all datasets. [17] uses parameter-efficient techniques such as sparse fine-tuning masks and adapters for multilingual and cross-lingual retrieval tasks with rerankers. They train language adapters with the masked language modeling (MLM hereafter) task and then task-specific retrieval adapters. This enables the fusion of a reranking adapter trained with source-language data together with the language adapter of the target language. Concurrent to our work, [29] studies parameter-efficient prompt-tuning techniques such as prefix-tuning and P-tuning v2, specification-based methods such as BitFit, and adapter-tuning with Pfeiffer adapters on late-interaction bi-encoder models such as Dense Passage Retrieval [12] and ColBERT. They are motivated by cross-domain generalization of dense retrievers and achieve better results with P-tuning compared to fine-tuning on the BEIR benchmark. [19] studies various parameter-efficient tuning procedures at both retrieval and re-ranking stages. They conduct a comprehensive study of parameter-efficient techniques such as BitFit, prefix-tuning, adapters, LoRA and MAM adapters with dense bi-encoders and cross-encoders with BERT-base as the backbone model. Their parameter-efficient techniques achieve effectiveness comparable to fine-tuning on top-20 retrieval accuracy and marginal gains on top-100 retrieval accuracy. Compared to prior works, our experiments first study the use of adapters for state-of-the-art sparse models such as SPLADE, contrary to previous work that studied dense bi-encoder models. Furthermore, our results show improvements compared to the previous studies. We also study the case of using distinct adapters for query and document encoders in a "bi-adapter" setting, where the same pretrained backbone model is used by both the query and the document encoder but different adapters are trained for the queries and the documents. Secondly, we address another research question ignored by previous work: efficient domain adaptation for neural first-stage rankers. We start from a trained neural ranker and study adaptation with adapters on a different domain, such as the ones present in the BEIR benchmark. Finally, we also study parameter sharing between rerankers and first-stage rankers using adapters, which to our knowledge has not been studied yet.

Parameter-Efficient Retrieval with Adapters In this section, we first present the self-attention used in transformers and how the adapters we use for our experiments interact with it. We then introduce the models used for first-stage ranking and reranking.

Self-Attention Transformer Layers Large pretrained language models are based on the transformer architecture composed of N stacked transformer layers. Each transformer layer comprises a fully connected feed-forward module and a multi-headed self-attention module. Each attention layer is a function of a query matrix Q ∈ R^{n×d_k}, a key matrix K and a value matrix V.
The attention can be formally written as:

A = Attention(Q, K, V) = softmax(QK^T / √d_k) V    (1)

where the query Q, key K and value V are parameterized by weight matrices, Q = xW_Q, K = xW_K and V = xW_V. Each of the N heads has its respective Q_i, V_i and K_i weights and its corresponding attention A_i. The feed-forward layer takes as input a transformation of the concatenation of the N attentions as:

FFN(h) = σ(hW_1 + b_1)W_2 + b_2, with h = Concat(A_1, ..., A_N) W_O    (2)

where σ(.) is the activation function. A residual connection is further added after each attention layer and feed-forward layer.

Adapters In this paper, we focus on the Houlsby adapter [9], which as described in Section 3 can be considered an additive adapter and is depicted in Figure 1. An additive adapter inserts trainable parameters in addition to the aforementioned transformer layers. The added modules form a bottle-neck architecture with a down-projection, an up-projection and a non-linear transformation. The size of the bottle-neck controls the number of training parameters in an adapter layer. Additionally, a residual connection is applied across each adapter layer. Finally, a layer normalization is added after each transformer sublayer. Formally, this is defined as:

Adapter(x) = x + W_up σ(W_down x)    (3)

where x ∈ R^d is the input to the adapter layer, W_down ∈ R^{d×r} is the down-projection matrix transforming the input x into the bottle-neck dimension r, and W_up ∈ R^{r×d} is the up-projection matrix transforming the bottle-neck representation back to the d-dimensional space. Each adapter layer is initialized with near-identity weights to enable stable training.

Neural Sparse First Stage Retrievers Neural sparse first-stage retrievers learn contextualized representations of documents and queries in a sparse high-dimensional latent space. In this work, we focus on the SPLADE sparse retriever [5,14], which uses both L1 and FLOPS regularizations to force sparsity. We freeze the pretrained language model while training the adapter layers. SPLADE predicts the term weight of each vocabulary token j with respect to an input token i as:

w_ij = transform(h_i)^T E_j + b_j    (4)

where E_j is the j-th vocabulary token embedding, b_j is its bias, h_i is the i-th input token embedding, and transform(.) is a linear transformation followed by GeLU activation and LayerNorm. The final term importance for each vocabulary term j is obtained by taking the maximum predicted weight over the entire input sequence of length n, after applying a log-saturation effect:

w_j = max_{i=1..n} log(1 + ReLU(w_ij))    (5)

Given a query q, the ranking score s of a document d is defined by the degree to which it is relevant to q, obtained as a dot product s(q, d) = w(q) · w(d). The learning objective is to discriminate representations obtained from Equation 5 of a relevant document d⁺ from non-relevant hard negatives d⁻ obtained from BM25 and in-batch negatives d⁻_{i,j} by minimizing the contrastive loss:

L_rank = −log [ e^{s(q_i, d_i⁺)} / ( e^{s(q_i, d_i⁺)} + e^{s(q_i, d_i⁻)} + Σ_j e^{s(q_i, d⁻_{i,j})} ) ]    (6)

SPLADE can be further improved with distillation. The learning objective here is to minimize the MarginMSE [5] loss, the mean squared error between the positive-negative margins of a cross-encoder teacher and the student:

L_MarginMSE = MSE(M_s, M_t)    (7)

where MSE is the mean squared error, M_t is the teacher's margin and M_s is the student's margin. The final objective optimizes either of the objectives in Equation 6 or 7 with regularization losses:

L = L_rank + λ_q ℓ_reg(q) + λ_d ℓ_FLOPS(d)    (8)

The FLOPS regularizer is a smooth relaxation of the average number of floating-point operations necessary to compute the score of a document, and hence directly related to the retrieval time. It is defined using a continuous relaxation of the activation (i.e. the term has a non-zero weight) probability a_j for token j, estimated for documents d in a batch of size N as ā_j = (1/N) Σ_{i=1}^{N} w_j^{(d_i)}, giving ℓ_FLOPS = Σ_{j∈V} ā_j².
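To make Equations (3)-(5) and the FLOPS term of Equation (8) concrete, here is a minimal PyTorch sketch of a Houlsby-style bottleneck adapter, SPLADE's max pooling, and the batch FLOPS regularizer. This is an illustration written for this text, not the authors' code; module and tensor names are ours, and the reduction factor of 16 matches the setting used in the experiments below.

```python
import torch
import torch.nn as nn

class HoulsbyAdapter(nn.Module):
    """Bottleneck adapter of Eq. (3): down-project, non-linearity,
    up-project, plus a residual connection around the block."""
    def __init__(self, d_model: int, reduction: int = 16):
        super().__init__()
        r = d_model // reduction
        self.down = nn.Linear(d_model, r)
        self.up = nn.Linear(r, d_model)
        self.act = nn.GELU()
        # Near-identity initialization for stable training.
        nn.init.normal_(self.down.weight, std=1e-3)
        nn.init.zeros_(self.down.bias)
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

def splade_pooling(mlm_logits, attention_mask):
    """Eqs. (4)-(5): mlm_logits are the w_ij scores from the (frozen)
    MLM head, shape (batch, seq_len, vocab). Max over tokens of
    log(1 + ReLU(w_ij)) gives one sparse vector per sequence."""
    sat = torch.log1p(torch.relu(mlm_logits))
    sat = sat * attention_mask.unsqueeze(-1)   # zero out padding
    return sat.max(dim=1).values               # (batch, vocab)

def flops_regularizer(doc_reps):
    """FLOPS term of Eq. (8): squared mean activation per vocabulary
    entry, summed over the vocabulary, for a batch of document vectors."""
    return (doc_reps.mean(dim=0) ** 2).sum()

if __name__ == "__main__":
    q = splade_pooling(torch.randn(2, 8, 30522), torch.ones(2, 8))
    d = splade_pooling(torch.randn(2, 64, 30522), torch.ones(2, 64))
    score = (q * d).sum(dim=1)                 # s(q, d) = w(q) . w(d)
    print(score, flops_regularizer(d))
```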
Retrieval FLOPS: SPLADE also reports the retrieval FLOPS (noted R-FLOPS), i.e., the number of floating-point operations on the inverted index needed to return the list of documents for a given query. The R-FLOPS metric is defined as an estimation of the average number of floating-point operations between a query and a document, i.e. the expectation

R-FLOPS = E_{q,d} [ Σ_{j∈V} p_j^{(q)} p_j^{(d)} ]

where p_j^{(q)} and p_j^{(d)} are the activation probabilities for token j in a query q and a document d, respectively. It is empirically estimated from a set of approximately 100k development queries on the MS MARCO collection. It is thus an indication of the inverted index sparsity and of the computational cost for a sparse model (which is different from the inference, i.e. forward, cost of the model).

Cross-Encoding Rerankers Another way to use PLMs for neural retrieval is what is called "cross-encoding" [33]. In this case, both query and document are concatenated before being provided to the network, and the score is directly computed by the network. The cross-encoding procedure allows for networks that are much more effective, but this effectiveness comes with a cost on efficiency, as the retrieval procedure now has to go through the entire network for each query-document pair, instead of being able to precompute document representations and only go through the network for the query representation. The models are trained with a contrastive loss as seen in Equation (6) that aims to maximize the score of the true query/document pair compared to a BM25 negative query/document pair, without using in-batch negatives.

Experimental Setting and Results We use the SPLADE github repository to implement our modifications and followed the standard procedure to train SPLADE models. We implement our SPLADE models using an L1 regularization for the query and a FLOPS regularization for the document, following [14]. Unless otherwise stated, the document regularization weight λ_d is set to 9e−5 and the query regularization weight λ_q to 5e−4 to train all variants of Adapters-SPLADE. In order to mitigate the contribution of the regularizer at the early stages of training, we follow [23] and use a scheduler for λ, quadratically increasing λ at each training iteration until the 50k-th step. We use a learning rate of 8e−5, a batch size of 128, a linear scheduler and 6000 warmup steps. We set the maximum sequence length to 256. We train for 300k iterations and keep the best checkpoint using MRR@10 on the validation set. We use a bottle-neck reduction factor of 16 (i.e. 16 times smaller) for all adapter layers. We use PyTorch [24], Huggingface Transformers [31] and AdapterHub [1] to train all models on 4 Tesla V100 GPUs with 32GB memory. We compute statistical significance with p ≤ 0.05 using Student's t-test and use superscripts to identify statistical significance for almost all measures, save for metrics related to BEIR.

RQ1: Adapters-SPLADE We study two different settings of encoding with adapters. The first, called adapter, is a mono-encoder setup where the query and document share a single encoder. The adapter layers are optimized with both input sequences, keeping the PLM frozen. The second setting, inspired by [14], is a bi-encoder setup which separates query and document encoders by training distinct query and document adapters on a shared frozen PLM. We call this setting bi-adapter.
This setting not only benefits from optimizing dedicated adapters per input sequence type (different lengths of queries/documents, etc.); it also makes it possible to use smaller PLMs for the queries instead of sharing PLM weights. We explore different backbone PLMs: DistilBERT and CC+MLM FLOPS, a cocondenser PLM further pretrained on the masked language modeling (MLM) task using the FLOPS regularization in order to make it easier to work with SPLADE, introduced in [14]. We trained and evaluated Adapter-SPLADE models on the MS MARCO passage ranking dataset [21] in the full ranking setting. The results for fine-tuning with BM25 triplets are available in Table 1, whereas in Table 2 we report the results of training models with distillation. For distillation, we use hard negatives and scores generated by a cross-encoder reranker and the MarginMSE loss as described in [5], and set λ_d to 1e−2 and λ_q to 9e−2. To study the efficiency-effectiveness trade-off of Adapters-SPLADE, we compare the effectiveness, R-FLOPS size and number of training parameters of adapter-tuned models with their baseline fine-tuned counterparts having the same backbone PLM. [23] first showed that R-FLOPS reduction is a reasonable measure of retrieval speed. R-FLOPS measures the average number of floating-point operations needed to compute a document score during retrieval. A sparse embedding, and consequently a lower FLOPS count, achieves a retrieval speedup of the order of 1/p² over an inverted index, where p is the probability of each document embedding dimension being non-zero. Overall, we observe from Tables 1 and 2 that all variants of adapter-tuned SPLADE outperform all baseline fine-tuned counterparts on MS MARCO and TREC DL 2019. The distilled cocondenser-with-MLM mono-encoder model is the highest performing, with an MRR@10 score of 0.390 and R@100 of 0.983. The difference in effectiveness between mono-encoder and bi-encoder adapter-tuning is marginal and depends on the PLM. Most noteworthy, we also observe that the R-FLOPS are lower for adapter-tuned models, indicating sparser representations than the fine-tuned counterparts. This is more pronounced in the adapter-tuned models with distillation. Finally, the bi-adapter models have even lower R-FLOPS than the mono-encoder settings, which shows that for the same effectiveness the bi-adapter models are more efficient and sparse. We also observe that the number of training parameters is only 2.23% of the total model parameters for triplets training (1.5M/67M for mono-adapter DistilBERT, 3M/135M for bi-adapter DistilBERT, 2M/111M for CC + MLM FLOPS) and 2.16% for the distillation process (1.5M/67M for mono-adapter DistilBERT, 2M/111M for CC + MLM FLOPS). This has direct consequences in low-hardware settings, where adapters, with their lower number of training parameters and gradients, can be trained on a smaller GPU (such as a 24GB P40) on which full fine-tuning is infeasible. Overall, there is a clear advantage in using Adapter-SPLADE over fine-tuning, which differs from the previous results on dense adapters [11]. We also evaluate on the full BEIR benchmark [30], comprising 18 different datasets, to measure the generalizability of IR models with zero-shot effectiveness on out-of-domain data. The results are listed in Table 3. We observe that in the mono-adapter triplets training, the adapter outperforms fine-tuning on mean nDCG@10, with the largest gap on ArguAna. With CC+MLM FLOPS as the backbone model, fine-tuning and adapter-tuning perform similarly.
However, adapter scores drop on models trained with distillation. This can be attributed to the adapter representations being sparser compared to the fine-tuned models. As depicted by the R-FLOPS in Table 1, adapter-tuned DistilBERT has less than half the number of R-FLOPS of its fine-tuned counterpart, whereas the CC+MLM FLOPS fine-tuned model has approximately 1.87 times the number of R-FLOPS of the adapter-tuned model. This is reflected in model representation capacity in the zero-shot setting in Table 3. However, as discussed in Section 4.3, adapters are well suited for domain adaptation when trained on out-of-domain datasets, keeping the backbone retriever intact and free from catastrophic forgetting.

RQ2: Adapter Layer Ablation Furthermore, we perform an extensive adapter layer ablation by progressively removing adapter layers from the early layers of the encoder. Doing so results in a separate model for each layer-ablation setting. The frozen pretrained model for our ablation studies is DistilBERT in a mono-encoder setting, where the same instance of the encoder is used to encode both the document and the query, which is the same configuration as the adapter method in Table 1. This results in a total of 6 configurations for the ablation study, corresponding to the 6 adapter layers after each pretrained transformer layer. The final experimental setting removes all 6 adapter layers (0−5) and fine-tunes only the language model head. We note that such an experiment (dropping adapter layers from transformer models) has been studied in NLP [28] and was shown to improve both training and inference time while retaining comparable effectiveness. We report the effectiveness of each adapter ablation setting on MS MARCO, TREC DL 2019 and TREC DL 2020 in Table 4. We observe a gradual performance drop for MS MARCO and the TREC DL datasets as the training parameters decrease with the progressive removal of adapter layers, as shown in Table 4. The drop is significantly higher (a drop of 0.25 in MRR score) when layers are removed from the second half of the model (≥ 0−3). This phenomenon is consistent with studies in NLP [22,28] showing that task-specific information is stored in the later layers of the adapters. For the BEIR datasets, this effectiveness drop is not as evident until all adapters but the language model head are removed (configuration 0−5). The last configuration also has less sparsity, as observed from its R-FLOPS size of 2.78 compared to the other configurations. We also observe that the training time drops proportionally to the drop in adapter layers. The training time for adapter-tuning without any drop in adapter layers is 34.42 hours on 4 Tesla V100 GPUs for 150,000 iterations, and it drops to 26.70 hours with only a 1% drop in MRR when the first 0−2 adapter layers are dropped. The lowest training time is 21.35 hours, with a drop of 3.2% in MRR, for the configuration with all adapters dropped but the language model head.

RQ3: Out-of-Domain Dataset Adaptation For the next research question, we want to check how adapters compare to full fine-tuning when adapting a model trained on MS MARCO to a smaller out-of-domain dataset. We evaluate this question under two scenarios: i) BEIR and ii) TripClick. BEIR: On the BEIR benchmark we use 3 datasets (FEVER, FiQA and NFCorpus) that have training, development and test sets and target very different domains and tasks (fact checking, financial QA and bio-medical IR). We start from a pre-finetuned SPLADE model called "splade-cocondenser-ensembledistil" made available in [5].
We verify the effectiveness of the models in zero-shot and get a first set of hard negatives. These hard negatives are then used to train either via fine-tuning of all parameters or via the introduction of adapters. The networks are trained for either 10 (FEVER) or 100 epochs (FiQA and NFCorpus), and at the end of each epoch we compute the development set effectiveness. We use the models with the best development set effectiveness to compute the 1st-round test set effectiveness and to generate hard negatives that are used for another round of training that we call the 2nd round (which repeats the 1st round, starting from the best network of the 1st round and using negatives from the 1st round). Results are available in Table 5. Fine-tuning is not always able to improve the results over zero-shot, mostly due to overfitting on the training/dev sets. For example, on FEVER, fine-tuning first collapses the representations, as it can easily overfit to the training data even without using many words, and only in the second round of training does it start using more dimensions. On the other hand, adapter-tuning is able to consistently improve the effectiveness over the zero-shot and first rounds (even if it does not always perform the best, as is the case on NFCorpus). Overall, we conclude that adapters are more stable than fine-tuning when adapting to these specific domains. TripClick: TripClick is a health-search click log whose queries are bucketed by frequency into Head, Torso and Tail sets. For the Head queries, a DCTR click model was employed to create relevance signals; otherwise raw clicks were used. We use the triplets released by [8]. Similarly to the BEIR experiments, we start from the "splade-cocondenser-ensembledistil" SPLADE model and fine-tune or adapter-tune it over 100,000 iterations (batch size equal to 100). As shown in Table 6, adapter-tuning shows very competitive results, on par with fine-tuning for the Head category (frequent queries), and achieving even better results for the less frequent queries (Torso and Tail).

RQ4: Knowledge Sharing between Rerankers and First Stage Rankers The final research question explores sharing knowledge between rerankers and first-stage rankers. We explore this by transforming first-stage rankers into rerankers. First, we tune the pretrained DistilBERT for the reranking task as a baseline for both fine-tuning and adapter-tuning. We then test transforming both sparse (splade-cocondenser) and dense (tct_colbert-v2-msmarco) first-stage rankers into rerankers, using either fine-tuning or adapter-tuning. To be clear, the cross-encoder is initialized with the weights of the aforementioned first-stage models, but the reranker classification head on the CLS token is randomly initialized. Also note that we rerank the top-1k returned by "splade-cocondenser-ensembledistil" (represented by "first stage" in the table). We compare adapter-tuning with fine-tuning and display the results in Table 7. We observe that fine-tuning the baseline model (DistilBERT) is better than adapter-tuning. When using first-stage rankers, results are mixed. Dense first-stage rankers were able to learn similarly with both adapter-tuning and fine-tuning. However, this was not the case for the sparse first-stage ranker (splade-cocondenser-ensembledistil).
We posit that this may come from two different reasons: i) the SPLADE model does not focus on the CLS representation, but on the MLM head representations of all tokens, thus needing more flexibility; ii) the model has been trained multiple times (initial BERT training, then condenser, then cocondenser and finally SPLADE), while not always using the same precision (fp16 or fp32), which under preliminary analysis seems to have made some parts of the model unusable for cross-encoding without full fine-tuning. Overall, there is a slight gain in using the first-stage model for the reranker. However, there is no increase in effectiveness from using adapters; we actually see worse effectiveness in all settings.

Conclusion Retrieval models based on PLMs require fine-tuning millions of parameters, which makes them memory-inefficient and non-scalable for out-of-domain adaptation. This motivates the need for efficient methods to adapt them to information retrieval tasks. In this paper, we examine adapters for sparse retrieval models. We show that with approximately 2% of training parameters, adapters can be successfully employed for SPLADE models with comparable or even better effectiveness on benchmark IR datasets such as MS MARCO and TREC. We further analyze adapter layer ablation and find that a further reduction of training parameters to 1.8% retains the effectiveness of full fine-tuning. For domain adaptation, adapters are more stable and outperform fine-tuning, which is prone to overfitting. On the TripClick dataset, adapters outperform fine-tuning on precision metrics for Torso and Tail queries and perform comparably on Head queries. We explore knowledge transfer between first-stage rankers and rerankers as a final study. Adapters underperform full fine-tuning when trying to reuse a sparse model as a reranker. Dense first-stage rankers perform similarly with adapters and fine-tuning, while the sparse first-stage ranker is less effective compared to fine-tuning. We leave this as future work. As memory-efficient adapters are effective for SPLADE, we also leave the study of larger sparse models and their generalizability for future work. Finally, an interesting scenario could also be to tackle unsupervised domain adaptation with adapters.
Generation non-universality and flavor changing neutral currents in the 331 model

The 331 model, an extension of the standard electroweak theory to SU(3)_L × U(1)_X, naturally predicts three families of quarks and leptons via the requirement of anomaly cancellation. This is accomplished by making one of the quark families transform differently from the other two, thus leading to flavor changing neutral currents. Using experimental input on neutral meson mixing, we show that the third family must be the one that is singled out, at least up to small family mixing. We additionally describe a convenient way to parametrize the new mixing matrix that plays a role in the gauge interactions of the ordinary quarks with the new 331 quarks.

The 331 model is an SU(3)_L × U(1)_X extension of the standard SU(2)_L × U(1)_Y electroweak theory [1,2]. Previous analyses of Z′ FCNC contributions to neutral meson mass splittings have attempted to put a lower bound on the allowed Z′ mass [1]. However, it has since been realized that unknown mixing parameters beyond the ordinary CKM matrix prevent one from making quantitative statements about such a lower bound [3,5]. In this paper, we show that while Z′ FCNC constraints do not rule out the 331 model, the theoretical upper bound on the Z′ mass may instead be used to greatly restrict the unknown mixing parameters. This is essentially the opposite approach from that taken previously [1,3,5]. Additionally, we clarify some of the confusion over whether the first or the third family of quarks must be taken to transform differently. In order to understand the origin of the Z′ FCNC in the 331 model, we begin by describing the fermion representations. While all three lepton families are treated identically, anomaly cancellation requires that one of the three quark families transform differently from the other two [1,2]. In particular, cancelling the pure SU(3)_L anomaly requires that there are the same number of triplets as anti-triplets. Putting the three lepton families in as anti-triplets, and taking into account the three quark colors, we find that two families of quarks must transform as triplets and the third must transform as an anti-triplet. In terms of weak eigenstates, we do not need to distinguish which family falls in the anti-triplet. However, as we demonstrate later, it is convenient to think of the different family as the third family. We thus denote the first two families as SU(3)_L triplets and the third family as an anti-triplet; a relative sign in the anti-triplet ensures that the SU(2)_L quark doublet, when embedded in SU(3)_L, has the conventional form. Using the standard normalization of non-abelian generators, the hypercharge is embedded as a linear combination of the diagonal generator T⁸ and the U(1)_X charge X. When 331 is broken to the SM, the neutral gauge bosons W⁸_µ and X_µ mix to give the Z′_µ and hypercharge B_µ gauge bosons. This mixing may be parametrized by a 331 mixing angle θ_331 (generalizing the Weinberg angle), defined in terms of g and g_X, the SU(3)_L and U(1)_X coupling constants [4]; the hypercharge coupling constant g′ is given by tan θ_W = g′/g. In terms of W⁸_µ and X_µ, the hypercharge and Z′_µ gauge bosons are given by a rotation parametrized by θ_331. Since the Z′ is a combination of W⁸ and X, it couples to fermions through a neutral current J_Z′; using cos θ_331 = √3 tan θ_W, this current may be written in terms of T⁸ and the charges of the fermions. Since the value of T⁸ is different for triplets and anti-triplets, the Z′ coupling to left-handed ordinary quarks is different for the third family and thus flavor changing.
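As a quick numeric aside, the relation cos θ_331 = √3 tan θ_W quoted above can be evaluated directly. In the sketch below, the value of sin²θ_W is an assumption on our part (roughly its measured value at the Z pole), not a number taken from this paper.

```python
import math

# Numeric check of cos(theta_331) = sqrt(3) * tan(theta_W).
sin2_w = 0.231  # assumed approximate value of sin^2(theta_W) at M_Z
tan_w = math.sqrt(sin2_w / (1.0 - sin2_w))

cos_331 = math.sqrt(3.0) * tan_w
print(f"cos(theta_331) = {cos_331:.3f}")                         # ~0.949
print(f"theta_331      = {math.degrees(math.acos(cos_331)):.1f} deg")

# The relation is only consistent while sqrt(3)*tan(theta_W) <= 1, i.e.
# sin^2(theta_W) <= 1/4; since sin^2(theta_W) grows with the energy
# scale, this is the origin of the theoretical upper bound on the Z'
# mass invoked throughout the text.
```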
If we assume J_Z′ has a "standard" form for quark triplets, then the flavor changing interaction occurs for the third (weak-eigenstate) family and may be written in terms of the left-handed third-family currents, for both up- and down-type quarks (γ_L = (1 − γ₅)/2 is the left-handed projection operator). Other than in the scalar sector, this is the only tree-level FCNC interaction present, since when 331 is broken to the SM, all three families of ordinary quarks are in the usual SU(2)_L doublets and thus couple in the ordinary manner to the Z and photon. The dilepton currents are also sensitive to the SU(3)_L structure of the quark representations, and hence to the difference in the third family. However, with only ordinary external quarks, these dilepton effects first show up at loop level. Since tree-level Z′ FCNC presumably dominates over loop processes, a good place to study the effects of dilepton exchange on flavor changing interactions would be the process b → sγ, which cannot occur at tree level. After SU(2)_L × U(1)_Y breaking, the weak eigenstate Z and Z′ may mix, forming mass eigenstates Z₁ and Z₂. This mixing of the neutral gauge bosons may be parametrized by a mixing angle φ. A fit to precision electroweak observables gives a limit on the mixing angle of −0.0006 < φ < 0.0042 and a lower bound on the mass of the heavy Z₂ of M_Z₂ > 490 GeV (both at 90% C.L.) [6]. While this mass limit is not as strong as the indirect limit M_Z₂ > 1.4 TeV given by the dilepton mass bound and the M_Z₂-M_Y relation of the minimal Higgs sector [7], it is insensitive to the choice of Higgs representations and provides an independent lower bound on M_Z₂. Due to the mixing, the mass eigenstate Z₁ now picks up flavor changing couplings proportional to sin φ. In particular, using (6) and (9), the flavor changing Z₁ couplings may be written down explicitly [3]. For sufficiently large mixing, the flavor changing Z₁ decays may be observable. However, because Z-Z′ mixing is constrained to be very small, evidence of 331 FCNC can only be probed indirectly at present via the Z₂ couplings. In order to examine the flavor changing Z′ interaction given in (8), we need to relate weak and mass eigenstate quarks. Symmetry breaking and mass generation in the minimal 331 model are accomplished by four Higgs multiplets: three triplets with X charges 1, 0, and −1 respectively, and a sextet H with X = 0 [2-4,8]. We write the triplets in terms of SU(2)_L component fields, among them the Goldstone boson doublet corresponding to the massive dileptons and the triplet Φ. A third SM doublet arises from the sextet H, but plays no role in generating quark masses. The vacuum expectation value of Φ breaks 331 and gives masses to the new quarks D, S, and T. The remaining scalars implement SU(2)_L × U(1)_Y breaking and give masses to the remaining fermions. In particular, the most general gauge invariant Yukawa couplings of the above scalars to the quarks may be written down, with i, j = 1, 2 running through the first two families only and k = 1, 2, 3. As usual, the primes denote weak eigenstates. Since T is the only charge 5/3 quark, it is a simultaneous gauge and mass eigenstate. When 331 is reduced to the SM, the Yukawa interactions may be written in terms of ordinary left-handed quark doublets q_Li = (u_i, d_i)ᵀ_L and singlets. We separate L into two pieces: L₀, which contains only lepton number L = 0 scalars, and L₂, which has |L| = 2 scalars that change ordinary and new quarks into each other. Because the third family of quarks is treated differently, it has different couplings to the scalars as well as to the Z′.
Thus natural flavor conservation [9] is necessarily violated in the 331 model. Because the first two families are generation symmetric, we may make the convenient choice of letting D and S be simultaneous gauge and mass eigenstates. This replaces the standard choice of using up-type quarks in this fashion, which is no longer possible in this case. As a result, the charged currents in the quark sector and the Z′ FCNC interaction may be written in matrix notation, with D = (D, S)ᵀ. If we had not initially picked D to be generation diagonal, we could simply have absorbed the unitary matrix W_L into a redefinition of U_L and V_L. Unlike the SM, where only V_CKM is physical, there is additional freedom in the mixing present above [5]. Although we have introduced three matrices in (14), they are not independent but are related by V_CKM = U_L† V_L. Since flavor changing interactions involving down-type quarks have been studied the most extensively, we find it convenient to specify the two unitary matrices V_CKM and V_L. As usual, V_CKM contains three angles and one complex phase. V_L is specified by three angles and three phases, since we may remove three phases from the general unitary matrix by appropriately transforming the three new quarks. In the absence of CP violating phases, the three angles of V_L have a simple interpretation. We may use a CKM-like parametrization (rows separated by semicolons)

V_L = ( c₁₂c₁₃, s₁₂c₁₃, s₁₃ ; −s₁₂c₂₃ − c₁₂s₂₃s₁₃, c₁₂c₂₃ − s₁₂s₂₃s₁₃, s₂₃c₁₃ ; s₁₂s₂₃ − c₁₂c₂₃s₁₃, −c₁₂s₂₃ − s₁₂c₂₃s₁₃, c₂₃c₁₃ )

where s_ij = sin θ_ij and c_ij = cos θ_ij. Since the third row corresponds to the anti-triplet weak eigenstate, θ₁₃ and θ₂₃ specify which down-type quark is in the anti-triplet and, orthogonal to that, θ₁₂ specifies the mixing between the first two triplets (i.e. D and S). Previous examinations of the Z′ in the 331 model have concentrated on putting lower bounds on M_Z₂ [1,5] to prevent excessive tree-level FCNC. The drawback of this approach is that the new mixing specified by V_L is in principle unknown and has to be estimated. Here, we instead use the upper limit M_Z₂ < 2.2 TeV [7] to place restrictions on V_L. The strongest constraints on tree-level Z′ FCNC come from neutral meson mixing. For the neutral kaon system, the tree-level ∆S = 2 interaction is obtained from (8) and (10), with s = sin θ_W and c = cos θ_W. In addition to the SM box diagram and possible long distance effects, this contributes a term to the K⁰-K̄⁰ mass difference [10]. We have included the leading order QCD corrections through the parameters η_Z₂ ≈ 0.55 and η_Z₁ ≈ 0.61 [11]. B_K and f_K are the bag parameter and decay constant of the kaon. Similar equations hold for D⁰-D̄⁰ and B⁰-B̄⁰ mixing. Because the Z-Z′ mixing angle φ is very small, the first term in the parentheses dominates and sin²φ may safely be neglected. The present limits on neutral meson mixing are given in [12]; for K⁰-K̄⁰, ∆m = (3.522 ± 0.016) × 10⁻¹² MeV. Although there is considerable uncertainty in the heavy meson decay constants, this has little effect on the results. The kaon quantity comes from f_K = 161 MeV and B_K = 0.7 ± 0.2. The heavy meson decay constants are taken from a lattice calculation, Ref. [13], where all reported errors are added in quadrature and B_D = B_B = 1. Because there are various sources that may contribute to the mass difference ∆m, it is impossible to disentangle the tree-level Z′ contribution from other effects. However, barring any unexpected cancellations, it is reasonable to expect that Z′ exchange contributes a ∆m no larger than the observed values.
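Before turning these expectations into limits, note that the CKM-like parametrization of V_L introduced above is easy to instantiate numerically. In the sketch below the angle values are hypothetical, chosen only to illustrate the kind of third row preferred by the constraints derived next; nothing here is a fit to data.

```python
import numpy as np

def ckm_like(theta12, theta13, theta23):
    """Real (CP-conserving) CKM-like parametrization of V_L from the
    three mixing angles; the third row gives (v_3d, v_3s, v_3b)."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    return np.array([
        [c12 * c13,                    s12 * c13,                   s13],
        [-s12 * c23 - c12 * s23 * s13, c12 * c23 - s12 * s23 * s13, s23 * c13],
        [s12 * s23 - c12 * c23 * s13, -c12 * s23 - s12 * c23 * s13, c23 * c13],
    ])

# Hypothetical small angles: theta_13 and theta_23 nearly vanish, so the
# anti-triplet is almost entirely the third down-type mass eigenstate.
V_L = ckm_like(theta12=0.3, theta13=0.005, theta23=0.03)
assert np.allclose(V_L @ V_L.T, np.eye(3))  # unitarity (orthogonality) check
v3 = V_L[2]
print("(v_3d, v_3s, v_3b) =", np.round(v3, 4))  # |v_3b| ~ 1 as required
```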
Using the upper limit M_Z₂ < 2.2 TeV, we find constraints (20) on the new mixing elements from the K⁰, D⁰ and B⁰ systems, respectively (at 90% C.L.). Here u₃ᵢ are the components of the third row of U_L, the rotation matrix in the up-quark sector, and are given by u₃ᵢ = v₃ⱼ V*_CKM,ij (summation over j implied). It should now be apparent why we have chosen to parametrize the new mixing by V_L. In order to restrict these cases further, we must relate u₃ᵢ to v₃ᵢ and take the limit on D⁰ mixing into account. This requires knowledge of V_CKM and possible new CP violating phases as well. We find that in order to satisfy all three conditions of (20) simultaneously, only |v₃d| ≈ 0 is allowed. Restricted to the first quadrant, the resulting limits on θ_ij (21) mean that |v₃b| ≈ 1, and hence that the third family must be the anti-triplet (up to small mixing). There has been some confusion over the issue of whether the first family or the third family must be treated differently in order to sufficiently suppress the Z′ FCNC [1,3,5]. Obviously, in terms of weak eigenstates, it makes no difference which family is assigned to the anti-triplet. In terms of mass eigenstates, the anti-triplet has been unitarily transformed into some combination of all three families. However, physically, the almost-diagonal CKM matrix tells us that it makes sense to group mass eigenstates into families. It is in this manner that we may say the third family must be the one that is different. The reason this choice is forced on us is that the Cabibbo angle, sin θ_C ≈ 0.22, is the largest off-diagonal element of V_CKM, and hence the ∆S = 2 and ∆C = 2 FCNC limits cannot be simultaneously satisfied unless the anti-triplet is in the third family. When B⁰_s mixing is measured, it will put further, stronger restrictions on θ₂₃. In the SM, ∆m_{B_d}/∆m_{B_s} ∼ |V_CKM,td/V_CKM,ts|², so B⁰_s mixing is expected to be large. Although this box diagram contribution is still present in the 331 case, if we assume that the tree-level process dominates, we find instead ∆m_{B_d}/∆m_{B_s} ∼ |v₃d/v₃s|² = |tan θ₁₃/sin θ₂₃|². Depending on the new mixing angles, the Z′ contribution to B⁰_s mixing may be large or small. Even if this mixing turns out to be unexpectedly small, it will not rule out the 331 model. Because of the additional freedom present in V_L, there is a possibility that tree-level Z′ exchange has the opposite phase to the SM box diagram, and hence would suppress the large SM contribution to ∆m_{B_s}. This intriguing possibility of small B⁰_s mixing would present clear evidence of physics beyond the SM, including possible support for the 331 model. Tree-level Z′ exchange also contributes to ∆S = 1 FCNC processes such as K → πνν̄; the resulting rate is given in Eq. (22). Since BR(K⁺ → π⁺νν̄) < 1.7 × 10⁻⁸ [14], we use the upper bound on M_Z₂ to find |v*₃s v₃d| < 0.18, which is a weaker limit than that from K⁰-K̄⁰ mixing, Eq. (20). Similar considerations hold for the rare decay K⁰_L → µ⁺µ⁻. However, it is theoretically harder to treat because of long-distance contributions. The reason such semi-leptonic decays do not give strong mixing constraints is that the Z′ is only weakly coupled to the leptons. While the above processes occur at tree level via Z′ exchange, the rare decay b → sγ must still proceed at one loop. In the 331 model, in addition to the SM W penguin, this may occur via Z′ and Y penguins. Although the SM contribution is GIM suppressed, this is no longer the case for both 331 contributions. One might worry that this would lead to too large a rate for b → sγ.
However, the non-GIM-suppressed contributions are proportional to new mixing given by $v^{*}_{3b} v_{3s}$, which may be sufficiently small to prevent conflict with experiment [15]. This is currently under investigation [16].

In conclusion, FCNC occur at tree level in the 331 model because of the $Z'$, which couples differently to triplets and anti-triplets. In order to describe the flavor-changing $Z'$ interaction, we need to understand family mixing in the quark sector, which is complicated by the presence of the new quarks. In addition to the ordinary CKM matrix, three more angles and three new phases are required to describe the mixing between ordinary and new quarks. Although we have not focused on the three new CP-violating phases, they may lead to striking predictions beyond the SM and deserve further investigation. We find that the only way to satisfy the experimental constraints on FCNC is to make the third family transform differently from the other two (up to small mixing). The reason for singling out the third family is that it has the smallest couplings to the other two families: the Cabibbo-angle mixing is sufficiently large that it forces the first two families to be treated identically. Because of the almost diagonal family structure, it makes physical sense to group either weak or mass eigenstate quarks into corresponding families. This is why it is convenient to think of the third family as unique, even in terms of weak eigenstates [2,3], although technically it makes no difference.

Going back to the quark Yukawa couplings, (13), we note that since the Higgs couplings to the third family are different, FCNH will occur in the scalar sector. However, the $Z'$ FCNC constraint, (21), will simultaneously suppress FCNH by restricting the third family to be almost diagonal. Thus the SM Yukawa interactions are similar to those of the two-Higgs-doublet model II, with the exception that $t$ and $b$ get their masses from the opposite Higgs doublet to the first two families. Because of the unique feature that there is an upper bound on the unification scale, the 331 model is highly predictive. It is remarkable that in this model there is just enough freedom to eliminate large FCNC, and the result of this is to constrain the third family to be the one that is different. In turn, this may give us some indication of why the top quark is so heavy and may present a new approach to the question of fermion mass generation.
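To make the family-assignment argument above concrete, here is a rough numerical sketch (not the paper's computation): it takes a trial third row $v_3$ of $V_L$, forms $u_{3i} = \sum_j v_{3j} V^{*}_{\mathrm{CKM}\,ij}$, and compares the flavor-changing products relevant to $K^0$, $D^0$ and $B^0$ mixing against hypothetical stand-ins for the bounds of Eq. (20). Both the CKM magnitudes and the bound values are placeholder numbers chosen only for illustration.

```python
import numpy as np

# Approximate CKM magnitudes (illustrative central values; phases ignored).
V_CKM = np.array([[0.974, 0.225, 0.004],
                  [0.225, 0.973, 0.041],
                  [0.009, 0.040, 0.999]])

# Hypothetical 90% C.L. bounds on the mixing products, standing in for Eq. (20).
BOUNDS = {"K0 (|v3d* v3s|)": 1e-4,
          "D0 (|u3u* u3c|)": 1e-3,
          "B0 (|v3d* v3b|)": 1e-3}

def check_v3(v3):
    """Check a trial third row v3 = (v_3d, v_3s, v_3b) of V_L against the bounds."""
    v3 = np.asarray(v3, dtype=complex)
    u3 = np.conj(V_CKM) @ v3                     # u_{3i} = sum_j V*_{CKM,ij} v_{3j}
    products = [abs(np.conj(v3[0]) * v3[1]),     # kaon system
                abs(np.conj(u3[0]) * u3[1]),     # D system
                abs(np.conj(v3[0]) * v3[2])]     # B_d system
    for (label, bound), p in zip(BOUNDS.items(), products):
        verdict = "ok" if p < bound else "excluded"
        print(f"{label}: {p:.2e}  ({verdict} vs {bound:.0e})")

check_v3([0.0, 0.0, 1.0])  # anti-triplet = third family: all products vanish
check_v3([1.0, 0.0, 0.0])  # anti-triplet = first family: D0 product ~ sin(theta_C)
```

The second call illustrates the Cabibbo-angle point made in the text: assigning the anti-triplet to the first family leaves a $\Delta C = 2$ product of order $\sin\theta_C \approx 0.22$, far above any plausible bound, while the third-family assignment satisfies all three constraints trivially.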
Genetic structure of rainbow trout Oncorhynchus mykiss (Salmoniformes, Salmonidae) from aquaculture by DNA-markers

Bielikova, O. Y., Mariutsa, A. E., Mruk, A. I., Tarasjuk, S. I., & Romanenko, V. M. (2021). Genetic structure of rainbow trout Oncorhynchus mykiss (Salmoniformes, Salmonidae) from aquaculture by DNA-markers. Biosystems Diversity, 29(1), 28–32. doi:10.15421/012104

Introduction

Modern methods to control the genetic diversity of local stocks play an important role in increasing efficiency and accelerating breeding work in aquaculture (Chiu et al., 2012). Breeding programs in combination with molecular biology techniques can optimize the use of aquatic genetic resources in aquaculture (Saad et al., 2012). Molecular genetic markers are effective population-genetic tools for resolving such issues as mechanisms of adaptation of fish species to environmental conditions, protection of biodiversity, assessment of inbreeding effects and stock identification (Olagunju, 2019). Knowledge of the specifics of the formation of genetic structure will create a platform for obtaining local groups of individuals with the desired economically valuable characteristics. The primary task for the development of such programs is the study of polymorphism at the intraspecific level.

ISSR-PCR (inter-simple sequence repeats) is one of the most convenient and cheapest tools of molecular genetic analysis for solving this problem. These dominant ISSR-markers allow polylocus genotyping of individuals to be carried out using a single microsatellite locus (Egorova et al., 2018). Multilocus intermicrosatellite analysis (ISSR-PCR) makes it possible to study genetic biodiversity and identify species-specific features, which can be used to create a "gene pool standard" of a breed based on the ISSR-fingerprint (Stolpovskii et al., 2010; Labastida et al., 2015; Komarova et al., 2018). Genetic certification is becoming an integral part of modern breeding standards and undoubtedly facilitates combating falsifications. ISSR markers are widely used to study various fish species: rainbow trout (Melnikova et al., 2010; Perfilyeva et al., 2018), sterlet (Komarova et al., 2018), tilapia (Saad et al., 2012), and cyprinids (Mariutsa et al., 2016). A number of works are devoted to the study of the genetic profile of marine (Yusufzai et al., 2016) and, to a greater extent, exotic fish species, for example, the family Osphronemidae (Abu-Almaaty et al., 2017), Pangasius species (Ly & Yen, 2019), and parrotfish (Saad et al., 2013). In Ukraine, population genetic analysis using intermicrosatellite loci has already been performed on sturgeon (Dubin, 2012) and cyprinids (Nahorniuk et al., 2013; Hrytsyniak et al., 2015; Mariutsa et al., 2016). However, the rainbow trout cultured in Ukraine has not yet been studied using ISSR markers.

On the other hand, investigations of genetic variability are also conducted using microsatellite markers (SSR, simple sequence repeats). They are characterized by wide distribution in the genome, show large allelic polymorphism among individuals and are closely connected with genes of known function. SSR-markers have proven essential in studies of the genetic structure of populations for providing management of fish stocks (Olagunju, 2019).
Analysis of literature sources in recent years shows that microsatellites were predominantly used in genetic studies of rainbow trout in different countries (Barat et al., 2015; Abadía-Cardoso et al., 2016; Faccenda et al., 2018) to assess the genetic variability of stocks, evaluate their relationships and reconstruct the admixture history. Abadía-Cardoso et al. (2016) showed the possibility of distinguishing natural populations of Oncorhynchus mykiss Walbaum, 1792 from individuals of farmed stocks and assessing the impact of stocking on rainbow trout from natural water sources. Moreover, microsatellites, along with SNP-markers (single nucleotide polymorphism markers), are currently being actively used for the analysis of quantitative trait loci (QTL). This allows determination of the breeding value of individuals, prediction of their productivity at an early age, and determination of the efficiency of selection and the response to the intensity of selection (Olagunju, 2019). Therefore, the purpose of our research was to study the genetic profile of rainbow trout in aquaculture of Ukraine using SSR- and ISSR-markers.

Material and methods

The selection of fish for the study was carried out taking into account the provisions recommended by the European Convention for the Protection of Vertebrate Animals used for Research and other Scientific Purposes (Strasbourg, 1986). The rainbow trout of the Chernivtsi local stock (Berehomet, Chernivtsi region) (Fig. 1) was selected as the object for the study of the genetic structure by ISSR- and SSR-markers. Fin clips were collected from the age-3+ group (n = 21) and stored in 96% ethanol at a temperature of 4 °C until DNA isolation. DNA was isolated using a DNA-Go commercial kit (BioLabTech LTD). A biophotometer (Eppendorf, Germany) was used to assess the quantity and quality of the isolated DNA. The Polymorphism Information Content (PIC) was assessed using methods generally accepted for codominant markers (Nagy et al., 2012) and the GDdom for ISSR-markers (Abuzayed et al., 2016). The following parameters were used to determine the information content of the ISSR primers: effective multiplex ratio (EMR), marker index (MI) and resolving power (Rp), which were calculated using the methods of Prevost & Wilkinson (1999).

Results

A total of 85 amplicons were obtained by genotyping rainbow trout with the use of five ISSR markers, and 92.9% of the amplicons were polymorphic (Table 2). The molecular weight of the amplified fragments ranged from 170 to 1900 bp (Fig. 2). Amplicon sizes and their frequencies show the "gene pool profile" of rainbow trout cultivated in aquaculture of Ukraine by ISSR-markers. The total number of amplicons per locus (NTB) ranged from 10 (markers D and B) to 23 (marker C). For three of the five studied loci, 6 conservative, or so-called monomorphic, bands were identified, numbering from 1 to 3 per locus. For marker A, these were amplicons with molecular weights of 770 and 520 bp; for marker B, 345, 295 and 260 bp; and for marker E, 350 bp. The mean number of alleles per locus and the effective number of alleles were 1.92 ± 0.04 and 1.45 ± 0.02, respectively. The average value of the Shannon index was 0.43 ± 0.02 (Fig. 3), and the unbiased expected heterozygosity was 0.30 ± 0.01. The average number of polymorphic bands per locus was 15.8 ± 2.6. A high percentage of polymorphic bands was observed for the selected intermicrosatellite markers (average PPB = 92.2%).
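For readers who wish to reproduce descriptive statistics of this kind, the sketch below computes PPB, the effective number of alleles, the Shannon index and the unbiased expected heterozygosity from a 0/1 band-presence matrix. It follows the usual convention for dominant markers of estimating the null-allele frequency from the band-absent phenotype under Hardy-Weinberg assumptions; the input matrix is random toy data, not the study's genotypes.

```python
import numpy as np

def dominant_marker_stats(bands):
    """bands: (n_individuals, n_bands) 0/1 matrix of ISSR band presence/absence.

    For each band, the null-allele frequency is estimated as
    q = sqrt(frequency of the band-absent phenotype), assuming HWE,
    the standard treatment of dominant-marker data.
    """
    bands = np.asarray(bands, dtype=float)
    n = bands.shape[0]
    freq_absent = 1.0 - bands.mean(axis=0)
    q = np.sqrt(freq_absent)                      # null (band-absent) allele freq.
    p = 1.0 - q                                   # visible (band-present) allele freq.
    ne = 1.0 / (p**2 + q**2)                      # effective number of alleles
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = (np.where(p > 0, p * np.log(p), 0.0)
                 + np.where(q > 0, q * np.log(q), 0.0))
    shannon = -terms                              # Shannon information index I
    uhe = (2.0 * n / (2.0 * n - 1.0)) * 2.0 * p * q   # unbiased expected heterozygosity
    band_freq = bands.mean(axis=0)
    ppb = 100.0 * np.mean((band_freq > 0.0) & (band_freq < 1.0))  # % polymorphic bands
    return {"Ne": ne.mean(), "I": shannon.mean(), "uHe": uhe.mean(), "PPB": ppb}

rng = np.random.default_rng(0)
toy = (rng.random((21, 85)) < rng.uniform(0.1, 0.9, size=85)).astype(int)  # 21 fish, 85 bands
print(dominant_marker_stats(toy))
```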
The highest percentage of polymorphic fragments was observed using the markers (AGC)₆G and (ACC)₆G (100.0%), and the lowest using (GAG)₆C (76.9%). The determined parameters of genetic diversity of the local stock of rainbow trout using ISSR-markers varied over narrow ranges, e.g., Na ranged from 1.8 to 2.0 and Ne from 1.39 to 1.52. Fluctuations of the Shannon index were observed in the range 0.38–0.48, and unbiased expected heterozygosity uHe varied from 0.25 to 0.32. This suggests that each ISSR-marker can be used separately to quickly monitor the genetic diversity of local stocks.

Discussion

The genetic profile of rainbow trout cultivated in aquaculture of Ukraine was determined using five ISSR markers and six microsatellite (SSR) markers. The efficiency of the selected ISSR-primers, which consisted of trinucleotide microsatellite motifs with a single anchor nucleotide at the 3'-end, was tested. Reddy et al. (2002) emphasized that it is better to use primers with an anchor region (3'- or 5'-anchored), since annealing in this case extends only to the ends of the microsatellites in the DNA template, preventing the formation of smears instead of clear amplicons, which is due to slippage along the complementary microsatellite region during PCR. Taking into account that PIC ≤ 0.5 for dominant markers (Chesnokov & Artemyeva, 2015), it can be concluded that the studied ISSR-markers had values of the polymorphism information content above average. The highest PPB, EMR and MI rates were recorded for the markers (AGC)₆G and (AGC)₆C. Based on the value of the resolving power, it was found that the (AGC)₆C marker shows the highest ability to identify differences between a large number of genotypes. As for the effectiveness of the microsatellite markers used, we previously demonstrated that the polymorphism information content exceeded the level of 0.5 (the average PIC was 0.70 ± 0.03), which for codominant markers indicates a high level for this parameter. Barat et al. (2015) in their study used other markers of the OMM group, which were also originally developed by Rexroad et al. (2002). The markers indicated high allelic variability and showed their suitability for analyzing the genetic structure of rainbow trout stocks in India.

The specificity of the allelic profile determined by each of the ISSR-markers (Fig. 2) was established, which is a consequence of the specificity of the genotypes of the rainbow trout of the local stock in aquaculture of Ukraine and of the nucleotide sequence of the primer. Studying the distribution of the lengths of amplification products in individuals from different stocks will allow monitoring of the differentiation and consolidation of the genetic structure of rainbow trout on individual farms and assessment of the genealogical relationships among them. In further work, differences between the profiles of various natural and artificial local stocks in specific amplicons obtained in ISSR analysis will allow differentiation of rainbow trout populations. For example, as shown by Komarova et al. (2018), it was possible to effectively distinguish sterlet populations by the presence of specific alleles, since only fish from the Vyatka River (middle course) had 780 bp amplicons for the primer (ACC)₆G. Studies carried out on rainbow trout using intermicrosatellite loci (Melnikova et al., 2010; Komarova et al., 2018; Perfilyeva et al., 2018) indicate that ISSR-markers are promising for intraspecific differentiation of populations.
As noted by Sulimova et al. (2011), the information content and convenience of analysis using the ISSR-PCR method have increased due to the development of software for statistical processing of results, such as, for example, Structure. Stolpovskii et al. (2010) used ISSR loci to analyze the genetic structure and determine the so-called "gene pool profile" and "gene pool standard". It is possible to determine the correspondence of the genetic structure of individuals to the gene pool standard according to the major bands and the amplicon frequencies that occur with high frequency (more than 40.0%), as well as by further cluster analysis. Studies in this field, as shown in some works (Sulimova et al., 2011; Komarova et al., 2018), allow ISSR-markers to be used to create genetic passports of breeds or intrabreed types. Therefore, at the present stage, intermicrosatellite analysis is highly informative, universal and indicative for the study of biodiversity and the identification of differences in animal populations.

At the same time, Faccenda et al. (2017) showed that the results of microsatellite analysis were essential to understanding the state of the genetic resources of each individual stock, since, in addition to significant genetic variation within populations, there is also significant subdivision reflected at the inter-population level. Faccenda et al. (2017) concluded that it was possible to give recommendations on the rational management of local stocks for breeding purebred broodstocks based on the data of the SSR-analysis. Faccenda et al. (2017) also showed that OMM markers (including the OMM 1088 marker used in our work) were promising for effective work with local stocks for their differentiation. We conclude that ISSR markers are convenient for interspecies identification and the creation of genetic passports, since they are universal in this respect and are applicable to different animal species. Microsatellite markers are more convenient for identifying intraspecific polymorphism of rainbow trout and can be used as routine tools for solving the goals of fish farming at the current level of development of molecular genetic methods.

Conclusion

The information content of ISSR primers for studying the gene pool of rainbow trout and monitoring its state was determined. The genetic profile of rainbow trout by ISSR- and SSR-markers has been obtained, which will allow intra- and interspecific identification to be carried out and gene pool standards of breeds, to be developed and approved in Ukraine, to be introduced in the future. The obtained results indicate that the DNA markers used may be useful for monitoring the genetic diversity and inbreeding rate of local stocks of rainbow trout in aquaculture.

The study was supported by the Fund of Fundamental and Applied Research of the National Academy of Agrarian Sciences of Ukraine on the subjects 37.00.01.05 F "To study the mechanisms of adaptation of certain valuable fish species using methods of population genetics" (DR No 0116U001227) and 33.00.00.19 Р "To investigate the genetic variability of rare and endangered species of salmon and sturgeon fish in the water bodies of the Carpathian region and to develop methods for their reproduction" (DR No 0119U100578) in 2016–2020. We are also grateful to the trout farm for the materials provided.
Local newforms and formal exterior square L-functions

Let F be a non-archimedean local field of characteristic zero. Jacquet and Shalika attached a family of zeta integrals to unitary irreducible generic representations $\pi$ of GL_n(F). In this paper, we show that the Jacquet-Shalika integral attains a certain L-function, the so-called formal exterior square L-function, when the Whittaker function is associated to a newform for $\pi$. By consideration on the Galois side, formal exterior square L-functions are equal to exterior square L-functions for some principal series representations.

Introduction

Let $F$ be a non-archimedean local field of characteristic zero and $\mathfrak{o}$ its ring of integers with the maximal ideal $\mathfrak{p}$. Let $\pi$ be an irreducible admissible representation of $\mathrm{GL}_n(F)$. Via the local Langlands correspondence, there exists a Weil-Deligne representation $\rho$ of the Weil group $W_F$ associated to $\pi$. The exterior square L-function of $\pi$ is defined by $L(s, \pi, \wedge^2) = L(s, \wedge^2 \rho)$, where $L(s, \wedge^2 \rho)$ is the L-factor of the representation $\wedge^2 \rho$ of $W_F$.

We suppose that $\pi$ is unitary and generic. We denote by $\mathcal{W}(\pi, \psi)$ the Whittaker model of $\pi$, and by $C_c^{\infty}(F^m)$ the space of Schwartz functions on $F^m$. To give an integral representation of $L(s, \pi, \wedge^2)$, Jacquet and Shalika in [11] introduced a family of zeta integrals of the form $J(s, W, \Phi)$ for $n$ even, and $J(s, W)$ for $n$ odd, where $W \in \mathcal{W}(\pi, \psi)$ and $\Phi \in C_c^{\infty}(F^{n/2})$. In loc. cit., they showed that the integral $J(s, W, \Phi)$ attains $L(s, \pi, \wedge^2)$ when $\pi$ is unramified and $W$ is spherical. The key to the unramified computation is the explicit formula for spherical Whittaker functions given by Casselman-Shalika [3] and Shintani [18]. It is natural to ask about ramified representations. Jacquet, Piatetski-Shapiro and Shalika introduced the concept of newforms for generic representations of $\mathrm{GL}_n(F)$ in [10], which is an extension of that of spherical vectors for unramified representations. Recently, Matringe [16] and the first author [17] independently gave an explicit formula for Whittaker functions associated to newforms on the diagonal torus. We apply this formula to compute the integral $J(s, W, \Phi)$ when $W$ is associated to a newform.

To state our results, we introduce the notion of formal exterior square L-functions. For an irreducible admissible representation $\pi$ of $\mathrm{GL}_n(F)$, its standard L-function can be written as $L(s, \pi) = \prod_{i=1}^{n} (1 - \alpha_i q^{-s})^{-1}$, where $q$ denotes the cardinality of the residue field of $F$. We define the formal exterior square L-function of $\pi$ by

$L^{\flat}(s, \pi, \wedge^2) = \prod_{1 \le i < j \le n} (1 - \alpha_i \alpha_j q^{-s})^{-1}.$

It is known that $L^{\flat}(s, \pi, \wedge^2)$ is equal to $L(s, \pi, \wedge^2)$ for unramified principal series representations, and one may check that $L^{\flat}(s, \pi, \wedge^2)$ divides $L(s, \pi, \wedge^2)$ in general (Theorem 6.2 (ii)). In this paper, we shall show the following:

Theorem 1.1. Let $\pi$ be a unitary irreducible generic representation of $\mathrm{GL}_n(F)$. Suppose that a function $W$ in $\mathcal{W}(\pi, \psi)$ is associated to a newform for $\pi$. Then the integral $J(s, W, \Phi_c)$ ($J(s, W)$ if $n$ is odd) is a constant multiple of $L^{\flat}(s, \pi, \wedge^2)$, where $c$ is the conductor of $\pi$ and $\Phi_c$ is the characteristic function of $\mathfrak{p}^c \oplus \cdots \oplus \mathfrak{p}^c \oplus (1 + \mathfrak{p}^c) \subset F^{n/2}$.

Theorem 1.1 has several applications. We summarize them, comparing with recent progress on this topic. The Jacquet-Shalika integrals attached to a unitary irreducible generic representation $\pi$ span a fractional ideal $I_{\pi}$ of $\mathbb{C}[q^{-s}, q^{s}]$. It is an important fact that $I_{\pi}$ contains 1.
Due to this, we may define Jacquet-Shalika's exterior square L-function $L^{JS}(s, \pi, \wedge^2)$ to be the normalized generator of $I_{\pi}$. Kewat and Raghunathan in [13] have already mentioned that this is implicitly proved by Belt in [1]. Theorem 1.1 gives an alternative (and brief) proof, because it implies that $L^{\flat}(s, \pi, \wedge^2)$ is contained in $I_{\pi}$. Additionally, Theorem 1.1 says that $L^{\flat}(s, \pi, \wedge^2)$ divides $L^{JS}(s, \pi, \wedge^2)$. Thus the poles of $L^{\flat}(s, \pi, \wedge^2)$ are also poles of $L^{JS}(s, \pi, \wedge^2)$ (Theorem 3.6). Recently, Kewat and Raghunathan in [13] showed the coincidence of $L^{JS}(s, \pi, \wedge^2)$ and $L(s, \pi, \wedge^2)$ for all the essentially square-integrable representations of $\mathrm{GL}_n(F)$, and for all the generic representations when $n$ is even. Although Theorem 3.6 is obvious for such representations via arguments on the Galois side (see Theorem 6.2 (ii)), it provides evidence for the equality of $L^{JS}(s, \pi, \wedge^2)$ and $L(s, \pi, \wedge^2)$ in the odd case.

It is still an open problem to find Whittaker functions which attain exterior square L-functions through the Jacquet-Shalika integral. We give an example of some principal series representations $\pi$ for which $L^{\flat}(s, \pi, \wedge^2)$ equals $L(s, \pi, \wedge^2)$ (Proposition 6.4). Therefore Whittaker newforms attain $L(s, \pi, \wedge^2)$ for such representations.

On the other hand, there is another kind of zeta integral related to exterior square L-functions, introduced by Bump and Friedberg [2]. For an irreducible generic representation $\pi$ of $\mathrm{GL}_n(F)$, the Bump-Friedberg integral has the form $Z(s_1, s_2, W, \Phi)$, where $W \in \mathcal{W}(\pi, \psi)$ and $\Phi \in C_c^{\infty}(F^{\lfloor (n+1)/2 \rfloor})$. For the Bump-Friedberg integral, we obtain the following

Theorem 1.2. Let $\pi$ be an irreducible generic representation of $\mathrm{GL}_n(F)$. Suppose that a function $W$ in $\mathcal{W}(\pi, \psi)$ is associated to a newform for $\pi$. Then the integral $Z(s_1, s_2, W, \Phi_c)$ is a constant multiple of $L(s_1, \pi) L^{\flat}(s_2, \pi, \wedge^2)$, where $c$ is the conductor of $\pi$ and $\Phi_c$ is the characteristic function of $\mathfrak{p}^c \oplus \cdots \oplus \mathfrak{p}^c \oplus (1 + \mathfrak{p}^c) \subset F^{\lfloor (n+1)/2 \rfloor}$.

This paper is organized as follows. In section 2, we define the formal exterior square L-functions and relate them with newforms. We show that Jacquet-Shalika integrals attain the formal exterior square L-functions when Whittaker functions are associated to newforms, for the even case in section 3 and for the odd case in section 4. We consider the Bump-Friedberg integral in section 5. The meaning of the formal exterior square L-functions on the Galois side is given in section 6.

Preliminaries

In this section, after fixing notation, we define formal exterior square L-functions for irreducible generic representations of GL(n) and relate them with newforms.

2.1. Notation. Let $F$ be a non-archimedean local field of characteristic zero, $\mathfrak{o}$ its ring of integers, $\mathfrak{p}$ the maximal ideal in $\mathfrak{o}$, and $\varpi$ a generator of $\mathfrak{p}$. Let $\nu$ denote the valuation on $F$ normalized so that $\nu(\varpi) = 1$. We write $|\cdot|$ for the absolute value of $F$ normalized so that $|\varpi| = q^{-1}$, where $q$ stands for the cardinality of the residue field $\mathfrak{o}/\mathfrak{p}$ of $F$. Throughout this paper, we fix a non-trivial additive character $\psi$ of $F$ whose conductor is $\mathfrak{o}$, that is, $\psi$ is trivial on $\mathfrak{o}$ and non-trivial on $\mathfrak{p}^{-1}$. We set $G_n = \mathrm{GL}_n(F)$. Let $B_n$ denote the Borel subgroup of $G_n$ consisting of the upper triangular matrices, $T_n$ the diagonal torus in $G_n$ and $U_n$ the unipotent radical of $B_n$. We write $\delta_{B_n}$ for the modulus character of $B_n$. We define a subgroup $T_{n,1}$ of $T_n$ by $T_{n,1} = \{\mathrm{diag}(a_1, a_2, \ldots, a_{n-1}, 1) \mid a_1, \ldots, a_{n-1} \in F^{\times}\}$.
We use the same letter $\psi$ for the character of $U_n$ induced from $\psi$ by $\psi(u) = \psi(u_{1,2} + u_{2,3} + \cdots + u_{n-1,n})$, $u = (u_{i,j}) \in U_n$. For an irreducible generic representation $(\pi, V)$ of $G_n$, we denote by $\mathcal{W}(\pi, \psi)$ its Whittaker model with respect to $\psi$.

2.2. Formal exterior square L-functions. Let $\pi$ be an irreducible generic representation of $G_n$. We denote by $L(s, \pi)$ the L-factor of $\pi$ defined in [4]. Since the degree of $L(s, \pi)$ is equal to or less than $n$, we can write $L(s, \pi)$ as $L(s, \pi) = \prod_{i=1}^{n} (1 - \alpha_i q^{-s})^{-1}$. Here we allow the possibility that $\alpha_i = 0$. We define the formal exterior square L-function of $\pi$ by $L^{\flat}(s, \pi, \wedge^2) = \prod_{1 \le i < j \le n} (1 - \alpha_i \alpha_j q^{-s})^{-1}$. We say that $\pi$ is unramified if $\pi$ has a non-zero $\mathrm{GL}_n(\mathfrak{o})$-fixed vector. Suppose that $\pi$ is unramified. Then $L^{\flat}(s, \pi, \wedge^2)$ coincides with the exterior square L-function $L(s, \pi, \wedge^2)$ of $\pi$ defined through the local Langlands correspondence ([11]).

Proof. If $\pi$ is an irreducible, essentially square-integrable representation of $G_n$, then the degree of $L(s, \pi)$ is equal to or less than 1 (see [9]). The assertion follows immediately from this.

For any integer $r$, let $\Phi_r$ denote the characteristic function of $\mathfrak{p}^r \oplus \cdots \oplus \mathfrak{p}^r \oplus (1 + \mathfrak{p}^r) \subset F^n$. The following lemma determines the support of the function $g \in G_n \mapsto \Phi_r(e_n g)$, where $e_n = (0, 0, \ldots, 0, 1) \in F^n$.

Lemma 2.6. Suppose that $r$ is positive. Then we have

$\Phi_r(e_n g) = \begin{cases} 1, & \text{if } g \in U_n T_{n,1} K_{n,r}; \\ 0, & \text{otherwise.} \end{cases}$

Proof. Clearly, we have $\Phi_r(e_n g) = 1$ for $g \in U_n T_{n,1} K_{n,r}$. We shall prove the converse statement. By the Iwasawa decomposition $G_n = U_n T_n K_{n,0}$, we can write $g$ in $G_n$ as $g = utk$, where $u \in U_n$, $t \in T_n$, $k \in K_{n,0}$. Since the function $g \mapsto \Phi_r(e_n g)$ is left $U_n$-invariant, we may assume $g = tk$. We write $t = \mathrm{diag}(t_1, t_2, \ldots, t_n)$, $t_i \in F^{\times}$. Suppose that $\Phi_r(e_n tk) = 1$. Then we obtain $|t_n k_{ni}| \le q^{-r}$ for $1 \le i \le n - 1$, and $|t_n k_{nn}| = 1$. This implies $|k_{ni}| < |k_{nn}|$ for $1 \le i \le n - 1$. Since $k$ lies in $K_{n,0}$, we have $|k_{nj}| \le 1$ for all $1 \le j \le n$, and there exists at least one $j$ such that $|k_{nj}| = 1$. Thus we get $|k_{nn}| = 1$, and hence $|t_n| = 1$. So we may assume that $t$ lies in $T_{n,1}$. In this case, the equation $\Phi_r(e_n tk) = \Phi_r(e_n k) = 1$ precisely means that $k$ belongs to $K_{n,r}$. This completes the proof.

Let $\pi$ be an irreducible generic representation of $G_n$. We write $V(r)$ for the space of $K_{n,r}$-fixed vectors in $V$. Due to [10] (5.1) Théorème (ii), there exists a non-negative integer $r$ such that $V(r) \ne \{0\}$. We denote by $c(\pi)$ the smallest integer with this property. We call $c(\pi)$ the conductor of $\pi$, and $V(c(\pi))$ the space of newforms for $\pi$ [10]. For simplicity, we say that an element $W$ in $\mathcal{W}(\pi, \psi)$ is a newform if $W$ is the Whittaker function associated to a newform for $\pi$. It follows from [17] Theorem 4.1 and [18] that a newform $W$ in $\mathcal{W}(\pi, \psi)$ is determined by its value at $1 \in G_n$.

Proof. By using the central character of $\pi$, we may assume that $t_n = 1$. Hence the proposition follows from [17] Proposition 1.2.

We shall give an integral representation of formal exterior square L-functions. We normalize the Haar measures on $T_n$ and $T_{n,1}$ so that the volumes of $T_n \cap K_{n,0}$ and of $T_{n,1} \cap K_{n,0}$ are one, respectively. Note that if the conductor $c(\pi)$ of $\pi$ is positive, then the degree of $L(s, \pi)$ is less than $n$ ([9]).

Proposition 2.9. Let $\pi$ be an irreducible generic representation of $G_n$ whose conductor is positive and let $W$ be the newform in $\mathcal{W}(\pi, \psi)$ such that $W(1) = 1$. It follows from [17] Theorem 4.1 that (2.4) holds; thus, (2.4) implies the assertion. Part (ii) follows from (2.5) in a similar fashion.
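Before turning to the zeta integrals, note that in down-to-earth terms the inverse of the formal exterior square L-factor of section 2.2 is just a polynomial in $X = q^{-s}$ built from the pairwise products $\alpha_i \alpha_j$. The sketch below constructs it symbolically for made-up Satake-type parameters; it only illustrates the definition, not the zeta-integral computation.

```python
import itertools
import sympy as sp

def formal_ext_square_inverse(alphas):
    """Return the polynomial prod_{i<j} (1 - a_i a_j X), where X stands for q**(-s)."""
    X = sp.symbols("X")
    poly = sp.prod([1 - a * b * X for a, b in itertools.combinations(alphas, 2)])
    return sp.expand(poly)

# Example: n = 3 with invented parameters (an alpha_i may be 0 for ramified pi,
# in which case the corresponding factors simply drop out of the product).
alphas = [sp.Rational(1, 2), 2, 0]
print(formal_ext_square_inverse(alphas))   # 1 - X: only the pair (1/2)*2 survives
```

Divisibility of L-factors, as in Theorem 6.2 (ii), then amounts to divisibility of these inverse polynomials in $\mathbb{C}[X]$.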
Jacquet-Shalika integral: the even case

We shall prove that the Jacquet-Shalika integral attains the formal exterior square L-function when the Whittaker function is associated to a newform. In this section, we consider the case $n = 2m$. Let $\pi$ be a unitary irreducible generic representation of $G_n$. In [11], Jacquet and Shalika introduced a family of zeta integrals of the form $J(s, W, \Phi)$, with $W \in \mathcal{W}(\pi, \psi)$ and $\Phi \in C_c^{\infty}(F^m)$. We take the Haar measure on $V_m \backslash M_m$ so that the volume of $V_m \backslash (V_m + M_m(\mathfrak{o}))$ is one. Using the Iwasawa decomposition $G_m = U_m T_m K_{m,0}$, we can write an element $g$ of $G_m$ as $g = uak$, $u \in U_m$, $a \in T_m$, $k \in K_{m,0}$, and the Haar measure $dg$ on $U_m \backslash G_m$ decomposes accordingly. We normalize the Haar measures on $T_m$ and $K_{m,0}$ so that the volumes of $T_m \cap K_{m,c(\pi)}$ and of $K_{m,c(\pi)}$ are one, respectively. Then the following holds:

Theorem 3.1. Let $\pi$ be a unitary irreducible generic representation of $\mathrm{GL}_{2m}(F)$ and let $W$ be the newform in $\mathcal{W}(\pi, \psi)$ such that $W(1) = 1$. Then we have $J(s, W, \Phi_{c(\pi)}) = L^{\flat}(s, \pi, \wedge^2)$, where $c(\pi)$ is the conductor of $\pi$ and $\Phi_{c(\pi)}$ is the characteristic function of $\mathfrak{p}^{c(\pi)} \oplus \cdots \oplus \mathfrak{p}^{c(\pi)} \oplus (1 + \mathfrak{p}^{c(\pi)}) \subset F^m$.

Proof. If $c(\pi)$ is zero, then the theorem follows from Proposition 2 in [11] section 7. We suppose that $c = c(\pi)$ is positive. The proof is quite similar to that for unramified representations. By Lemma 2.6, the map $g \mapsto \Phi_c(e_m g)$ is the characteristic function of $U_m T_{m,1} K_{m,c}$. We note that if $k$ belongs to $K_{m,c}$, then $\mathrm{diag}(k, k)$ lies in $K_{n,c}$ and fixes $W$. Thus we obtain a chain of equalities in which the second equality holds because $\sigma$ belongs to $K_{n,c}$ and fixes $W$. Set $b = \sigma\,\mathrm{diag}(a, a)\,\sigma^{-1}$. Then we have $b = \mathrm{diag}(a_1, a_1, a_2, a_2, \ldots, a_{m-1}, a_{m-1}, 1, 1)$, where $a = \mathrm{diag}(a_1, a_2, \ldots, a_{m-1}, 1) \in T_{m,1}$. By the Iwasawa decomposition $G_n = U_n T_n K_{n,0}$, we can write the relevant element as $u_Z t_Z k_Z$ for $Z \in M_m$, where $u_Z \in U_n$, $t_Z \in T_n$ and $k_Z \in K_{n,0}$. Since the $n$-th row of $u_Z t_Z k_Z$ is $(0, 0, \ldots, 0, 1)$, we can take $t_Z$ and $k_Z$ so that $t_Z \in T_{n,1}$ and $k_Z$ has $n$-th row $(0, 0, \ldots, 0, 1)$. This implies that $k_Z$ lies in $K_{n,c}$. We write $b = \mathrm{diag}(b_1, \ldots, b_n)$ and $t_Z = \mathrm{diag}(t_1, \ldots, t_n)$. It follows from Proposition 2.8 that if $W(b t_Z) \ne 0$, then we have $|b_i t_i| \le |b_{i+1} t_{i+1}|$ for $1 \le i \le n - 1$. So we obtain $|t_i| \le |t_{i+1}|$ for $i$ odd. By Proposition 4 in [11] section 5, we have $|t_i| \ge 1$ for $i$ odd, and $|t_i| \le 1$ otherwise. Thus we get $|t_i| = 1$ for all $i$. Proposition 5 in [11] section 5 says that $Z$ lies in $V_m + M_m(\mathfrak{o})$. So we may take $u_Z = t_Z = 1$ whenever $W(b t_Z) \ne 0$. Now the assertion follows from Proposition 2.9 (i).

Let $I_{\pi}$ be the subspace of $\mathbb{C}(q^{-s})$ spanned by $J(s, W, \Phi)$, where $W \in \mathcal{W}(\pi, \psi)$ and $\Phi \in C_c^{\infty}(F^m)$. We shall give an alternative proof of a result of Belt.

Proposition 3.2 ([1] Theorem 2.2, [13]). With the notation as above, $I_{\pi}$ is a fractional ideal of $\mathbb{C}[q^{-s}, q^{s}]$ which contains 1.

Proof. By [12] p. 158, $I_{\pi}$ is a $\mathbb{C}[q^{-s}, q^{s}]$-module. Due to [1] Proposition 4.3, there exists a polynomial $Q(X) \in \mathbb{C}[X]$ such that $Q(q^{-s}) I_{\pi} \subset \mathbb{C}[q^{-s}, q^{s}]$. So $I_{\pi}$ is a fractional ideal of $\mathbb{C}[q^{-s}, q^{s}]$. By Theorem 3.1, $L^{\flat}(s, \pi, \wedge^2)$ is contained in $I_{\pi}$, and so is 1.

Proof. The proposition follows from Proposition 2.2 and Theorem 3.1.

Finally, we state a result on the poles of $L^{JS}(s, \pi, \wedge^2)$.

Theorem 3.6. Let $\pi$ be a unitary irreducible generic representation of $\mathrm{GL}_{2m}(F)$. (i) Suppose that $s_0 \in \mathbb{C}$ is a pole of $L(s, \pi)$ whose order is equal to or more than two. Then $2s_0$ is a pole of $L^{JS}(s, \pi, \wedge^2)$.
(ii) Suppose that $s_1$ and $s_2$ are two distinct poles of $L(s, \pi)$. Then $s_1 + s_2$ is a pole of $L^{JS}(s, \pi, \wedge^2)$.

Proof. By Theorem 3.1, the formal exterior square L-function $L^{\flat}(s, \pi, \wedge^2)$ is contained in the set $L^{JS}(s, \pi, \wedge^2)\,\mathbb{C}[q^{-s}, q^{s}]$. So the theorem follows from the definition of $L^{\flat}(s, \pi, \wedge^2)$.

Jacquet-Shalika integral: the odd case

In this section, we consider the case $n = 2m + 1$. Let $\pi$ be a unitary irreducible generic representation of $G_n$. The Jacquet-Shalika integral for $\pi$ has the form $J(s, W)$, $W \in \mathcal{W}(\pi, \psi)$, where $\sigma$ is a fixed permutation of degree $n = 2m + 1$. We normalize the Haar measures on $U_m \backslash G_m$ and $V_m \backslash M_m$ so that the volumes of $K_{m,0}$ and of $V_m \backslash (V_m + M_m(\mathfrak{o}))$ are one, respectively. Results similar to those of the even case hold, and we shall be brief here.

Proof. Along the lines of the proof of Theorem 3.1, the theorem follows from Proposition 2.9 (ii).

Let $I_{\pi}$ be the subspace of $\mathbb{C}(q^{-s})$ spanned by $J(s, W)$, $W \in \mathcal{W}(\pi, \psi)$. As in Proposition 3.2, the set $I_{\pi}$ is a fractional ideal of $\mathbb{C}[q^{-s}, q^{s}]$ which contains 1. Thus we can define Jacquet-Shalika's exterior square L-function by $L^{JS}(s, \pi, \wedge^2) = 1/P(q^{-s})$, where $P(X)$ is a polynomial in $\mathbb{C}[X]$ such that $P(0) = 1$ and $I_{\pi} = (1/P(q^{-s}))$. In the odd case, Schwartz functions are not involved in the Jacquet-Shalika integral, so we get the following

Proof. The proposition follows from the fact that $I_{\pi}$ contains 1.

We note that Theorem 3.6 holds in the odd case.

Bump-Friedberg integral

Bump and Friedberg introduced in [2] another kind of Rankin-Selberg type zeta integral related to exterior square L-functions. In this section, we treat Bump-Friedberg integrals. Set $m = \lfloor (n+1)/2 \rfloor$ and $m' = \lfloor n/2 \rfloor$. We define an embedding $J : G_m \times G_{m'} \to G_n$, by one formula for $n$ even and by another for $n$ odd. Let $\pi$ be an irreducible generic representation of $G_n$. For $W \in \mathcal{W}(\pi, \psi)$ and $\Phi \in C_c^{\infty}(F^m)$, we define the integral $Z(s_1, s_2, W, \Phi)$. We shall show that the Bump-Friedberg integral attains the formal exterior square L-function when the Whittaker function is associated to a newform.

The Galois side via the local Langlands correspondence

In the previous sections, we have defined $L^{\flat}(s, \pi, \wedge^2)$ and shown that it divides $L^{JS}(s, \pi, \wedge^2)$. In this section, we collect the corresponding facts on the Galois side via the local Langlands correspondence (LLC for short). Let $\Omega$ be an algebraically closed field of characteristic zero. Let $F$ be a finite extension of $\mathbb{Q}_p$ and $\mathbb{F}_q$ its residue field, of cardinality $q$. Define the inertia group $I_F$ by the exact sequence

$1 \longrightarrow I_F \longrightarrow \mathrm{Gal}(\bar{F}/F) \overset{\iota}{\longrightarrow} \mathrm{Gal}(\bar{\mathbb{F}}_q/\mathbb{F}_q) \longrightarrow 1.$

Take the geometric Frobenius element $\mathrm{Frob}_q \in \mathrm{Gal}(\bar{\mathbb{F}}_q/\mathbb{F}_q) \simeq \hat{\mathbb{Z}}$. Then the Weil group $W_F$ is defined as the inverse image under $\iota$ of the $\mathbb{Z}$-span $\mathrm{Frob}_q^{\mathbb{Z}}$. If we fix a lift $\Phi$ of $\mathrm{Frob}_q$, then $W_F$ can be written as $W_F = \coprod_{n \in \mathbb{Z}} \Phi^n I_F$. A Weil-Deligne representation of $W_F$ is a pair consisting of a smooth representation $r = (r, V)$ of $W_F$ on a finite-dimensional vector space $V$ over $\Omega$ and an endomorphism $N \in \mathrm{End}_{\Omega}(V)$ satisfying the following relation: if $g = \Phi^n \sigma$, $n \in \mathbb{Z}$, $\sigma \in I_F$, then $r(g) N r(g)^{-1} = q^{-n} N$.

Let $(r, N)$ be a Weil-Deligne representation of $W_F$. By the Jordan decomposition, $r(\Phi)$ can be written as the product of a semisimple matrix $T$ and a unipotent matrix $U$. Then for $g = \Phi^n \sigma$, $n \in \mathbb{Z}$, $\sigma \in I_F$, we define $r^{ss}(g) := T^n r(\sigma)$. We call $r^{ss}$ the $\Phi$-semisimplification of $r$. It is easy to see that the pair $(r^{ss}, N)$ forms a Weil-Deligne representation. We say that $r$ is $\Phi$-semisimple if $r^{ss} = r$.
For a Weil-Deligne representation $(r, N)$, we define its L-function by

$L(s, (r, N)) = \det\left(1 - q^{-s}\, r(\Phi)\big|_{V_N^{I_F}}\right)^{-1},$

where $V_N^{I_F}$ denotes the inertia-fixed vectors in the kernel of $N$. For each integer $n \ge 1$, denote by $\mathcal{G}_F(n)$ the set of isomorphism classes of $\Phi$-semisimple Weil-Deligne representations of dimension $n$ and by $\mathcal{A}_F(n)$ the set of isomorphism classes of smooth irreducible representations of $\mathrm{GL}_n(F)$. Then by [5], [7], there exists a canonical bijective correspondence

$\mathcal{G}_F(n) \overset{\mathrm{LLC}}{\longrightarrow} \mathcal{A}_F(n), \quad \rho \mapsto \pi(\rho),$

which preserves the L-functions and ε-factors of both sides (see [6], [8]).
Oral Toxicity Assessment and In vitro Antimicrobial Profile of Methanolic Leaf Extract of Alchornea cordifolia on Albino Rats

The ameliorative tendency of the leaves of Alchornea cordifolia has been reported against ailments ranging from conjunctivitis to yaws and certain parasitic infections. This necessitated investigating the in vitro antibacterial efficacy of methanol-extracted leaves of Alchornea cordifolia and its effects on hematological parameters and the histopathology of organs of toxicity in albino rats. The rats were randomly segregated into four groups of five animals per cage. The groups were orally administered 250, 500 and 750 mg/kg body weight of the extract, or a 10% Tween 80 control, for 28 days. Blood samples were collected for hematological analysis, and organs (liver and spleen) for histopathological analysis. The data obtained were analyzed by ANOVA and Dunnett's test at the P < 0.05 level of significance. The methanol leaf extract produced significantly larger inhibition zones in E. coli and K. pneumoniae, 35.00 ± 1.73 and 35.67 ± 3.48 mm respectively, at all the concentrations tested. There was no significant effect on hematological parameters. Liver necrosis was noticed in the harvested organs of the experimental rats: the liver sections of rats treated with 750 mg/kg of the leaf extract showed cloudy swelling of hepatocytes and mild Kupffer cell hyperplasia. These results suggest that Alchornea cordifolia is non-toxic but has the propensity to induce hepatic injury at high doses. In conclusion, the successful antibacterial activity at all concentrations and the only slight pathological effects could be indicative of the low toxicity and high efficacy of this plant if taken at lower doses.

Introduction

Alchornea cordifolia (Schumach. & Thonn.) (Euphorbiaceae) is an erect and bushy perennial shrub or small tree, up to 4 m high, reproducing from seeds. The stem is woody and greyish, with lightly granulated bark [1], and the plant is many-branched and bushy when young. It is distributed in secondary forest, usually near water and in moist or marshy places. Its common names are Christmas bush and dovewood. Alchornea cordifolia occurs widely in Africa from Senegal to Kenya and Tanzania, and throughout Central Africa to Angola. It is cultivated in the Democratic Republic of Congo for its medicinal use [2]. There are many convergences in its traditional use throughout tropical Africa, including medicinal and ethnobotanical values. The leaves, stem bark, stem pith, leafy stems, root bark, roots and fruits have been reported to be commonly used in local medicine [3]. The crude aqueous-methanolic extract of the leaves of A. cordifolia has been found to show anti-inflammatory activity [4]. It is also used for the treatment of wounds, ulcers, gum inflammation and conjunctivitis [3]. Despite the progress in human medicine, infectious diseases caused by bacteria, fungi, viruses and parasites are still a major threat to public health, particularly in developing countries, due to the relative unavailability of medicines and the emergence of widespread drug resistance [5]. In view of its abundance and reported efficacy, the present investigation was made to evaluate the antibacterial activity and toxicity of Alchornea cordifolia against human pathogenic bacteria.

Plant Materials and Authentication

The plant Alchornea cordifolia used for this research was collected from the Polytechnic Ibadan quarters, Ibadan, Nigeria. The plant was identified and authenticated by a plant taxonomist at the Forest Research Institute of Nigeria (FRIN).
The sample deposited in the herbarium was assigned voucher number 111248.

Extraction of Plant Materials

Healthy leaves of the plant were harvested and air-dried for 4 weeks to a constant weight in the laboratory. The dried sample was ground into powder using an electric grinder and stored in an airtight bottle. Using the maceration method, 350 g of the powdered sample was soaked in 1750 ml of methanol of analytical grade for 72 hours. The sample solution was filtered using muslin cloth and Whatman No. 1 filter paper. The filtrate was concentrated using a rotary evaporator at 45 °C. The extract was weighed, stored in a well-stoppered container and kept in a refrigerator at 4 °C until used.

Test Samples

The bacterial isolates selected for this investigation were Escherichia coli and Staphylococcus aureus. These isolates were obtained from the Microbiology Laboratory of the Department of Microbiology, University College Hospital (UCH), Ibadan, where they were kept as stock cultures. Each isolate was subcultured on nutrient agar for 24 hours and subjected to simple but specific confirmation tests before use in this study.

Experimental Animals

Forty albino rats (20 males: 20 females) weighing 120–130 g were procured from the animal facility of the Anatomy Department, University of Ibadan. The animals were acclimatized for two weeks under a 12-hour light/dark cycle at 28.0 ± 1.0 °C room temperature, with standard feed and water available, prior to commencement of the experiment. Animals were handled according to standard protocols for the use of laboratory animals and a protocol approved by the Animal Care and Use in Research Committee (ACUREC), University of Ibadan.

Antibacterial Susceptibility Screening

The antibacterial activity of the leaf extract was determined using the Kirby-Bauer disc diffusion method [7]. Bacterial cell suspensions were prepared in fresh normal saline and the turbidity of the resulting suspensions was adjusted to the 0.5 McFarland turbidity standard. A 1 ml inoculum of each selected organism was spread with a glass spreader on nutrient agar plates. Sterile discs (6 mm diameter) of Whatman No. 1 filter paper were impregnated with 20 µl of the extract solutions to achieve the desired concentrations of 50, 100 and 200 mg/ml on the disc (the resulting amount of extract per disc is sketched after this section), and placed separately on the inoculated agar plates. Ciprofloxacin and Ampiclox (30 µg/ml/disc) were used as positive controls and a disc impregnated with DMSO was used as the negative control. Each experiment was carried out in 3 replicates. The antibacterial assay plates were incubated at 37 °C for 24 h and the mean diameters of the inhibition zones were recorded.

Oral Toxicity Study

The oral sub-acute toxicity study was carried out according to OECD guideline 407 [8]. Forty albino rats of both sexes with body weights of 120–130 g were randomly divided into four groups of ten rats (5 males: 5 females) each, comprising three experimental groups (250, 500 and 750 mg/kg) and one control group. Extracts were orally administered daily for 28 days. Body weight was recorded weekly, and the animals were observed daily for clinical signs of toxicity. Animals were euthanized with ketamine (0.02 ml) on the 29th day. Blood samples were collected by cardiac puncture for hematological analysis, and selected organs were harvested, weighed and stored in 10% formalin for pathological examination.

Collection of Blood Samples and Organ Harvesting

Blood samples were collected from the rats via cardiac puncture into EDTA bottles for blood parameter analysis.
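As referenced in the screening method above, the absolute amount of extract delivered per disc is just volume × concentration; this short sketch performs that arithmetic for the three working solutions.

```python
# Each disc receives 20 µl of a working solution.
VOLUME_ML = 20 / 1000  # 20 µl expressed in ml

for conc_mg_per_ml in (50, 100, 200):
    mg_per_disc = conc_mg_per_ml * VOLUME_ML
    print(f"{conc_mg_per_ml} mg/ml x 20 µl -> {mg_per_disc:.1f} mg extract per disc")
# 50 mg/ml -> 1.0 mg, 100 mg/ml -> 2.0 mg, 200 mg/ml -> 4.0 mg per disc
```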
The uncoagulated blood was analyzed immediately for packed cell volume (PCV), red blood cell count (RBC), white blood cell count (WBC) and haemoglobin level (Hb) as described by [9]. Neutrophils, eosinophils, platelets, lymphocytes and monocytes were determined using an automated haematology Coulter analyser in accordance with standard procedures [10], along with MCV (mean corpuscular volume), MCH (mean corpuscular haemoglobin) and MCHC (mean corpuscular haemoglobin concentration). Sectioning and staining with hematoxylin and eosin for analysis followed the methods of [11] and [12] with slight modification, and sections were examined under a light microscope (objective 400×, scale 0.500 μm/pixel; objective 100×, scale 0.049 μm/pixel).

Statistical Analysis

Statistical analyses were carried out using the Statistical Package for the Social Sciences (SPSS). The data are presented as mean ± SEM (standard error of the mean). Statistical differences between groups were analyzed using ANOVA followed by Dunnett's multiple range test (GraphPad Prism software, USA); a minimal sketch of this pipeline is given at the end of this section. A value of P < 0.05 indicates a statistically significant difference between compared data.

Antibacterial Screening

The methanolic leaf extract of Alchornea cordifolia was tested against strains of E. coli and S. aureus. The extract appeared efficacious at all the concentrations tested, with E. coli and S. aureus being sensitive, recording inhibition zones of 35.00 ± 1.73 and 33.60 ± 2.73 mm at 50 mg/ml, respectively (Table 2).

Body Weight of Animals

There was an increase in the body weight of rats in all the treated and control groups throughout the experimental period, without any significant difference compared to the control groups (Figure 1).

Haematological Analyses of Animals Exposed to the Leaf Extract of Alchornea cordifolia

There were no significant effects (P > 0.05) on the PCV, Hb, RBC and WBC of either the male or female rats treated with the methanol leaf extract of Alchornea cordifolia (AC). A slight decrease was recorded in the PCV level of female rats administered AC 500 mg/kg (38.00 ± 1.00) and 750 mg/kg (38.00 ± 0.00) compared to the control (41.50 ± 2.50). There were no significant differences (P > 0.05) in the MCV, MCH and MCHC indices of either the male or female rats in the treated groups compared to the control. No significant differences (P > 0.05) were recorded in the WBC count, platelets, lymphocytes, neutrophils, eosinophils and monocytes of either the male or female rats in any treated group compared to the control groups (Table 3).

Histopathological Examination of Exposed Rats

The sections of the liver from the female and male control groups showed a few foci of thinning of hepatic cords with dilation of hepatic sinusoids (Figure 2a). The liver sections of rats given 250 and 500 mg/kg of the extract showed closely packed hepatic plates and, in the males, mild Kupffer cell hyperplasia (KCH) (Figure 2b). Rats given 750 mg/kg of the extract revealed closely packed hepatic plates, random foci of single-cell hepatocellular necrosis and moderate KCH (Figure 2c). There were no histomorphological changes in the spleen of female and male rats in the control groups (Figure 2d). Moderately large peri-arteriolar lymphoid sheaths with a few germinal centres, and a few aggregates of dark brown pigment (haemosiderosis), were observed in both male and female rats treated with 250, 500 and 750 mg/kg of Alchornea cordifolia (Figure 2e).
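As referenced above, the reported pipeline (one-way ANOVA followed by Dunnett's comparison of each dose group against the control) can be sketched as follows. The PCV values are invented for illustration, and scipy.stats.dunnett requires SciPy ≥ 1.11.

```python
import numpy as np
from scipy import stats

# Hypothetical PCV (%) readings for the control and the three dose groups.
control = np.array([41.0, 42.0, 41.5, 40.5, 42.5])
dose250 = np.array([40.0, 41.0, 39.5, 40.5, 41.5])
dose500 = np.array([38.5, 38.0, 37.5, 38.5, 39.0])
dose750 = np.array([38.0, 38.5, 37.0, 38.0, 38.5])

# Step 1: one-way ANOVA across all four groups.
f_stat, p_anova = stats.f_oneway(control, dose250, dose500, dose750)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Step 2: Dunnett's test, comparing each treated group against the control.
result = stats.dunnett(dose250, dose500, dose750, control=control)
for dose, p in zip((250, 500, 750), result.pvalue):
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{dose} mg/kg vs control: p = {p:.4f} ({verdict})")
```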
Phytochemical Screening and Antibacterial Activities

The leaf extract of Alchornea cordifolia at the tested doses showed antibacterial efficacy against all the bacteria tested. E. coli showed the largest zones of inhibition at the lowest concentration of 50 mg/ml. The sensitivity of E. coli to the methanol leaf extract of Alchornea cordifolia is of great interest, as the extract could be used to curb the incidence of food poisoning [13]. The antibacterial activity of Alchornea cordifolia appeared to be broad-spectrum, since it was effective against both gram-positive and gram-negative organisms; this could be ascribed to the bioactive constituents present in the leaf extract which exhibit antibacterial activity. The phytochemical components found in the leaves included terpenoids, steroids, glycosides, flavonoids, tannins, saponins and alkaloids, a profile similar to that reported in the work of [14,15]; these constituents could be responsible for its efficacy as an antimicrobial [15].

Oral Toxicity Studies

The observed increase in body weight after the 28-day treatment was normal, especially as the animals were fed ad libitum [16]. The animals were observed daily for clinical signs of toxicity; none of the animals showed toxicity signs and no mortality was recorded. On the 29th day, animals were humanely anaesthetized, and blood and some organs of toxicity were collected for blood parameter, biochemical and organ pathology analyses.

Haematology

The haematopoietic system is an important index of physiological and pathological status in man and animals [17] and a sensitive target for toxic compounds [18]. It has predictive value for toxicity in humans and animals, and therefore analysis of blood is relevant to risk evaluation [19]. The hematological analysis showed that Alchornea cordifolia has little or no effect on white blood cells, lymphocytes, neutrophils, red blood cells, haemoglobin concentration, packed cell volume and platelets. However, the slight decrease in the WBC of female rats administered 250 mg/kg may have been caused by viral infections that temporarily disrupt the work of the bone marrow. A dose-dependent decrease in neutrophils (NEU) was observed. Neutrophils interact with foreign compounds and microorganisms and destroy them by the process of phagocytosis. Other parameters were not affected in treated animals. Alchornea cordifolia has been shown to contain flavonoids [20]. Some of these flavonoids have been demonstrated to inhibit nephrotoxicity because of their strong antioxidant activity [21]. Alchornea cordifolia has also been reported to contain tannins, and tannins are known to offer protection against nephrotoxicity [22]. It is possible that these constituents offered protection to the treated animals in the present study.

Histopathological Analysis

Alchornea cordifolia could have evoked mild liver damage at the high dose of 750 mg/kg in the present work. This may be intriguing because of the several reported components in the extract with different pharmacological actions; the proportion of specific toxicants could also be increased at high doses. No histomorphological changes were noticed in the spleen. The widely recommended dosage of dried Alchornea cordifolia leaves in traditional medicine is a maximum of 50 g per litre of water, with 3 cups taken daily; a similar recommendation was made in the study of [23], namely 50 g of the plant product per litre of water, with 4 cups taken daily. Therefore, a daily intake of 200 mg/kg of the extract could be recommended as the maximum threshold in humans.
Though this appears to be lower than the doses tested in the present study, the high doses could have resulted in the signs of liver damage observed in the rats. Therefore, further studies are called for on the standard doses that can be recommended.

Conclusion

The methanolic extract of the leaves of Alchornea cordifolia contains bioactive ingredients and demonstrated high potency against the tested organisms. It could be exploited for use as an antibacterial drug for the treatment of infections caused by enteric bacteria. Although the methanolic leaf extract of Alchornea cordifolia was non-toxic with respect to hematological indices, these findings cannot be directly extrapolated to man in view of possible species differences and differences in metabolic activation. Its usage should therefore be approached with caution, especially at high doses and over prolonged use of the plant.
New azodyrecins identified by a genome mining-directed reactivity-based screening

Only a few azoxy natural products have been identified despite their intriguing biological activities. Azodyrecins D–G, four new analogs of aliphatic azoxides, were identified from two Streptomyces species by a reactivity-based screening that targets azoxy bonds. A biological activity evaluation demonstrated that the double bond in the alkyl side chain is important for the cytotoxicity of azodyrecins. An in vitro assay elucidated the tailoring step of azodyrecin biosynthesis, which is mediated by the S-adenosylmethionine (SAM)-dependent methyltransferase Ady1. This study paves the way for the targeted isolation of aliphatic azoxy natural products through a genome-mining approach and further investigations of their biosynthetic mechanisms.

Table S1. Functional annotation of ady in Streptomyces sp. RM72. Table S2. Functional annotation of ady in Streptomyces sp. A1C6.

General remarks. ¹H and ¹³C NMR spectra were recorded on a JEOL ECA500 spectrometer (500 MHz for ¹H NMR), a JEOL ECX400P (400 MHz for ¹H NMR), a JEOL ECS400 (400 MHz for ¹H NMR), a JEOL ECZ400 (400 MHz for ¹H NMR) or a Bruker AVANCE Neo (500 MHz for ¹H NMR) spectrometer. Chemical shifts are denoted in δ (ppm) relative to residual solvent peaks as internal standards (CD₃OD: ¹H δ 3.31, ¹³C δ 49.0; DMSO-d₆: ¹H δ 2.50, ¹³C δ 39.5). Electrospray ionization mass spectrometry (ESI-MS) spectra were recorded on a Thermo Scientific Exactive mass spectrometer. Liquid chromatography-mass spectrometry (LC-MS) experiments were performed with a Shimadzu HPLC Prominence system coupled with a Shimadzu LCMS-2020 spectrometer or an amaZon SL-NPC system (Bruker Daltonics). All reagents were used as supplied unless otherwise stated. Escherichia coli DH5α was used as a host for general cloning. Oligonucleotides used for genetic manipulation were purchased from Fasmac Co.

N2H4-detecting reactivity-based screening. 50 µL of crude extracts of actinobacteria were mixed with an equal volume of assay solution containing 10 mM p-(dimethylamino)benzaldehyde and 1 M HCl, and incubated at room temperature for 10 min. The resultant mixture was diluted with an equal volume of methanol and centrifuged at 20,630 × g for 10 min, then the supernatant was analyzed on a Shimadzu HPLC system equipped with an SPD-M20A detector. 5 µL of the reaction mixture was loaded onto a COSMOSIL 5C18-MS-II 2.0 × 100 mm column (Nacalai Tesque). The sample was eluted with H2O/MeCN containing 0.1% formic acid in a linear gradient of 2–98% MeCN + 0.1% formic acid over 5 min at a flow rate of 0.4 mL/min. Column eluates were monitored by UV absorption at 485 nm.

Isolation of azodyrecins. Streptomyces sp. RM72 and Streptomyces sp. A1C6 were cultured on YMS++ solid medium (0.4% yeast extract, 1.0% malt extract, 0.4% soluble starch; after the pH was adjusted to 7.4 with KOH solution, 2.0% agar was added; 10 mL of 1 M MgCl2 and 8 mL of Ca(NO3)2 were added after autoclaving) and SFM solid medium (2.0% mannitol, 2.0% soya flour, 2.0% agar), respectively, at 30 °C for 7 days. The resultant media were extracted with methanol, then the residual agar pieces were removed by filtration. Solvent was removed from the filtrate, and the residue was partitioned between H2O and ethyl acetate. The organic layer was further partitioned with hexane, then subjected to an HP20 column.
The column was eluted with a gradient mixture of H2O/MeOH, and the fractions that generated N2H4 upon acid hydrolysis were combined and subjected to silica gel chromatography (40–50 μm silica gel 60N, Kanto Chemical Co.). The column was eluted with a gradient mixture of hexane/CHCl3, then the fractions that generated N2H4 upon hydrolysis were combined and the solvent was removed. The crude sample was separated by HPLC on a COSMOSIL 5C18-MS-II column.

Cytotoxic assay. In a manner analogous to the previous report [1], the cytotoxic activities of the compounds against human ovarian adenocarcinoma SKOV-3 cells, malignant pleural mesothelioma MESO-1 cells, and immortalized T lymphocyte Jurkat cells were examined. SKOV-3 cells were cultured in Dulbecco's Modified Eagle's Medium (DMEM) supplemented with 10% fetal bovine serum, penicillin (50 U/mL), and streptomycin (50 μg/mL). MESO-1 cells were cultured in RPMI1640 medium supplemented with 10% fetal bovine serum, penicillin (50 U/mL), and streptomycin (50 μg/mL). Jurkat cells were cultured in RPMI1640 medium supplemented with 10% fetal bovine serum, penicillin (50 U/mL), streptomycin (50 μg/mL), and GlutaMAX. All cell lines were seeded in a 384-well plate at a density of 1000 cells/well in 20 μL of media and incubated at 37 °C in a humidified incubator with 5% CO2. After 4 h, 2-fold serial dilution samples dissolved in DMSO were added to the cell cultures at a concentration of 0.5% (0.1 μL) and incubated for 72 h. Cell viabilities were measured using a CellTiter-Glo luminescent cell viability assay and an EnVision multilabel plate reader.

P388 murine leukemia cells were cultured in DMEM supplemented with 1% penicillin/streptomycin and 10% fetal bovine serum in a 5% CO2 cell incubator at 37 °C. The cells were placed in a 96-well cell culture plate at a density of 1 × 10⁴ cells/well, then 1 µL of test solution at various concentrations (samples were dissolved in DMSO) was added to the cell plates and incubated for 48 h. Doxorubicin hydrochloride was used as a positive control. Finally, 50 µL of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) solution (1 mg/mL dissolved in PBS buffer) was added to each well and the plates were incubated for 4 h. After the medium was removed, the precipitated dye was solubilized with DMSO and measured with a microplate reader at an absorbance of 570 nm.

Preparation of demethylazodyrecin E (11). Demethylazodyrecin E (11) was prepared by treating azodyrecin E (8) with 2 M NaOH aq. at room temperature for 20 min. The reaction mixture was acidified with 2 M acetic acid aq. to pH 7 and extracted with EtOAc (3 times), and the solvent was removed.

Protein-coding regions were predicted by Prodigal (2.6.3). Publicly available data were retrieved from the NCBI database using efetch (16.2) and NCBI Datasets (11.22.0). To assess the distribution of VlmA-like enzymes in publicly available databases, protein sequences were retrieved from the NCBI database. The plates were extracted with methanol overnight at room temperature, then debris was filtered off. The filtrate was evaporated, and the residue was dissolved in 5 mL methanol. 50 µL of the supernatant was subjected to the N2H4-based reactivity assay following the procedure described in the previous section.
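If one wanted to summarize the serial-dilution viability readings from the cytotoxic assays above as an IC50, a common approach is a four-parameter logistic fit; the sketch below uses invented concentrations and viabilities, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Two-fold serial dilution (µM) with invented % viability readings.
conc = np.array([50.0, 25.0, 12.5, 6.25, 3.125, 1.5625])
viability = np.array([8.0, 15.0, 34.0, 61.0, 82.0, 95.0])

p0 = [100.0, 0.0, 10.0, 1.0]  # initial guesses: top, bottom, IC50, Hill slope
popt, _ = curve_fit(four_pl, conc, viability, p0=p0, maxfev=10000)
print(f"fitted IC50 = {popt[2]:.2f} µM (Hill slope = {popt[3]:.2f})")
```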
Knowledge and attitude of Saudi mothers towards health of primary teeth

The aim of this study was to assess Saudi mothers' knowledge of and attitude towards primary teeth health and dental caries, and the impact of their level of education on that knowledge and attitude. Four hundred self-reported questionnaires were distributed to mothers of children aged 1 to 6 years. They contained questions on knowledge of and attitudes towards the health of primary teeth and on the effect of educational level on oral health knowledge and attitude. Data were processed and analysed by means of the Statistical Package for the Social Sciences (SPSS) using the Chi-square test. Significance was taken at P-value ≤ 0.05. Mothers had good knowledge about dietary practices and oral hygiene practices. More than half of them, however, did not know when to start cleaning a child's mouth, when the first visit to the dentist should take place, or that caries is transmissible. Half of the respondents did not know the contribution of frequent sweet consumption to dental caries. Our study showed a strong correlation between level of education and oral health knowledge (P-value = 0.00), whereas the effect of knowledge on mothers' attitude was insignificant (P-value ≤ 0.6). Mothers showed some degree of knowledge about certain aspects of primary teeth health and caries, while poor knowledge was shown in other aspects. We recommend broadening the prevention concept.

INTRODUCTION Caries prevalence among Saudi Arabian children and adolescents in Jazan Region, Kingdom of Saudi Arabia, is high (Al-Malik and Rehbini, 2006). Oral health knowledge is an essential pre-requisite for health-related behaviour (Ashley, 1996). Children under the age of 5 years spend most of their time with their mothers, so their oral hygiene and dietary habits are influenced by their caretakers and the caretakers' level of education (Jain et al., 2014). In addition to the level of education, behavioural, cultural and social factors influence caries risk (Acs et al., 1992). These include sleeping with a bottle and frequent consumption of sugar-containing snacks or drinks (Hallett and O'Rourke, 2006). Dental caries, with its consequences including pain and diminished quality of life, is a common health problem among children (Casamassimo et al., 2009). Since caries is a transmissible infectious disease, salivary contact is responsible for its transmission (Berkowitz, 2006). The organisms responsible for caries are mutans streptococci (MS) (Sakai et al., 2008). Children of mothers with high levels of mutans streptococci are at greater risk, and elimination of saliva-sharing activities (e.g. sharing utensils) reduces transmission of caries (Berkowitz, 2006). Although early childhood caries (ECC) is preventable, most parents often think it is not (Acs et al., 1992). Consequences of ECC include a higher risk of new carious lesions in both the primary and permanent dentitions (Al-Shalan et al., 1997). Severe early childhood caries (S-ECC) interferes with the quality of life of both the child and the family. It affects the child's school performance and social behaviour. Treatment of S-ECC is expensive, invasive and very stressful (Filstrup et al., 2003). Young children with high caries activity may develop caries even during tooth eruption, so it is essential to reach the preschool child and its caregivers as early as possible (Plutzer and Spencer, 2008). Oral hygiene measures should be implemented for infants no later than the time of eruption of the first primary tooth, and tooth brushing should be performed by parents twice daily (American Academy on
Pediatric Dentistry [AAPD], 2011). The first dental visit is important and should take place before completion of 12 months of age. The age at which a child visits the dentist for the first time reflects the quality of preventive dental care and the future of his oral health (Widmer, 2003). Many studies have shown a low awareness level in the population, as the commonest reason for seeking dental care is pain and dental caries (Meera et al., 2008). Basic knowledge of caries risk factors, the importance of the deciduous teeth and oral health maintenance is important to employ effective disease-preventive strategies (Finlayson et al., 2007). There is little information on the awareness and attitude of Saudi mothers towards the health of the primary teeth. The aims of this study were to assess Saudi mothers' knowledge of and attitude towards primary teeth health and dental caries, and the impact of level of education on primary teeth health and dental caries, in Jazan Region, Kingdom of Saudi Arabia.

MATERIALS AND METHODS A questionnaire-based cross-sectional study was conducted in the Jazan area of the Kingdom of Saudi Arabia during the period of June to August 2012. Trained interviewers (dental students) distributed 400 questionnaires to mothers of children aged 1 to 6 years from different cities and villages in the Jazan area (the participating students' residential areas). 91% (365) of the distributed questionnaires were collected. Some questionnaires (18%) had a few missing data. The questionnaire was reviewed by expert staff members for refinement and criticism and then approved by the ethical committee. A simple, short and direct questionnaire written in the Arabic language (the participants' mother tongue) was designed to provide an overall view of the subjects' socio-demographic characteristics, oral hygiene practices, dietary practices and degree of awareness of the importance of primary teeth. The questions were constructed with closed alternative answers in order to be simple and easily understood by the subjects regardless of their educational status. The mothers were asked to respond to most of the knowledge questions by agree, disagree or not sure. The questionnaire reflected the subjects' knowledge of and attitudes towards oral health and ECC. Oral health educational pamphlets were distributed to the respondents after collection of the questionnaires. We used Cronbach's alpha statistics to measure internal consistency for assessing reliability. The value of Cronbach's alpha was 0.79, which indicates acceptable reliability.

Ethical considerations The study proposal was submitted to the College of Dentistry, Jazan University, Research and Publication Office for ethical clearance, and written informed consent was obtained from the participants prior to study commencement. In this regard, it was stated to the participants that there is no direct benefit from their participation in the study; however, knowledge gained from the study may lead to the prevention and treatment of primary teeth disease (general population benefits), and no information about the participants, or provided by them during the research, will be disclosed to others without their written permission.

Construction of scales for analysis A total of 8 questions addressed oral health knowledge and 3 questions oral health attitude. Concerning responses to the oral health knowledge questions, a positive statement (agree) scores 1, whereas both "don't agree" and "don't know" score 0.
The sum of the 8 responses represents the oral health knowledge score for each respondent. For further analysis, the sum scores were sub-grouped into 3 groups: poor, adequate and good knowledge (0 to 3, 4 to 5 and 6 to 8, respectively). Concerning mothers' attitudes towards oral health, there were three questions (Table 3) with 3 different choices of answers. A positive statement scores 3, an average statement scores 2 and a negative statement scores 1. The sum of the three attitude questions served as the final oral health attitude score for each respondent. For further analyses, the sum scores were sub-grouped into 3 groups: poor, average and good attitude (<4, 4 to 6 and 7 to 9, respectively).

Statistical analysis All data were analysed using the Statistical Package for the Social Sciences (SPSS version 19) program. The Chi-square test was used to find out whether mothers' educational level affects their oral health knowledge and attitude. Significance was taken at P-value ≤ 0.05.

RESULTS 61.6% of respondents had a university level of education, 25.6% had a secondary school level, while only 12.8% had a primary school level of education or were illiterate. Mothers had good knowledge about diet, dietary practices and oral hygiene practices. Nevertheless, more than half of them had poor knowledge about when to start cleaning a child's mouth, the child's first visit to the dentist, and the transmissibility of caries. Around half of them did not know that the frequency of sweet consumption predisposes to dental caries regardless of the amount. There was a significant correlation between respondents' level of education and oral health knowledge (P-value = 0.00), whereas the impact of the level of education on the oral health attitude of the participants was insignificant (P-value ≤ 0.6).

The frequency and percentage of the participants' answers to the knowledge questions are shown in Table 1. The level of oral health knowledge of the participants is shown in Table 2. The frequency and percentage of mothers' answers to the dental health attitude questions are shown in Table 3. The level of dental health attitude of the participants is shown in Table 4. Oral health knowledge and attitude levels among the participants are shown in Figure 1. The impact of education on mothers' health knowledge is shown in Figure 2.

DISCUSSION Many studies suggest that mothers' education influences the dental health of their children. Shamta et al. (2009) found a strong dependence of mothers' level of knowledge on their educational level, which influenced the child's oral health. This was found to be true in the present study as well. The higher the educational attainment of mothers, the better the dental health practices. An overwhelming majority of mothers (96.2%) believed that sweets and soft drinks can lead to caries; although this reflects excellent knowledge of sweets as a risk factor in dental caries, only 52.3% of the respondents related this risk factor to frequent sweets intake rather than to the quantity taken. Rafi et al. (2012) reported the same finding. The majority of mothers (56.9%) had inadequate knowledge of the fact that sharing of utensils and kissing can transmit Streptococcus mutans, which causes caries. This finding is consistent with the finding of Sakai et al.
(2008), although the transmissibility of dental caries is relatively well established in the literature. Regarding night-time bottle feeding with sugar, 82.6% of our respondents agreed that it contributes to caries. Children who were put to sleep with a bottle had S-ECC compared to those not put to sleep with a bottle (Hallett and O'Rourke, 2006). In the present study, we inquired about knowledge of sweetened night-time bottle feeding but did not ask about the actual habit itself, especially in a country of high caries prevalence, and this is a limitation of this study. Knowledge alone is not the absolute basis of oral health practices, as other factors such as dietary traditions exist. Gussy et al. (2008) found that parents had good knowledge of diet-related risk factors, yet half the children were given a bottle at bedtime. 62.0% of the respondents of the present study agreed that the health of the pregnant mother affects her baby's primary teeth health; this finding reflects good knowledge among the subjects. 61.6% agreed that primary teeth caries affects general health and the child's permanent teeth, which is almost the same finding as that of Rafi et al. (2012). Dental caries is a preventable disease, and it can be stopped and even potentially reversed during its early stages (Kawashita et al., 2011). The majority of the subjects of the present study (76%) agreed that primary teeth caries is preventable. In the present study, the tooth brushing habits of mothers were assessed because they strongly affect the brushing habits of their children (Castilho et al., 2013). This study showed good oral hygiene knowledge and practices, which may result from the high level of education of the majority of respondents (61.6% had higher university education).

CONCLUSION AND RECOMMENDATIONS Mothers' level of education improves their awareness of oral health related issues. They were familiar with factors causing dental caries, while the transmissibility of caries and the effect of frequent intake of fermentable carbohydrates were not evident. Awareness of the importance of the first dental visit is very low. The majority of the mothers had good oral hygiene practices for themselves, but most of them were unaware of the proper age for starting a newborn's mouth cleaning. Broadening prevention concepts, with special focus on the transmissibility of caries, frequent intake of sweets, the commencement of infants' mouth cleaning and the first visit to the dentist, is recommended.

Figure 1. Oral health knowledge and attitude of the participants. Figure 2. Impact of education on mothers' health knowledge. Table 1. Oral health knowledge questions, number and percentage distribution of the study participants. Table 2. Level of knowledge and frequency distributions of the participants. Table 3. Number and percentage distribution of mothers according to attitude items. Table 4. Frequency and percentage distribution of participants according to attitude items.
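For illustration, the two statistics used above (Cronbach's alpha over the 8 knowledge items, and a chi-square test of education level against knowledge group) can be computed as in the minimal sketch below. This is not the authors' SPSS workflow, and all data in it are hypothetical.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of 0/1 item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the sum score
    return k / (k - 1) * (1.0 - item_var / total_var)

rng = np.random.default_rng(0)
answers = rng.integers(0, 2, size=(365, 8))        # 365 respondents, 8 knowledge items
print(f"alpha = {cronbach_alpha(answers):.2f}")

# Contingency table: rows = education (primary/illiterate, secondary, university),
# columns = knowledge group (poor, adequate, good); counts are made up.
table = np.array([[25, 15, 7],
                  [30, 40, 23],
                  [40, 90, 95]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}, dof = {dof}")
```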
An Efficient Approach Based on the Gradient Definition for Solving Conditional Nonlinear Optimal Perturbation

Conditional nonlinear optimal perturbation (CNOP) has been widely applied to study the predictability of weather and climate. The classical method of solving CNOP is the adjoint method, in which the gradient is obtained using the adjoint model. But some numerical models have no adjoint models implemented, and it is not realistic to develop them from scratch because of the huge amount of work. The gradient can be obtained by its definition in mathematics; however, with the sharp growth of dimensions, its calculation efficiency decreases dramatically. Therefore, the gradient is rarely obtained by the definition when solving CNOP. In this paper, an efficient approach based on the gradient definition is proposed to solve CNOP over the whole solution space, and it is parallelized. Our approach is applied to solve CNOP in the Zebiak-Cane (ZC) model and, compared with the adjoint method, which is the benchmark, our approach obtains similar results in terms of CNOP value and pattern and higher efficiency in terms of time consumption: only 12.83 s, while the adjoint method spends 15.04 s, and it consumes less time if more CPU cores are provided. All the experimental results show that it is feasible to solve CNOP with our approach based on the gradient definition over the whole solution space.

1. Introduction. In the study of weather and climate predictability, it is crucial to determine the fastest growing perturbation. To solve for the fastest growing perturbation in a nonlinear system, Mu and Duan [1] proposed the concept of conditional nonlinear optimal perturbation (CNOP), which represents the nonlinear initial perturbation that satisfies a certain constraint condition and results in the largest nonlinear evolution at the prediction time. Later, Mu et al. [2] extended the CNOP method to study optimal parameter perturbations. The CNOP method has been widely applied to study the predictability of many phenomena and in many research fields related to initial errors and model parameter errors, such as the El Niño-Southern Oscillation (ENSO) event [3-5], the Kuroshio large meander [6] and the grassland ecosystem [7]; the spring predictability barrier [8-11]; targeted observation of the atmosphere and ocean [12-15]; and ensemble forecasting [16-18]. It is obvious that CNOP plays an important role in the study of weather and climate predictability.
Solving CNOP is essentially an optimization problem with a nonlinear objective function. In the current literature, the approaches to solving CNOP can be classified into two types depending on whether the gradient is used. The gradient-based approaches solve CNOP by searching for the optimum value along the direction of gradient descent, such as the spectral projected gradient (SPG2) [19], the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) [20], and sequential quadratic programming (SQP) methods. The gradient-free approaches, in contrast, obtain the optimum value by searching randomly around the whole or a partial solution space, such as intelligent algorithms (IAs). The random search method was initially applied to ideal numerical models with 3 to 20 dimensions [21, 22]. To apply this method to solve CNOP efficiently for practical models, some researchers first reduced the high-dimensional solution space to a relatively low-dimensional one and then employed IAs to solve CNOP in the reduced low-dimensional solution space [23-32]. They obtained good results when taking the ZC model with 1080 dimensions as a case, but information loss inevitably exists because of the dimension reduction. Therefore, the gradient descent method is widely used to calculate CNOP.

Generally, the gradient is obtained using the adjoint model corresponding to the numerical model, which is referred to as the adjoint method. However, there are no corresponding adjoint models implemented for some numerical models [33-35], and it is not realistic to develop a corresponding adjoint model from scratch due to the huge amount of work. Wang and Tan [36, 37] attempted to obtain approximate gradient information based on an ensemble technique, in which they employed the sample ensembles of initial perturbations and corresponding prediction increments to denote the approximate tangent linear matrix in the gradient formula of the objective function. A localization technique was introduced to ameliorate the spurious correlation between the two ensembles, in which the localization radius was chosen from artificial experience. This method calculates the gradient information only once during the whole optimization; therefore, it can obtain an approximate CNOP more easily and efficiently than the adjoint-based method, but it depends on artificial experience. In fact, the gradient information can also be obtained using the gradient definition, but the calculation efficiency decreases dramatically with the sharp growth of the dimension. In general, the dimensions of numerical models for climate and weather are relatively high, with the result that the gradient definition method has rarely been applied in solving CNOP. At present, only Chen et al.
[38] calculated the gradient in a way similar to the gradient definition, but this gradient was calculated in the feature space generated by dimension reduction. Firstly, they reduced the dimensions to the feature space using singular value decomposition (SVD) and represented the initial perturbations as a linear combination of base vectors. Consequently, the objective function was transformed into a function of the linear combination coefficients. The gradient was approximated using the differences, the linear combination coefficients, and the prediction increments of the initial perturbations. In other words, the gradient calculated was formally the same as the definition of the gradient, but the small quantity in the gradient definition equation was the increment of the coefficient, not the increment of the initial perturbations. This method can obtain an approximate CNOP, and its time efficiency depends on the number of base vectors chosen.

In this paper, an efficient approach based on the gradient definition over the whole solution space is proposed to solve CNOP, in which some parallel strategies are adopted to improve the calculation efficiency of the gradient. In our approach, the gradient calculated is the gradient of the objective function with respect to the initial perturbation, and we solve the CNOP over the whole solution space, so the CNOP is more accurate. In addition, certain parallel strategies make our approach more efficient than the adjoint method. Taking the ZC model as an example, which is a medium-complexity model for forecasting ENSO events, our approach is applied to solve the CNOP of an ENSO event, and the experimental results show that our approach is feasible from the CNOP value, CNOP pattern and time efficiency aspects.

The remainder of this paper is organized as follows. A detailed introduction of the ZC model and the concept of CNOP are given in Section 2. Our efficient approach based on the gradient definition over the whole solution space, accompanied by parallel strategies, is described in Section 3. In Section 4, we employ the ZC model as a case study and apply our approach to study the optimal precursor of an ENSO event; there we also show the results and compare the CNOP value, CNOP pattern, and time consumption with those from the adjoint method. Finally, we summarize our conclusions and future works in Section 5.

2. Zebiak-Cane Model and CNOP

2.1. Zebiak-Cane (ZC) Model. The ZC model is adopted as the case to verify the feasibility of our approach in solving CNOP. The ZC model was developed to simulate and study the El Niño-Southern Oscillation (ENSO) phenomenon; it is a medium-complexity model. The model calculates perturbations about a climatological mean state that is specified from observations [39]. It can also reproduce warm events that possess a 3-4 year period of oscillation without anomalous external forcing, which is consistent with the real ENSO cycle. The ZC model is a coupled atmosphere-ocean model, which has three components: the atmosphere, the ocean, and the coupling component.

Atmosphere Component. The dynamics of the atmosphere component follow the Gill model, which is described by the linear shallow-water equations on an equatorial beta plane. The circulation in the atmosphere component is forced by a heating anomaly that depends on the sea surface temperature (SST) anomalies and moisture convergence. The atmospheric grid used in the atmosphere component lies in the region 101.25°E-73.125°W, 29°S-29°N.

Ocean Component.
The dynamics of the ocean component begin with the linear reduced-gravity model, which can successfully simulate the changes of thermocline depth anomalies and sea surface pressure during El Niño events. In the ocean component, the surface intensification of wind-driven currents in the real ocean is simulated by a shallow frictional layer. The component can simulate the mean features of the SST anomaly (SSTA) forced by ENSO composite wind anomalies. The oceanic grid used in the ocean component lies in the region 124°E-80°W, 28.75°S-28.75°N.

Coupling Component. In the coupling component, the atmosphere component retains a steady state and is run in advance with a certain monthly mean SSTA to simulate wind anomalies. The ocean component is driven by the surface wind stress anomalies, which are produced by combining the background mean winds and the surface wind anomalies generated by the atmosphere component. After coupling the ocean and atmosphere components, the region of the coupled model is as shown in Figure 1. The rectangle with the black solid line represents the region of the atmosphere component. The rectangle with the black dashed line represents the region of the ocean component. The rectangle with the red dashed line represents the integration region of the SSTA in the coupled model.

2.2. CNOP. The CNOP represents the initial perturbation that is subject to a given physical constraint and results in the largest nonlinear evolution at the prediction time in nonlinear weather and climate models. Suppose we have the following model:

∂X/∂t = F(X), X|_{t=0} = X_0, (1)

where X is an n-dimensional state vector of the model, X_0 is the n-dimensional initial state vector at the initial time (t = 0), and F is a nonlinear partial differential operator. The discrete form of (1) can be described as follows:

X_τ = M_τ(X_0), (2)

where M is a nonlinear propagation operator, t_0 and τ are, respectively, the initial optimization time and the terminal time, X_τ is the value of X at time τ, and M_τ(X_0) represents the development of X_0 from time t_0 to τ. The CNOP, denoted δX_0*, is the solution of the following optimization problem:

J(δX_0*) = max_{‖δX_0‖ ≤ δ} ‖M_τ(X_0 + δX_0) − M_τ(X_0)‖², (3)

where δX_0 is the n-dimensional initial perturbation of X_0 and δ is the constraint radius of the initial perturbation; ‖M_τ(X_0 + δX_0) − M_τ(X_0)‖² is the objective function J(δX_0).

Obviously, solving (3) is solving an optimization problem, so the CNOP can be obtained by a nonlinear optimization algorithm. Generally, optimization algorithms such as L-BFGS, SQP, and SPG2 are designed to find the minimum value of an objective function. In this paper, the SPG2 algorithm is employed to solve CNOP. The SPG2 method is often applied to solve problems of the following form:

min f(x) subject to x ∈ Ω, (4)

where Ω is a closed convex set in ℝⁿ. To use the SPG2 algorithm to solve CNOP directly, we let f(δX_0) = −J(δX_0); then the optimization problem in (3) is equivalent to the following optimization problem:

min_{‖δX_0‖ ≤ δ} f(δX_0). (5)

Now the optimization problem has been converted into finding the minimum value of the objective function, which has the same form as the problem in (4); therefore, we can use the SPG2 method directly.
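The following is a minimal sketch, not the authors' code, of the constrained optimization in (5): a projected gradient iteration that minimizes f = −J over the ball ‖δX_0‖ ≤ δ. The toy propagator M, the fixed step size, and all names are illustrative assumptions; the actual SPG2 method additionally uses spectral step lengths and a nonmonotone line search, which are omitted here.

```python
import numpy as np

def M(x):
    # Toy nonlinear "propagator" standing in for the ZC model integration (an assumption).
    return np.tanh(2.0 * x) + 0.1 * x**2

def f(dx, x0):
    # f = -J: minimizing f maximizes the nonlinear evolution ||M(x0+dx) - M(x0)||^2.
    return -np.sum((M(x0 + dx) - M(x0)) ** 2)

def grad_f(dx, x0):
    # Analytic gradient of f for the toy M (chain rule); the paper instead uses Eq. (9).
    y = M(x0 + dx) - M(x0)
    dM = 2.0 * (1.0 - np.tanh(2.0 * (x0 + dx)) ** 2) + 0.2 * (x0 + dx)
    return -2.0 * y * dM

def project(dx, delta):
    # Projection onto the closed convex constraint set Omega = {dx : ||dx|| <= delta}.
    norm = np.linalg.norm(dx)
    return dx if norm <= delta else dx * (delta / norm)

x0, delta = np.zeros(4), 0.5
dx = project(np.full(4, 0.1), delta)
for _ in range(100):                       # fixed-step projected gradient descent on f
    dx = project(dx - 0.05 * grad_f(dx, x0), delta)
print(dx, -f(dx, x0))                      # CNOP estimate and its objective value J
```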
3. The Efficient Approach Based on the Gradient Definition

3.1. The Approach Based on the Gradient Definition. The primary idea of our approach is first to calculate the gradient of the objective function with respect to the initial perturbation δX_0 using the gradient definition in mathematics, and then to apply the SPG2 method to solve CNOP based on this gradient information. In our case, the optimization problem is described in (5); obviously, it can be written as follows:

min_{‖δX_0‖ ≤ δ} f(δX_0). (6)

In mathematics, the gradient is a generalization of the usual concept of the derivative of a function of one variable to a function of several variables. The gradient of the function in (6) is

∇f(δX_0), (7)

where ∇f(δX_0) denotes the vector of first-order partial derivatives of the function f(δX_0), namely the gradient of f(δX_0). Since δX_0 is an n-dimensional perturbation, let δX_0 = (x_1, x_2, x_3, ..., x_i, ..., x_n); according to the definition of the gradient in mathematics, the gradient of the function f(δX_0) in a rectangular coordinate system is

grad f = Σ_{i=1}^{n} (∂f/∂x_i) e_i, (8)

where grad f represents ∇f(δX_0), n is a positive nonzero integer, i = 1, 2, 3, ..., n, and the e_i are the orthogonal unit vectors pointing in the coordinate directions. Therefore, for a certain point (x_1, ..., x_i, ..., x_n), the partial derivative of f in the direction x_i is as follows:

∂f/∂x_i ≈ [f(x_1, ..., x_i + ∇, ..., x_n) − f(x_1, ..., x_i, ..., x_n)] / ∇, (9)

where ∇ is a real number which should approach 0 but never equal 0. We provide a detailed description of the setting of the value of ∇ in Section 4. Using (9), once ∇ is determined, it becomes much easier to calculate the derivative of f in a certain direction at a certain point. In (8) and (9), if the variable in a model is n-dimensional, the gradient vector is also n-dimensional.

Algorithm 1 shows the pseudocode of our approach. There are two main parts in the approach. First, we initialize the related parameters used in our approach; the meaning of the parameters ∇, δ, f, and δX_0 has been shown in the above-mentioned equations (3) and (9), and the value of ∇ is determined in Section 4.1. Then we use the SPG2 algorithm to calculate CNOP; the maximum number of iteration steps is set to 20 as the stopping criterion. Here gradient(), values() and line_search() represent related subroutines: the gradient(x_k) subroutine calculates the gradient by implementing formulas (8) and (9), the values(x_k) subroutine calculates the value of the objective function at the current position, and the line_search(x_k) subroutine searches for the next position along the direction of gradient descent. Eventually, the program outputs the CNOP as the result.

Initialization: (1) Set the parameters ∇, δ, f, δX_0.
SPG2: (2) Calculate the gradient of δX_0 with respect to the objective function using subroutine gradient(δX_0). (3) Calculate the value of the objective function at δX_0 using subroutine values(δX_0). (4) While the stopping criterion is not satisfied do: (5) calculate the new position x_k using subroutine line_search(x_k); (6) calculate the gradient at x_k using subroutine gradient(x_k). (7) End while.
Output: CNOP (the x_k for which values(x_k) is the minimum over all x_k).
Algorithm 1: The pseudocode of our approach.
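As a concrete illustration of (8)-(9), here is a minimal sketch with assumed names; `objective` stands in for the composition of a model integration and the norm in (3). Note that one gradient requires n + 1 objective evaluations, i.e. n + 1 model integrations, which is what makes the serial version expensive and motivates the parallelization in Section 3.2.

```python
import numpy as np

def gradient_by_definition(objective, dx0, step):
    """Approximate grad f componentwise via Eq. (9): [f(dx0 + step*e_i) - f(dx0)] / step."""
    n = dx0.size
    grad = np.empty(n)
    f0 = objective(dx0)                 # one base evaluation shared by all components
    for i in range(n):
        perturbed = dx0.copy()
        perturbed[i] += step            # displace along the i-th coordinate direction only
        grad[i] = (objective(perturbed) - f0) / step
    return grad

# Tiny self-check on f(x) = ||x||^2, whose gradient is 2x:
print(gradient_by_definition(lambda v: np.sum(v**2), np.ones(3), 1e-6))  # ~[2, 2, 2]
```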
3.2. Parallel Strategies. As shown in Figure 1, the outer rectangle with the black solid line represents the region of the atmospheric component, with a resolution of 5.625° in longitude and 2° in latitude. The middle rectangle with the red dashed line denotes the region of the ocean component, with resolutions of 2° and 0.5°, which forms a 30 × 34 grid. After removing the unused marginal area, the inner rectangle with the blue dashed line is the integration region of the SSTA, with resolutions of 5.625° and 2°, which forms a 20 × 27 grid. When studying the ENSO phenomenon, the two physical variables of the ZC model involved in the objective function are the SSTA and the thermocline height anomalies (THA). Thus, the dimension of the ZC model is 1080 (20 × 27 × 2) after combining the two variables into one vector.

Taking the ZC model with 1080 dimensions as an example, we implemented our approach to solve CNOP, as described above, serially on the TH-1A supercomputer system at the National Supercomputer Center in Tianjin. The available resources are as follows: 20 available nodes, each with two Intel Xeon X5670 processors at 2.93 GHz and 24 GB of memory, 240 CPU cores in total. We measured the time consumption of our serial approach with Intel VTune Amplifier, as shown in Figure 2; it costs 1482.069 s for a complete run, in which the subroutine gradient() occupies 99.9% of the entire time. We can conclude that the time consumption of the subroutine gradient() will increase dramatically with increasing dimensions. Therefore, improving the time efficiency of the subroutine gradient() is crucial and necessary. In this section, certain parallel strategies are designed.

Considering the dependence between the current iteration and the next iteration, and the independence between the components of a given gradient vector, we can parallelize our approach over the calculation of one gradient vector within one iteration. To ensure the transportability and usability of the parallel strategy, we adopt MPI to realize the parallelization on the cluster.

To calculate in parallel, the gradient vector is divided into groups which are computed concurrently. The gradient vector is decomposed into groups as follows: suppose we employ p processes to calculate one gradient vector concurrently; then we divide the gradient vector into p groups. Let r be the remainder of dividing 1080 by p. When r = 0, the size of the group for every process is s = 1080/p; otherwise, the first r processes take groups of size s_1 = ⌊1080/p⌋ + 1 and the remaining processes take groups of size s_2 = ⌊1080/p⌋. Here p represents the total number of processes used to compute the gradient, r the remainder from dividing 1080 by p, s_1 and s_2 the sizes of one group in the two cases, and i ∈ {1, 2, 3, ..., p} the process number. Process i calculates one group of the gradient vector, namely the contiguous block of components assigned to it.

The different parts of one gradient vector calculated by the different processes are collected into one whole gradient vector via the communication mechanism between processes, which is implemented with MPI, specifically MPICH and the Intel compiler. The communication mechanism adopted is the master-slave mode, with process 0 as the master and the others as the slaves. Supposing we use p processes, when calculating one gradient vector, processes 1 to p − 1 send their parts of the gradient vector to process 0, and process 0 receives the messages from the slaves and then combines all messages together into a complete gradient vector.
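A minimal mpi4py sketch of this decomposition follows; this is an assumption on our part, since the paper states only that MPI/MPICH is used, not the language or API. Each rank computes a contiguous block of the 1080 components and rank 0, the master, assembles them; `component(i)` is a placeholder for one finite difference from Eq. (9), not a real ZC integration.

```python
from mpi4py import MPI
import numpy as np

N = 1080                                  # SSTA + THA dimensions of the ZC model

def block(rank, nprocs, n=N):
    """Indices [start, stop) for this rank; the remainder r goes to the first r ranks."""
    base, r = divmod(n, nprocs)
    start = rank * base + min(rank, r)
    return start, start + base + (rank < r)

def component(i):
    """Stand-in for one finite difference [f(dx + step*e_i) - f(dx)] / step."""
    return float(i)                       # placeholder value, NOT a model integration

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()
start, stop = block(rank, nprocs)
part = np.array([component(i) for i in range(start, stop)])

if rank == 0:                             # master: collect the slave blocks in order
    grad = np.empty(N)
    grad[start:stop] = part
    for src in range(1, nprocs):
        s, e = block(src, nprocs)
        grad[s:e] = comm.recv(source=src)
else:                                     # slaves: ship their block to the master
    comm.send(part, dest=0)
```

Run with, e.g., `mpiexec -n 12 python gradient_mpi.py`, matching the 12-core-per-node units used in Section 4.3.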
4. Experiments and Results Analysis

To demonstrate the effectiveness, validity, and time efficiency of our approach in solving CNOP, we employ the ZC model as a case to study the optimal precursor of an ENSO event. Firstly, we calculate the gradient of the objective function using our approach and then use the SPG2 method to solve CNOP. The final solution of CNOP is the pattern of initial SSTA and THA that will cause the largest evolution at the prediction time in the tropical Pacific, named SSTA-CNOP and THA-CNOP, which constitute the so-called optimal precursor. We optimize the ZC model over a 9-month optimization period for different initial months (from January to December). For every initial month, there are a corresponding SSTA-CNOP and THA-CNOP. We compare the results with those obtained from the adjoint method, which is taken as the benchmark. Compared with the adjoint method, our approach calculates the gradient using the gradient definition.

When calculating the gradient, the value of ∇ is critical. Therefore, in Section 4.1, we conduct many experiments to decide the value of ∇. In the following Sections 4.2 and 4.3, compared with the adjoint method, we present the CNOP calculated by our approach from the CNOP value and CNOP pattern aspects to verify its effectiveness and validity, and then demonstrate the time consumption and speedup up to 240 CPU cores to verify the time efficiency.

4.1. Determination of ∇. In Section 3, we showed the mathematical formula (see (9)) used to calculate the gradient of the objective function. In that equation, ∇ is a real constant which should approach 0 but never equal 0. In our case, the value of ∇ cannot be too small, because a too-small ∇ leads to no evolution of the numerical model; that is, the limit value in (9) always equals or approaches 0, and thus we cannot obtain the correct gradient direction. What value, then, should ∇ take in our case? We design the following two schemes to determine the value of ∇: (1) ∇ is a constant value for every x_i in (9); (2) ∇ changes with the value of x_i in (9). For the CNOPs calculated by the different methods, the larger the value of the objective function, the better the CNOP. So, in this paper, we take the value of the objective function J(δX_0*) to measure the magnitude of the CNOP, and we take it as the evaluation standard for the CNOP value.

For scheme (1), we conducted many experiments to solve CNOP, but it was found that different initial months correspond to different appropriate values of ∇ for the CNOP, and it requires many experiments to determine the most appropriate ∇ for each initial month. Therefore, the conclusion is that scheme (1) is not feasible in our case.

For scheme (2), we let ∇ = x_i × 10^−k (k = 1, 2, 3). When k ≥ 3, ∇ is too small, and the CNOP value obtained is very small. When k < 0, ∇ is too large to calculate the limit value. For k = 1 and k = 2, we compare the maximum value of the objective function obtained by our approach with the results of the adjoint method (shown in Figure 3). In Figure 3, the blue solid line represents the CNOP values calculated by the adjoint method for the different initial months, which is the baseline, while the red one (∇ = x_i × 10^−1) and the green one (∇ = x_i × 10^−2) are the CNOP values obtained using our approach.

Figure 4: CNOP values calculated from our approach and the adjoint method for different initial months. The blue line represents the values from the adjoint method; the red one shows the values from our approach; the green one is the difference between them.
In Figure 3, the CNOP values obtained by our approach show a tendency similar to the results of the adjoint method, and the CNOP value for every initial month is less than that of the adjoint method, but the largest difference between them is 5.3 (Table 1), which is acceptable. We can therefore draw the conclusion that ∇ = x_i × 10^−k (k = 1, 2) is appropriate for our approach.

4.2. Effectiveness and Validity. There are a corresponding CNOP value and CNOP pattern for every initial optimization month, so there are 12 CNOP values and 12 CNOP patterns for the 12 different initial months. In this section, we compare our approach with the adjoint method from the CNOP value and CNOP pattern aspects to verify its effectiveness and validity.

4.2.1. CNOP Value. In this section, we set ∇ = x_i × 10^−1 when the initial month is 1, 2, 5, 9, or 11; for the other months, we set ∇ = x_i × 10^−2 to get higher CNOP values. Figure 4 depicts the CNOP values from our approach and the adjoint method for the different initial months; the x-axis represents the initial optimization time (from January to December) and the y-axis represents the CNOP values. We can see that the variation trends of the CNOP values for the two methods are almost the same. In detail, the red and blue lines show upward trends from January to March; from March to September, they go down; and from September to December, they go up again.

4.2.2. CNOP Pattern. In this section, the spatial patterns of the optimal precursor (SSTA-CNOP and THA-CNOP) of the ENSO phenomenon and the corresponding SSTA evolutions are compared to assess the validity of our approach. It is unnecessary to show all 12 CNOP patterns; we choose the patterns of the two initial optimization months which have the biggest (March) and smallest (September) CNOP values, respectively.

Figure 5 shows the patterns of SSTA-CNOP, THA-CNOP, and the corresponding SSTA evolutions after 9 months obtained from our approach and the adjoint method when the initial month is March, while Figure 6 shows the patterns for September. Panels (a, b) are the patterns of SSTA-CNOP, (c, d) are the patterns of THA-CNOP, and (e, f) are the patterns of the SSTA evolution; (a, c, e) are the patterns from our approach and (b, d, f) are the patterns from the adjoint method. The two optimal precursors obtained by the two methods can both evolve into an El Niño event. In a word, the CNOP pattern from our approach is quite similar to that from the adjoint method but is a little weaker. This result is in accordance with the results in Section 4.2.1, which show that the CNOP values from our approach are a bit smaller than those from the adjoint method. In conclusion, our approach can obtain a valid optimal precursor for the ENSO phenomenon.

4.3. Time Efficiency. In this section, we demonstrate the time consumption and speedup up to 240 CPU cores to verify the efficiency of our approach. In this work, the average value of running the same program ten times is taken as the final time consumption, and the speedup is the ratio of the serial execution time over the parallel execution time. We employ 12 CPU cores as a unit because each node in the cluster has 12 CPU cores; there are 20 nodes, 240 CPU cores in total. In Figure 7, we show the time consumption corresponding to the adjoint method, our serial approach, and our parallel approach with 240 CPU cores. With 240 CPU cores, the time consumed is 12.83 s, which is less than the time spent by the adjoint method, and the speedup reaches 85.18.
To show the effectiveness of the parallel strategies designed in Section 3.2, we show the time consumption and speedup with the number of CPU cores increasing from 12 to 12 × 20 in Figure 8. The blue line stands for the time consumption and the red line for the speedup. With the number of CPU cores increasing from 12 to 12 × 20, the time consumption falls and the speedup grows almost linearly. From the decreasing trend of the time consumption, we can expect less time consumption if more CPU cores are provided, and the speedup likewise shows a trend of continued increase. Of course, there exist bottlenecks for both the time consumption and the speedup as the number of CPU cores increases; we could not locate the bottleneck owing to the lack of computing resources.

Correctness and Physical Meaning of the CNOP. To demonstrate the correctness of the CNOP calculated by the proposed approach, we calculate the change rate of the energy norm increment (E_t − E_0)/E_0 from the CNOP over the integrating months according to [36], that is, the net growth rate of the energy (Figure 9). The energy norm is defined as ‖T‖, where T is the sea surface temperature and ‖T‖ is the 2-norm of T. Figure 9 shows that the energy from the CNOP increases nonlinearly over the integrating months, and the energy increases around 35 times when integrating over 12 months. Therefore, the calculated CNOP shows fast nonlinear growth, which illustrates the physical definition of the CNOP. Furthermore, the CNOP patterns obtained from the proposed approach (Figures 5(a) and 5(c)) agree with those from the adjoint method, as discussed in Section 4.2.2.

In physics, the CNOP can represent the optimal precursor that will induce the occurrence of certain physical events. As we know, when an El Niño event occurs, the sea surface temperature presents anomalous warming in the eastern and central tropical Pacific Ocean area. The spatial patterns of the SSTA evolutions (Figures 5(e) and 5(f)) show exactly this anomalous warming, indicating that the calculated CNOP acts as an optimal precursor of an El Niño event.

5. Conclusions and Future Works. In this paper, we proposed an efficient approach based on the gradient definition to solve CNOP over the whole solution space, and some parallel strategies were designed to improve the gradient calculation efficiency. This is the first time that CNOP has been solved using the gradient definition over the whole solution space. To verify the effectiveness and validity of our approach, we applied it to solve CNOP to study the optimal precursor of an ENSO event in the ZC model. The experimental results indicate that our approach can obtain good results, that the time consumed is less than with the adjoint method, and that the time consumption still shows a trend of continued decrease when more CPU cores are provided.
The crux of the proposed approach is the calculation of the gradient of the objective function using the gradient definition. The Zebiak-Cane model is of medium complexity (10^3-dimensional) and its objective function is differentiable. When the proposed approach is applied to more complex models, whether the objective function is differentiable and the resulting time efficiency must be taken into account. For non-differentiable models, approximate gradient information at the non-differentiable points can be obtained by the proposed approach. Inevitably, the solving efficiency will go down dramatically with rapidly increasing dimensions. However, several kinds of methods can be adopted to improve the time efficiency, such as parallelization of the numerical models based on CPU/GPU, or reducing the dimension of the original solution space using appropriate dimension reduction methods. At present, we are concentrating on applying the proposed approach to the MM5 and WRF models, which are more than 10^5-dimensional; related papers will be published soon.

Figure 1: The region of the ZC model.
Figure 2: Time distribution in the computation. The colored bar represents the distribution of the wait time according to the utilization levels (Idle, Poor, Ok, Ideal, and Over) defined by the VTune Amplifier XE. The longer the bar, the higher the value.
Figure 3: CNOP values calculated by our approach and the adjoint method for different initial months. The blue line represents the values by the adjoint method; the red one (∇ = x_i × 10^−1) and the green one (∇ = x_i × 10^−2) are the values by our approach.

From Figures 5(a), 5(b), 6(a), and 6(b), the patterns of the SSTA-CNOPs show almost the same spatial structure. The SST of the western Pacific is abnormally high around the equatorial Pacific, while the eastern Pacific is the opposite; this is just the precursor of an El Niño event. The difference is that the red and blue areas of (b) are larger and darker. From Figures 5(c), 5(d), 6(c), and 6(d), the patterns of the THA-CNOPs also show almost the same features, with the color deepening along the entire equatorial Pacific. The difference is that the red area of (d) is larger. From Figures 5(e), 5(f), 6(e), and 6(f), the evolutions of the SSTA still show quite similar spatial features; the SSTA evolution in (f) is positive while that in (e) is negative, and the red area of (f) is larger.

Figure 5: Patterns of SSTA-CNOP (a, b), THA-CNOP (c, d), and corresponding SSTA evolutions (e, f) obtained from our approach (a, c, e) and the adjoint method (b, d, f) when the initial month is March.
Figure 6: As in Figure 5, but for the initial month September.
Figure 8: Time consumption (blue line) and speedup (red line) with increasing CPU cores.
Figure 9: The net growth rate of the energy, (E_t − E_0)/E_0, from the CNOP over 12 months.
Table 1: The difference between the CNOP values calculated by the adjoint method and our approach.
Core-Jet Blending Effects in Active Galactic Nuclei under the Korean VLBI Network View at 43 GHz

A long-standing problem in the study of Active Galactic Nuclei (AGNs) is that the observed VLBI core is in fact a blending of the actual AGN core (classically defined by the $\tau=1$ surface) and the upstream regions of the jet or optically thin emitting region flows. This blending may introduce biases in the observables of the core, such as its flux density, size or brightness temperature, which may lead to misleading interpretations of the derived quantities and physics. We study the effects of such blending under the view of the Korean VLBI Network (KVN) for a sample of AGNs at 43 GHz by comparing their observed properties with observations with the Very Long Baseline Array (VLBA). Our results suggest that the observed core sizes are a factor $\sim11$ larger than those of the VLBA, which is similar to the factor expected by considering the different resolutions of the two facilities. We suggest the use of this factor to account for blending effects in KVN measurements. Other parameters, such as flux density or brightness temperature, seem to possess a more complicated dependence.

INTRODUCTION The observable morphology of a typical radio-loud active galactic nucleus (AGN) consists of i) the core, an optically thick region classically defined by the τ = 1 surface, and ii) an optically thin jet or emitting region. In general, due to resolution limitations, the observed radio core is actually a blending of the actual AGN core and the upstream regions of the optically thin flows. This becomes very evident when studying the phenomenology of AGNs with instruments achieving finer resolution limits: what was first conceived as the core region can now be seen to contain further structure, consisting of a smaller core and additional emission or jet components. One of the long-standing problems in the study of these objects is that, even at resolutions of a few milliarcseconds, provided by interferometric techniques such as VLBI, this core-jet blending effect can still be significant, as proven by observations with the GMVA, VSOP or RadioAstron, which can resolve even further structure from what was considered to be the VLBI core (see, e.g., Boccardi et al. 2016; Asada et al. 2016; Gómez et al. 2016). This blending effect can greatly affect the observables, such as polarization, core size, or the frequency dependence of the core shift, as contributions from both the core and the innermost jet regions are integrated together. Thus, such blending has to be treated carefully, and an analysis has to be done to properly understand its effects on our observable quantities.

Various methods have been used in the literature to consider such blending effects and their implications in the data analysis. One approach is to consider observations with better resolution and to compare the observables, such as the morphology or flux density, in order to understand how the different resolution may affect them (e.g. Kovalev et al. 2008; Pushkarev et al. 2012). Although better resolutions can be obtained with increasing frequency, the physical properties of the source may also be different at different frequencies (due to, e.g., opacity effects), and thus different arrays or, if possible, different array configurations observing at the same frequency are preferred for such analysis.
A different approach consists of the convolution of a pre-existing high-resolution map with a larger beam size, or the flagging of data at various UV-distances in the interferometric UV-plane, to simulate a map of lower resolution (e.g. Hovatta et al. 2014). A third approach includes the analysis of Monte Carlo simulations on a predefined model (e.g. Mahmud et al. 2013).

The Korean VLBI Network (KVN) is a unique interferometric array located in the Korean peninsula. Consisting of three 21-m antennas equipped with a multi-band receiver system, it can observe at 22, 43, 86 and 129 GHz, higher frequencies than most other VLBI networks, simultaneously. With baselines between 305 and 476 km, the achievable resolutions at these frequencies can reach about 1 mas, which makes the KVN an excellent tool to resolve the innermost regions of AGNs. Nonetheless, as we have mentioned, for a robust analysis it is necessary to first consider the blending effects that can be expected from the KVN view. A pioneering work discussing KVN source blending issues is that of Rioja et al. (2014). Although their work mainly focuses on astrometric issues, they include an analysis of source structure effects in the KVN, including structure blending effects as compared with both high- and matched-resolution VLBA images. As they find, this blending has a large impact on astrometric measurements, becoming the dominant source of errors in astrometric measurements of extended sources. Its magnitude, however, seems to differ case by case, suggesting that a large sample is needed for a proper study.

In this paper we study the core properties, such as the core size or core brightness temperature, of several AGNs observed with the KVN and compare them with observations with the Very Long Baseline Array (VLBA), an array which offers better angular resolution. In this way, we estimate the core-jet blending and its effects on such observables. Kim et al. (2019) will present a parallel analysis using a different approach. The paper is organized as follows: in Section 2 we describe the observations used for our analysis. In Section 3 we show our results and comparison. In Section 4 we discuss these and possible implications. A summary can be found in Section 5.

Data Selection Criteria In order to study possible core blending effects, we intend to compare our KVN measurements with other VLBI observations of the same sources capable of clearly resolving components or features upstream of the jet that may be blended with the KVN core. Given the strong variability of these objects, with timescales of even just a few days (see e.g. Wagner & Witzel 1995), the comparison should ideally be performed with (quasi-)simultaneous observations. For consistency and robustness of the results, one should ideally include various epochs, not too sparse in time, if possible. However, observations performed (quasi-)simultaneously with different arrays are not always possible, except for very particular cases where the typical VLBI dynamic scheduling is superseded by a strong science case for very particular sources or event scenarios. An alternative approach to circumvent these limitations is to investigate data obtained over an extended period of time and, if the relative observed variability is not significant, consider the mean values of the observables.

To analyze the KVN data, we used data from the iMOGABA (interferometric monitoring of gamma-ray bright AGNs; see Lee et al. 2013; Algaba et al. 2015; Lee et al.
2016) program, which observes a total of 30 well-known AGNs with a mean cadence of about a month at 22, 43, 86 and 129 GHz. This is, to date, the best source of AGN monitoring data with the KVN array. Unfortunately, there are very few VLBI multi-epoch observations or monitoring programs available for comparison. In particular, the authors are not aware of any VLBI program at 22 or 86 GHz with good cadence. Similarly, the situation at 129 GHz is very tricky, since this is not a common VLBI observing frequency. Only at 43 GHz does the Boston University VLBA-BU-BLAZAR program provide an excellent systematic monitoring of AGNs. Consequently, in this paper we will focus on the comparison of KVN iMOGABA 43 GHz data with the VLBA-BU-BLAZAR program which, thanks to its much larger baselines, provides a resolution better by a factor of ∼18. Although not complete in terms of frequency space nor in the framework of source characteristics, this will provide a fundamental first test to understand the KVN core blending effects. With this in mind, we will leave the analysis of the other KVN frequencies for a forthcoming study.

One of the most immediate observables affected by blending effects is the core size. Indeed, as the observed VLBI core is a combination of the actual τ = 1 surface (the actual core) and the innermost unresolved regions of the jet, blending plays a significant role in its resulting observed size. The larger the innermost jet regions merged with the core due to blending, the larger the apparent observed size of the core will be. Being one of the more direct quantities that can be easily measured in VLBI observations, the core size thus seems like an ideal proxy to study the blending effects. Similarly, the flux density is a very straightforward observable that can also be affected by the area measured. Finally, the brightness temperature combines these two factors and has proven to be a quantity of great importance in high resolution mapping (see, e.g., Bruni et al. 2017; Kardashev et al. 2017; Pilipenko et al. 2018). Thus, in this work, we will consider these three different observables.

The Data Information on the data obtained from the VLBA-BU-BLAZAR program, including the model-fitting and the properties of the VLBA core flux density and size, is summarized in Jorstad et al. (2017). We note, however, that although this program is still ongoing, Jorstad et al. (2017) limits its analysis to epochs prior to June 2013. In some cases, we were able to access the later public data and continue the model-fitting of the source for subsequent epochs (such as for, e.g., 1633+382; see Algaba et al. 2018a,b, for further details). The VLBA-BU-BLAZAR program does not contain information about M87. For this source, we used VLBA data from Hada et al. (2013), which contain a total of 7 epochs of observations at 43 GHz, once upper limits of the core size are excluded from the analysis. No VLBI core flux densities are given in that paper, so we used a fiducial value of 0.7 Jy (Ly et al. 2007; Walker et al. 2018).

Regarding the KVN data from the iMOGABA program, only a handful of iMOGABA sources have currently been analyzed in depth; other sources are still being investigated or under analysis. Data for 0716+714 are publicly available in Lee et al. (2017); data for 1156+295 are described in Kang et al. (2018); data for 1633+382 are summarized in Algaba et al. (2018a,b); data for M87 can be inspected in Kim et al. (2018); and data for BL Lac are investigated in Kim et al. (2017).
In order to obtain state-of-the-art information regarding the properties of the rest of the sources, an iMOGABA model-fitting Difmap script has been implemented (Hodgson et al. 2016). In a nutshell, this script finds the best model based on a single circular Gaussian to fit the core. This is expected to work well given that the iMOGABA sources at 43 GHz are mostly either point-like or core-dominated. KVN uncertainties at 43 GHz should be close to 10%; a detailed discussion is provided in Lee et al. (2016). In Figure 1 we show the results obtained with this script compared with the more robust bona fide manual analysis, which may include, in some cases, additional components. It is clear that, considering their uncertainties, the flux densities obtained with the script are quite reliable and follow well those obtained with a more careful analysis. The core sizes also seem to roughly match, except for the cases of extended structures with significant flux, such as those of 0716+714 or BL Lac, where the script overestimates the size. Nonetheless, such a difference is only a factor of 2 at most, which is not dramatic for our study and, as mentioned earlier, occurs in only very few cases. We thus consider that, in a statistical sense, the script works well for our purposes here.

RESULTS Flux densities and core sizes were obtained for 25 sources. Brightness temperatures were calculated using the relationship T_b = 1.22 × 10^12 S(1 + z)/(43^2 d^2), where S and d are the model-fitted core flux densities and core sizes, respectively, and 43 is the observing frequency in GHz. The compiled data can be examined in Figure 2. In general, we were not able to obtain (quasi-)simultaneous data, and there is a gap between the VLBI and KVN data for most of the sources, except for the cases of 1633+382 and 1156+295. It seems, however, that whilst a certain variability, inherent to these sources, is still clear, its dispersion in terms of the quantities observed here is not too critical for our purposes. In fact, the dispersion seems to be of the same order as the respective measured quantities, if not smaller. On the other hand, comparison between the KVN and VLBI arrays shows that, whereas the measured flux densities appear to be quite similar for many sources, the measured core sizes and brightness temperatures differ between the two arrays by at least an order of magnitude. Table 1 summarizes, in a quantitative manner, the median values of flux density and core size calculated among all sources and their dispersion.

In Table 2 we summarize some relevant source properties and the average quantities obtained from our data. Columns 1, 2 and 3 show the source name, its redshift and its viewing angle, from Hovatta et al. (2009). Columns 4, 5 and 6 indicate the core flux S under the VLBA and KVN perspectives, and a compactness factor, f_S = S_VLBA/S_KVN. Columns 7, 8 and 9 show the core size d under the VLBA and KVN perspectives, and the core size ratio f_d = d_VLBA/d_KVN. Similarly, columns 10, 11 and 12 show the core brightness temperature T_b under the VLBA and KVN perspectives, and the core brightness temperature ratio f_Tb = T_b,VLBA/T_b,KVN. Statistical values for the fractional quantities are shown in Table 4, and a histogram is shown in Figure 3.
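As a direct transcription of the brightness-temperature relation above, the sketch below computes T_b for the two arrays and the ratio f_Tb; the numerical values are purely illustrative assumptions, not entries from the tables.

```python
def brightness_temperature(S, d, z, nu=43.0):
    """T_b = 1.22e12 * S * (1 + z) / (nu**2 * d**2), with S in Jy, d in mas, nu in GHz."""
    return 1.22e12 * S * (1.0 + z) / (nu**2 * d**2)

# Illustrative (hypothetical) numbers for one source under the two arrays:
tb_vlba = brightness_temperature(S=0.6, d=0.05, z=0.3)
tb_kvn = brightness_temperature(S=1.0, d=0.55, z=0.3)
print(f"f_Tb = {tb_vlba / tb_kvn:.1f}")   # T_b ratio, cf. columns 10-12 of Table 2
```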
-3C84: A significant number of upper limits was found for the core size of this source, suggesting that the actual core is much smaller than what the KVN can resolve at 43 GHz. As a consequence, no brightness temperatures were calculated for such upper limits. -0420-014: The compactness factor is $f_S > 1$ for this source. However, the observed VLBA flux seems to connect well with the KVN flux. We consider that this larger value is more likely caused by variability effects. -0528+134: The observed VLBA flux of this source appears to be larger than its KVN flux. Given that the resolution of the KVN is coarser, so that a priori the KVN core flux should include a larger region and, if anything, lead to larger flux densities, we consider the large compactness factor $f_S > 1$ for this source to also be due to variability effects. -OJ287: The flux density of this source seems to be steadily increasing. As a consequence, even though the VLBA flux smoothly connects with the KVN flux, suggesting a ratio close to unity, the actual VLBA and KVN median values appear to be different, leading to a fiducial compactness of 0.4. -1222+216: The flux density seems to be significantly variable in this source, with two clear maxima in the data. There is a local minimum located in the VLBA data, which may bias the compactness of this source to a value lower than would otherwise be expected. -3C454.3: Significant flux variability is found in this source in the VLBA data, with several minima found. The KVN data suggest a more stable flux density after a small increase.

DISCUSSION

As expected, due to the comparatively poorer resolution of the KVN, both the observed flux densities and core sizes appear to be larger than those from the VLBA. In the case of the flux densities, the factor $f_S$ provides information about the compactness of the source. Given an extended structure, integration of the flux over a larger region will result in a larger value, but not all of that flux actually arises from the compact region probed by the larger array. In our case, $f_S \sim 0.6$ suggests that, on average, the VLBA observes roughly only 60% of the KVN flux or, conversely, that 40% of the flux considered to arise from the KVN VLBI core may be emitted in other regions (although a proper study using convolved images should confirm this; see Kim et al. 2019). Note, however, that this value varies dramatically from source to source, as shown by the various quartiles. In some sources (e.g., 3C84, 0735+178, OJ287, 3C273B, 3C345), more than half of the KVN core flux can be attributed to blending effects, whereas in more compact ones (e.g., 0716+714, 0836+710, 1308+326) most of the KVN flux seems to arise from the core regions. This is in agreement with the discussion in Rioja et al. (2014), who suggest that the magnitude of the blending strongly depends on the source. Maximum baselines for the VLBA are of the order of 8611 km (between the Mauna Kea and Saint Croix antennas), leading to resolutions of the order of 0.17 mas at 43 GHz. On the other hand, maximum KVN baselines are of the order of 476 km (between the Tamna and Yonsei antennas), leading to resolutions of about 3.0 mas. This is an improvement in resolution by a factor of ∼18 when using the VLBA at 43 GHz. We note, however, that the maximum baselines (and hence the smallest beam sizes) will occur only in ideal cases; the actual baselines depend on several other factors, such as source elevation.
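These numbers follow directly from the diffraction limit θ ≈ λ/B; a quick check (ours, using only the maximum baselines quoted above, and thus ignoring weighting and UV coverage as noted above):

```python
# Diffraction-limited resolution theta ~ lambda / B_max, converted to
# milliarcseconds, for the maximum baselines quoted above.
C = 299_792_458.0                                  # speed of light, m/s
RAD_TO_MAS = 180.0 / 3.141592653589793 * 3600.0 * 1000.0

def resolution_mas(freq_ghz, baseline_km):
    wavelength_m = C / (freq_ghz * 1e9)
    return wavelength_m / (baseline_km * 1e3) * RAD_TO_MAS

print(resolution_mas(43.0, 8611.0))   # VLBA, Mauna Kea - Saint Croix: ~0.17 mas
print(resolution_mas(43.0, 476.0))    # KVN, Tamna - Yonsei:           ~3.0 mas
print(8611.0 / 476.0)                 # baseline (resolution) ratio:    ~18
```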
In order to investigate this in detail, we have performed the following check: we obtained typical beam sizes from the VLBA data (Jorstad et al. 2017) and the iMOGABA data and computed the fraction $f_{\rm beam} = {\rm beam(VLBA)}/{\rm beam(KVN)}$ on a source-by-source basis. Once we consider these more realistic beam sizes, rather than the maximum baselines, the resolution factor between the VLBA and KVN becomes $\sim 1/10$, which is much closer to the 50% quartile of $f_d$. We thus consider that, although it may have a slight effect, the baseline length is not significantly affecting our results. Indeed, no significant correlation is found between $f_{\rm beam}$ and $f_S$ (Pearson $r = 0.07$), and only a moderate one between $f_{\rm beam}$ and $f_d$ ($r = 0.47$). Additionally, the normalized standard deviation $\sigma(f_{\rm beam}/f_{\rm beam}^{\rm max}) = 0.07$ is much smaller than the normalized standard deviations of $f_S$ and $f_d$ (0.23 and 0.21, respectively). We have checked that the beam size for the BU-VLBA observations can vary by about 11%. Similarly, the beam size for the iMOGABA observations varies by about 10%. This is smaller than the dispersion that we find in the factors $f_S$ and $f_d$, which can vary by about an order of magnitude. This suggests that the dispersion found in the values of $f_S$ and $f_d$ may have a different origin. The derived brightness temperature is also severely affected, as it combines the two factors $f_S$ and $f_d$. In principle, as we use a larger array, we should be able to probe smaller regions. At the same time, if the source were uniform, we would also observe smaller flux densities in the proportion $S \propto d^2$, thus leading to a similar $T_b$. However, this is clearly not the case for AGNs in general, which have a brighter core and suffer blending effects. In our case, on average, VLBA fluxes are ∼60% of their KVN values, but VLBA sizes are only ∼9% of theirs. This large disproportion results in brightness temperatures between $10^1$ and $10^{2.5}$ times larger with the VLBA array.

Origin of the Dispersion in $f_S$ and $f_d$

The dispersion that we find in the fractional values seems to be quite significant and cannot be attributed only to the measurement uncertainties. Knowing the origin of such a large dispersion is crucial for assessing a proper factor when accounting for core properties with the KVN. It is possible that the source itself plays an important role through blending effects. On one hand, the derived parameters for each source may be intrinsically different due to i) compactness (an increase in resolution would not significantly affect the observables of a compact source), ii) small viewing angles (leading to components appearing closer in projection), or iii) different redshift (features would appear smaller). Based on this, we consider any possible dependence of the size and brightness temperature factors $f_d$ and $f_{T_b}$ on the compactness $f_S$, redshift $z$ and viewing angle $\theta$ of the sources. In Figure 4 we show the proposed correlations, including the Pearson correlation coefficient $r$. Inspection of the figure clearly shows that there seems to be no relevant dependence of the blending on compactness, redshift or viewing angle (note that some fiducial correlation appears due to observational bias, $\theta$ vs $z$, or parameter dependence, $f_{T_b}$ vs $f_d$). Alternatively, since blazars are highly variable both in structure and in flux, the blending effect is highly time-dependent. Such variability may produce changes, or even spurious values, in the measured fractional parameters $f_S$ and $f_d$ if the comparison is not made properly.
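One way to guard against this, used below, is to compare only quasi-simultaneous epochs. A minimal matching sketch follows; the function name, threshold, and epoch values are our own illustration, not taken from the two monitoring programs:

```python
import numpy as np

def quasi_simultaneous_pairs(mjd_kvn, mjd_vlba, max_sep_days=30.0):
    """Pair each KVN epoch with the nearest VLBA epoch, keeping only
    pairs separated by less than max_sep_days (illustrative threshold)."""
    pairs = []
    for i, t in enumerate(mjd_kvn):
        j = int(np.argmin(np.abs(mjd_vlba - t)))
        if abs(mjd_vlba[j] - t) <= max_sep_days:
            pairs.append((i, j))
    return pairs

# Placeholder epochs (MJD); real values come from the two programs.
mjd_kvn  = np.array([56650.0, 56700.0, 56760.0, 56900.0])
mjd_vlba = np.array([56648.0, 56905.0, 57000.0])
print(quasi_simultaneous_pairs(mjd_kvn, mjd_vlba))  # [(0, 0), (3, 1)]
```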
Since the two data sets used here, KVN and VLBA, are not fully coincident in time, variability effects may play a major role. Indeed, although we considered observations over a long period of time to circumvent the need for quasi-simultaneous observations, we found that, for all sources, the flux density standard deviation was larger than 15% of the median flux, suggesting significant variability. In particular, we note that i) the most extreme value, $f_d = 0.03$, corresponds to 1222+216, which has been noted above to suffer an observational bias, and ii) the sources with quasi-simultaneous VLBA and KVN data (0716+714, 1156+295, and 1633+382) seem to have similar $f_d$ values. Thus, analysing data from non-overlapping time ranges may have introduced uncertainties and probably biases. To address this, we consider the sources mentioned above, 0716+714, 1156+295 and 1633+382, and focus on the epochs in which the KVN and VLBA data overlap in time. In Table 3 we summarize the new quasi-simultaneous median quantities for these sources. It seems that the values of $f_d$ for these sources, which were already close, become even more similar when only quasi-simultaneous data, rather than the full data range, are considered. As a different check, we also considered the effect of removing the quasi-simultaneous data for these sources and performed various tests flagging VLBA data near in time to that of the KVN (for example, flagging VLBA data after MJD > 56000, as a simple case). We found that the values of $f_d$ changed significantly, with cases where $f_d = 0.19$ for 1633+382, or $f_d = 0.14$ for 0716+714. This suggests that variability effects are indeed the source of the dispersion in the $f_d$ values. We thus suggest a blending factor for the KVN of $f_d \sim 0.09$. It seems, on the other hand, that $f_S$ is intrinsically source-dependent, and there does not seem to be a simple recipe for estimating this parameter a priori. Not only the compactness of the source but a full analysis of its structure should be taken into account. Furthermore, the ejection of new components, possibly associated with γ-ray flares, may alter the innermost structure of the source, and a more methodical study, beyond the scope of this work, should follow (see e.g. Rioja et al. 2014). Additionally, given its dependence on the flux, it will not be straightforward to find a common factor for $f_{T_b}$ either.

Extrapolation to Other Frequencies

Regarding the core size, the factor $f_d = 0.09$ considered here is in agreement with the value expected from the array resolution once we consider the actual baselines and UV coverage during the observations. It is thus reasonable to consider that, at different frequencies, this factor will scale accordingly. On the other hand, given the characteristics of $f_S$ and hence $f_{T_b}$, considerations for the brightness temperature may not be as straightforward. In Lee (2014), it is discussed that, in general for AGNs, $T_b \propto \nu^{\xi}$, with $\xi = +2.6$ below a critical frequency $\nu_c$, which corresponds to the peak frequency of the spectrum. Beyond this frequency, $\xi \sim -1$ for a decelerating jet model and $\xi \sim +1$ for a rapidly accelerating jet model. In that work, it was found that the brightness temperature seemed to decrease with frequency as $T_b \propto \nu^{-1.2}$ for $\nu > 9$ GHz, favoring the decelerating jet model. However, in Lee et al.
(2016), the median brightness temperatures increase from $T_b = 10^9$ K at 22 GHz to $T_b = 7.4 \times 10^9$ K at 129 GHz, i.e., by almost an order of magnitude. Furthermore, the observed frequency dependence in Lee (2014) was significantly different from the predictions. These apparent inconsistencies could potentially be due to the blending effects discussed here. Only once these are understood can the physical model be truly tested. From our results above, we can consider that, statistically speaking, the actual brightness temperature at 43 GHz may be a factor of ∼50 larger than the one observed with the KVN. If we assume an accelerating jet, it may be possible for the blending effects to be similar or larger at higher frequencies. However, under the assumption of a constant-speed or decelerating jet, blending effects on the KVN should decrease at higher frequencies.

CONCLUSIONS

We investigate core blending effects on AGNs as seen by the KVN by comparing the properties of a sample of 25 sources when observed with the KVN and VLBA arrays. For this purpose, we collected data at 43 GHz from the KVN iMOGABA program and the 43 GHz VLBA-BU-BLAZAR program and supplemented them with some additional observations. Although the two data sets are not fully coincident in time, we consider various cases where quasi-simultaneous observations exist and study their effects on the discussed quantities. Our results suggest that, on average, when observed with the KVN the core flux densities are larger ($f_S = S_{\rm VLBA}/S_{\rm KVN} = 0.6$), the core sizes are larger ($f_d = 0.09$), and the brightness temperatures are lower ($f_{T_b} = 59$). These factors are compatible with the a priori expectations based purely on the arrays' different resolutions. Note, however, that although a common blending factor $f_d$ would suffice to characterize the KVN with respect to other VLBI arrays, there is a significant scatter in the fractional values for the flux density $f_S$ and the brightness temperature $f_{T_b}$. Such scatter may be attributed to the particular properties of each source, as suggested by the previous results of Rioja et al. (2014). We thus suggest that a factor $f_d = 0.09$ could be used to scrutinize KVN core size blending effects when comparing the VLBA and KVN at 43 GHz. Otherwise, a source-dependent factor can also be estimated. We discuss considerations regarding the AGN jet model and possible implications of the relative magnitude of the blending effect at different frequency bands. Further work, including simulations and observations with matched spatial resolutions, will be the topic of future research.
Symmetry group of a particle in an impenetrable cubic well potential

A quantum particle in an impenetrable cubic well potential presents accidental degeneracy when the $O_h$ group is considered to be the symmetry group of the system. This degeneracy becomes natural when a new symmetry group, embedding the $O_h$ group, is proposed. This new group turns out to be the semidirect product $G = T \wedge O_h$, where $T$ is a two-dimensional compact continuous group whose generators correspond to linear combinations of the one-dimensional Hamiltonians. The systematic degeneracy is studied in detail, the new group is identified, and its irreducible representations (irreps) are constructed by means of induction, an approach that allows the irreducibility and completeness to be assured. Pythagorean degeneracy, as well as that due to commensurable sides, is not considered.

Introduction

The degeneracy degree is expected to correspond to the dimension of one of the symmetry group's irreducible representations (irreps) [1]. When this is not the case, the degeneracy is identified as accidental. Instances of accidental degeneracy are abundant in quantum mechanics [1], but in any case, experience shows that the existence of this kind of degeneracy suggests an overlooked higher symmetry. The particle in a square or cubic box with impenetrable walls is the simplest quantum mechanical system in which accidental degeneracy appears; however, these systems receive only a very brief mention in the literature [6,7,8]. This may be explained by the fact that the natural language of symmetry is group theory, a specialized field not included in most textbooks of quantum mechanics. Nevertheless, in 1996 Leyvraz et al. [9] derived a new symmetry group to explain the accidental degeneracy of a square box from the group-theoretical point of view. A more interesting system presenting systematic degeneracy is a quantum particle in an impenetrable cubic well potential, because it provides a deeper insight into the mathematics and physics of the problem. The description of a particle in a cubic box allows the systems of square and parallelepiped well potentials to be studied as a symmetry breaking process, in such a way that these cases can be analyzed as a mathematical subduction problem. The simultaneous analysis of these systems provides a clear example of the fact that the greater the symmetry, the higher the degeneracy. A free particle enclosed in an impenetrable three-dimensional box of sides $a$, $b$ and $c$, as displayed in Figure 1, is described by the eigenstates $\langle \mathbf{r}|\Psi_{n_1 n_2 n_3}\rangle = \psi_{n_1}(x)\,\psi_{n_2}(y)\,\psi_{n_3}(z)$, where $\psi_{n_1}(x) = \sqrt{2/a}\,\sin(n_1 \pi x/a)$, $\psi_{n_2}(y) = \sqrt{2/b}\,\sin(n_2 \pi y/b)$ and $\psi_{n_3}(z) = \sqrt{2/c}\,\sin(n_3 \pi z/c)$, with $n_i$ positive integers. These states satisfy the condition $\Psi_{n_1 n_2 n_3}(x, y, z) = 0$ at the boundaries of the box. The corresponding eigenvalues are given by $E_{n_1 n_2 n_3} = \frac{\hbar^2 \pi^2}{2\mu}\left(\frac{n_1^2}{a^2} + \frac{n_2^2}{b^2} + \frac{n_3^2}{c^2}\right)$, where $\mu$ is the reduced mass. The energies and eigenstates by themselves do not provide information about the kind of degeneracy displayed; this must be established through the proposition of a symmetry group. When $a = b = c$ we have a cubic box that presents three types of subspaces of degenerate functions. For $n_1 = n_2 = n_3 = n$ we have one-dimensional subspaces $L_1 = \{|\Psi_{nnn}\rangle\}$; when only two quantum numbers are equal we have three-dimensional subspaces given by $L_3 = \{|\Psi_{nnm}\rangle, |\Psi_{nmn}\rangle, |\Psi_{mnn}\rangle\}$ with $n_1 = n_2 = n \neq m$; and when all the quantum numbers are different, six-dimensional subspaces are described by $L_6 = \{|\Psi_{n_1 n_2 n_3}\rangle;\ n_1 \neq n_2 \neq n_3\}$.
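As a quick numerical illustration of these three types of subspaces (a check of ours, not part of the paper), one can enumerate the degeneracies of $E \propto n_1^2 + n_2^2 + n_3^2$ for the cubic box:

```python
from collections import Counter
from itertools import product

# Degeneracy count for the cubic box, where E is proportional to
# n1^2 + n2^2 + n3^2 (in units of hbar^2 pi^2 / (2 mu a^2)).
N = 8
levels = Counter(n1**2 + n2**2 + n3**2
                 for n1, n2, n3 in product(range(1, N + 1), repeat=3))

for e in sorted(levels)[:8]:
    print(e, levels[e])
# 3 -> 1 (L1, |nnn>), 6 -> 3 (L3, permutations of |nnm>),
# 14 -> 6 (L6, all quantum numbers distinct). Pythagorean coincidences,
# e.g. 27 = 1 + 1 + 25 = 9 + 9 + 9 (degeneracy 4), are excluded above.
```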
New symmetry group

To identify the symmetry group we shall first consider the apparent geometrical symmetry of this system; such symmetry corresponds to the point group $O_h$, and hence the eigenstates are expected to carry irreps of this group. The identification of the irreps, however, cannot be achieved straightforwardly, since the eigenstates are given with respect to the origin $O$, while the center of the parallelepiped is located at $O'$. Both origins are related by the translation $\hat{O}_T$ (see Table 1), and the reduction assuming this group as the symmetry group is given in Table 2. For $L_3$, systematic accidental degeneracy appears when $n$ and $m$ are both even or both odd. Furthermore, the sixfold degeneracy appearing in the $L_6$ subspaces cannot be explained in the context of the $O_h$ symmetry group [10] because, according to the character table, the largest expected degeneracy degree is three. An interesting peculiarity of this subspace is that a multiplicity of 2 appears for both representations $E_g$ and $E_u$ (Table 2). The first step in establishing a new symmetry group is to identify the operator or operators that connect the subspaces spanning different $O_h$ irreps. To achieve this goal we look for an operator $\hat{F}^{(\rho)}_r$ spanning the irrep $\rho$ and satisfying that $\langle \Psi^{\Gamma'}_{\gamma'} | \hat{F}^{(\rho)}_r | \Phi^{\Gamma}_{\gamma} \rangle$ vanishes unless $\Gamma' \in \rho \otimes \Gamma = \bigoplus_{\mu} \mu$, where $\Gamma'$ and $\Gamma$, with components $\gamma'$ and $\gamma$ respectively, are the irreps spanned by the kets. It is not difficult to check that such an operator must carry the irrep $E_g$, and consequently $\rho = E_g$ [19]. From the $O_h$ character table [19] we notice that the Cartesian harmonics $2z^2 - x^2 - y^2$ and $x^2 - y^2$ (up to normalization) span the irrep $E_g$. The same linear combinations in terms of the squared momenta transform according to $E_g$ in both reference systems [9,10]. We thus have the operators $\hat{F}_1 = 2\hat{p}_z^2 - \hat{p}_x^2 - \hat{p}_y^2$ and $\hat{F}_2 = \hat{p}_x^2 - \hat{p}_y^2$, where the components have been adapted according to the chain $O_h \supset D_{4h} \supset D_{2h}$. These operators explain the accidental degeneracy, because they connect degenerate states carrying different $O_h$ irreps. Once the accidental degeneracy, as well as the operators that connect the degenerate states, has been identified, we are ready to propose a new symmetry group for this system. The operators generate the continuous group $T$, whose elements are obtained by exponentiation and form a two-dimensional group with elements $\hat{U}(\alpha, \beta) = e^{i\alpha \hat{F}_1 + i\beta \hat{F}_2}$. The corresponding Casimir operator of this group is the Hamiltonian itself. Considering the transformation of these elements under the action of $R \in O_h$, we realize that the subgroup $T$ is invariant in the context of the new group $G$. This fact allows the new group to be expressed as the semidirect product $G = T \wedge O_h$ and, in terms of left cosets, $G = \sum_{R \in O_h} \hat{O}_R\, T$. Therefore all the elements $g \in G$ can be written in the form $g = \hat{O}_R\, \hat{U}(\alpha, \beta)$, with the product of two such elements again of this form.

Representations of the group G

Once we have identified the new symmetry group, we proceed to construct the irreps following the induction approach; this method assures the irreducibility and completeness of the representations [11]. According to this, we first have to construct the irreps of the invariant subgroup $T$. The representation of the $T$ elements is diagonal in the basis of the cubic-box eigenfunctions. Consider the six-dimensional space $L_6 = \{|\Psi_{n_1 n_2 n_3}\rangle\}$. The action of the subgroup $T$ elements on the state $|\Psi_{n_1 n_2 n_3}\rangle$ is given by $\hat{U}(\alpha, \beta)|\Psi_{n_1 n_2 n_3}\rangle = D^{(\mathbf{k}_n)}(U(\alpha, \beta))\, |\Psi_{n_1 n_2 n_3}\rangle$, where $D^{(\mathbf{k}_n)}(U(\alpha, \beta)) = e^{i\alpha k^{(1)}_{n_1 n_2 n_3} + i\beta k^{(2)}_{n_1 n_2 n_3}}$ and the vectors $\mathbf{k}_n = (k^{(1)}_{n_1 n_2 n_3}, k^{(2)}_{n_1 n_2 n_3})$, defined in Eq. (10), will be considered to be written in an orthonormal basis.
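To see why each of the six permuted states spans its own $T$ irrep, the following sketch evaluates eigenvalue combinations patterned on the $E_g$ harmonics $2z^2 - x^2 - y^2$ and $x^2 - y^2$; the paper's exact vectors are those of its Eq. (10), so the normalization here is an assumption made for illustration only:

```python
from itertools import permutations

# Assumed eigenvalue combinations patterned on the E_g harmonics
# (normalization omitted); the paper's own vectors are in its Eq. (10).
def k_vector(n1, n2, n3):
    return (2 * n3**2 - n1**2 - n2**2, n1**2 - n2**2)

states = list(permutations((1, 2, 3)))        # one L6 multiplet
vectors = {s: k_vector(*s) for s in states}
print(vectors)
assert len(set(vectors.values())) == 6        # six distinct T irreps
```

All six $(k^{(1)}, k^{(2)})$ pairs are distinct, consistent with each function in $L_6$ spanning a different representation of $T$.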
It is appropriate to add the label $\mathbf{k}_n$ to the state in the form $|\Psi^{\mathbf{k}_n}_{n_1 n_2 n_3}\rangle$. If we apply the elements of $T$ to the other five states of $L_6$, we obtain five additional representations associated with the functions of permuted indices, in accordance with the permutations of $S_3$. In this context, each function in the space $L_6$ spans a different representation. These irreps are the starting point for obtaining the irreps of the complete group $G$ by induction. The $T$ irrep $\mathbf{k}_1$ is invariant under the action of the elements of $T$ as well as the elements of the subgroup $D_{2h}$. This set of transformations forms a group, the so-called little group of $\mathbf{k}_1$, denoted by $K(\mathbf{k}_1)$, in this case given by $K(\mathbf{k}_1) = T \wedge D_{2h}$. The little group of $\mathbf{k}_1$, however, is still infinite. To work with a group of finite order we note that the action of the elements of the group $T$ is diagonal over the basis $L_6$, showing that the left coset expansion is the relevant expansion in constructing the irreps. This expansion indicates that every element $g \in O_h$ may be expressed as a product of the form $g = g_i h$, with $h \in D_{2h}$, in accordance with the explicit coset expansion $O_h = \sum_i g_i\, D_{2h}$. To obtain the irreducible representations by induction, we start by projecting the state $|\phi^{\mathbf{k}_1}\rangle$ onto irreps of the little cogroup $D_{2h}$, giving rise to the states $|\phi^{\mathbf{k}_1}; \Gamma, \gamma\rangle$. Finally, the induction is carried out, and the representations so obtained are irreducible and complete. They carry two labels: the one corresponding to the irrep $\mathbf{k}_1$ of $T$, and the irrep $\Gamma$ of the little cogroup $D_{2h}$. We now proceed to generate the irreps of the group $G$. For the elements of $T$, the matrix representation in the new basis is diagonal, as we have already noted. Let us now consider the point operations. In order to obtain the representation of the generator $C_4(x)$, for instance, the basal representation must be used; its matrix elements involve $\chi^{\mu}(h)$, the irrep (in this case the character) of the element $h \in D_{2h}$. It should be clear that $\chi^{\mu}(E) = 1$ for all $\mu$. In the same way we obtain the representation of the generator $C_4(y)$. Finally, the representation $D(I)$ is diagonal with elements $\chi^{\mu}(I) = (-1)^{n_1 + n_2 + n_3 + 1}$. We have thus constructed the representation of the group $G$ associated with the $L_6$ subspaces. Formally, we have induced the representations of $T$ through the little cogroup [11]. Let us now consider the subspace $L_3$. We proceed to identify the little group of $\mathbf{k}_1$. Besides the elements of the subgroups $T$ and $D_{2h}$, the transformation $\sigma_d$ also keeps the irrep $\mathbf{k}_1$ invariant. We note that $D_{4h} = D_{2h} + \sigma_d D_{2h}$, and consequently the little cogroup is now $D_{4h}$; in explicit form we select the corresponding coset expansion of $O_h$ in terms of $D_{4h}$. Again, the basal representation is needed to obtain the matrices of the generators. We also have the one-dimensional space $L_1$; for this case the associated representation of $\hat{U}$ carries the irrep $\mathbf{k} = 0$, which means that $\hat{U}(\alpha, \beta)|\Psi^{0}_{nnn}\rangle = |\Psi^{0}_{nnn}\rangle$, and consequently the little group of $\mathbf{k}$ coincides with the group $G$ itself. The little cogroup is then given by $O_h$, with the basal representation given by the one-dimensional unit matrix. Hence the states $|\Psi^{0}_{nnn}\rangle$ span the irreps of the octahedral group. The analysis of the system when the symmetry is broken is presented in Figure 2. If in our previous analysis of the cubic box we introduce the condition $a = b \neq c$, then some of the transformations of the group $T \wedge O_h$ no longer leave the Hamiltonian invariant. We thus have to identify the new symmetry group by subduction. For a rectangular box the $z$-axis is no longer equivalent to the other axes; in the case $a \neq b \neq c$, the $\mathbf{k} = 0$ states are labeled by the group $D_{2h}$.
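For reference, the generic induction construction used throughout this section reads as follows; this is the standard group-theoretical formula (see e.g. ref. [11] on induced representations), not an equation quoted from the paper:

```latex
% Induced representation for a coset decomposition O_h = \sum_i g_i K,
% with little cogroup K and irrep D^{(\Gamma)} of K:
\[
\left[ D^{(\mathbf{k},\Gamma)}(g) \right]_{i\gamma',\, j\gamma}
  = D^{(\Gamma)}_{\gamma'\gamma}\!\left( g_i^{-1}\, g\, g_j \right)\,
    \delta\!\left( g_i^{-1}\, g\, g_j \in K \right),
\qquad g \in O_h ,
\]
% where the delta factor equals 1 when g_i^{-1} g g_j falls inside K and
% 0 otherwise, so each block row contains exactly one nonzero sub-block.
```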
Conclusions

The symmetry group of a particle inside an impenetrable cubic well potential has been identified as $G = T \wedge O_h$, where $T$ is a compact continuous group. Reducing the representations generated by the eigenstates allowed us to identify the systematic degeneracy. The continuous group was obtained by identifying, by means of the coupling coefficients, the operators that connect degenerate states spanning different $O_h$ irreps. The elements of the group $T$ are constructed by exponentiation of these operators. We have thus obtained the irreps of the symmetry group by the process of induction, an approach that assures the irreducibility and completeness of the representations. From this perspective the systematic accidental degeneracy becomes natural; by introducing the continuous group whose Casimir operator is the Hamiltonian, all levels are distinguished, as can be observed in Figure 2. The states of this system have been classified according to the chain of subgroups consistent with the reduction of symmetry that yields the square and rectangular boxes, allowing us to analyze the symmetry breaking in a natural way [20].
Conservative management of chylous leak after open radical nephrectomy in an adult patient: a case report and literature review

Chylous ascites is a rare but recognized complication of retroperitoneal surgeries, caused mostly by inadvertent trauma to lymphatic channels. In this article, we present a case report and literature review of an adult patient with a malignant tumor of the upper urinary tract who developed a chylous leak after open nephrectomy. We present the case of a 67-year-old female patient who presented to the urology clinic complaining of left loin pain and gross hematuria and was found to have an upper urinary tract tumor. She underwent open radical nephrectomy with lymph node dissection and postoperatively developed a chylous leak that was treated conservatively using octreotide and spironolactone, without the need for total parenteral nutrition. Conservative management should always be the first choice for managing chylous leak and chylous ascites. Careful anatomical identification and securing of the periaortic lymphatics are needed to decrease the risk of postoperative chylous leak and ascites.

Background

Chylous ascites, defined as the accumulation of lipid-rich lymph in the peritoneal cavity, is an uncommon complication after open radical nephrectomy. It carries significant morbidity and increases both the cost and the hospital stay [1]. Postoperative chylous ascites is related to inadvertent disruption of lymphatic channels during the surgery [2]. Because the incidence of postoperative chylous ascites is low, no level-one evidence exists on its treatment, though most specialists start with conservative management and reserve surgical management for refractory cases [1,2]. Most reported urological cases in the English literature are those after laparoscopic nephrectomy and donor nephrectomy. There are only a few reported cases of chylous ascites after open radical nephrectomy/nephroureterectomy, only one of which was performed for upper urinary tract urothelial carcinoma. To our knowledge, we present here the first case of chylous leak after open radical nephrectomy with lymphadenectomy for upper urinary tract urothelial carcinoma that was treated conservatively without the need for total parenteral nutrition (TPN).

Case presentation

A 67-year-old female patient presented to our clinic with a history of left loin pain and gross hematuria. She also had loss of appetite and significant weight loss over the last 3 months. She is a heavy smoker with a history of ischemic heart disease. She had no previous surgeries. Physical examination revealed a large palpable left upper quadrant mass. Her preoperative complete blood count, kidney function tests, electrolytes and liver function tests were within normal limits. A computed tomography (CT) scan revealed a large soft tissue mass arising from the left kidney (8 × 7 × 13 cm) with a few hilar necrotic lymph nodes (Figs. 1, 2). There was an ill-defined necrotic nodule of about 3.5 cm on the left adrenal gland. There were no other distant metastatic lesions, and the right kidney appeared normal. An open left radical nephrectomy was performed through an intercostal incision, in which the mass (involving the kidney and adrenal gland) and the hilar lymph nodes were removed en bloc. The peritoneum was opened and part of it excised with the mass. No lymphatic leak was noted intraoperatively, and a drainage tube was inserted.
Histopathology revealed high-grade urothelial carcinoma with sarcomatoid features, central necrosis, and perinephric fat invasion; two out of eleven perihilar and three out of nine perinephric lymph nodes were positive for malignancy. The patient was doing well and resumed an oral diet on the second postoperative day (POD). On the next day, she started to have whitish, milky drainage through the abdominal drain (250 ml/day). At this point we had two main differential diagnoses, chylous leak and pancreatic fistula, but the biochemical analysis was consistent with chyle. On POD 4, the patient was started on a low-salt, low-fat, high-protein diet, which resulted in a drop in the chylous output to about 100 ml/day. On POD 6, the patient was started on octreotide injections (0.1 mcg three times a day) and spironolactone. The chylous leak resolved, so the drainage tube was removed on POD 10, and the patient received a long-acting somatostatin analogue (20 mg given as a depot intramuscular injection). She was discharged and advised to stay on a low-fat diet for a further 3 weeks. Upon follow-up for 3 months, the patient was doing well, with no abdominal distension or leak from the drain site. To the best of our knowledge, only a few cases of chylous ascites after open nephrectomy/nephroureterectomy in adult patients with malignant pathology have been reported in the English literature. A summary of our literature review is provided in Table 1.

Discussion

Chylous ascites is the accumulation of triglyceride-rich lymph in the peritoneal cavity and can be classified as primary or secondary: primary chylous ascites is caused by congenital lymphatic system dysfunction, while secondary chylous ascites can be caused by a malignant process, infection, trauma, or surgery [10,11]. The incidence of chylous ascites is 0.6% after open donor nephrectomy and 2% after laparoscopic donor nephrectomy [12]. Some experts have hypothesized that the use of bipolar and other energy devices, rather than clipping, in laparoscopic surgery has contributed to the higher incidence of chylous ascites in laparoscopic compared to open renal surgery [13]. The lymphatic drainage of the abdomen, pelvis and lower limbs goes into the paralumbar trunks, which join the intestinal trunks to form the cisterna chyli [14]. Thus, para-aortic dissection of the renal artery may result in injury to the paralumbar lymphatic trunks, leading to chylous leak and chylous ascites. Because of the anatomical proximity of the left renal artery, the aorta, and the paralumbar lymphatic trunks, chylous ascites is more common after left than after right nephrectomy, comprising 75% to 99% of all cases of chylous ascites [14,15]. Lymphadenectomy increases the risk of postoperative chylous ascites [13]; a retrospective study by Kim et al. concluded that chylous ascites is three times more common when lymphadenectomy is performed [15]. Patients with chylous leak usually present on POD 4 [16], but can present as early as POD 0 [7] or as late as a few weeks or months postoperatively [16]. The clinical presentation of chylous ascites and chylous leak differs depending on whether a drain is present: patients with a drain usually present with persistent drainage of milky fluid, while those without a drain, or whose drain was removed early, present with abdominal distention, pain, nausea, or, less commonly, chylous discharge from the wound [15].
The diagnosis of chylous ascites is not difficult but requires a high index of suspicion. To confirm the diagnosis, many experts use biochemical analysis of the draining fluid and abdominal imaging such as ultrasonography and CT scanning; more invasive imaging modalities such as lymphangiography and lymphoscintigraphy are sometimes used, when surgical intervention is planned, to localize the site of lymphatic disruption [1]. Because of the limited number of reported cases of chylous leak after open nephrectomy in adult patients with malignant renal tumors, management of this complication has been guided by reports from other specialties [16]. In general, most patients with chylous ascites after abdominal surgery are successfully treated conservatively [16], which is consistent with our literature review [3-8]. Conservative management consists mainly of diet modification (a high-protein, low-fat, medium-chain-triglyceride diet), keeping the patient NPO with TPN, and paracentesis, which is especially helpful in patients without a drain [5,6,9]. Some medications may also be helpful, such as octreotide and diuretics; the exact mechanism of octreotide is not completely understood, but it has been shown to decrease the intestinal absorption of fats and to decrease lymphatic flow, thus improving the chylous ascites [16-18].
Identifying behavioural determinants for interventions to increase handwashing practices among primary school children in rural Burundi and urban Zimbabwe

This article presents the development of a school handwashing programme in two different sub-Saharan countries that applies the RANAS (risk, attitudes, norms, ability, and self-regulation) systematic approach to behaviour change. Interviews were conducted with 669 children enrolled in 20 primary schools in Burundi and 524 children in 20 primary schools in Zimbabwe. Regression analyses were used to assess the influence of the RANAS behavioural determinants on reported handwashing frequencies. The results revealed that, in both countries, a programme targeting social norms and self-efficacy would be most effective. In Burundi, raising the children's perceived severity of the consequences of contracting diarrhoea, and in Zimbabwe, increasing the children's health knowledge should be part of the programme. The school handwashing programme should create awareness of the benefits of handwashing through educational activities, raise the children's ability and confidence in washing hands at school through infrastructural improvements, and highlight the normality of washing hands at school through events and poster creation.

Background

Handwashing promotion programmes are increasingly being implemented in developing countries to improve child health and development. Since schools are important settings for disease transmission, school-based interventions aimed at mitigating communicable diseases are likely to reduce the overall community disease burden [1,2]. According to the WHO/Unicef Integrated Global Action Plan for Pneumonia and Diarrhoea [3], improving access to safe drinking water, providing adequate sanitation, and promoting good hygiene behaviour, such as handwashing with soap, are essential for preventing diarrhoea. In primary schools, interventions promoting handwashing with soap have proven to be effective in reducing infectious diseases in pupils [4-6]. Potential constraints include lack of soap and water and the absence of adequate handwashing facilities [7-10]. Increasing the provision of soap and water for handwashing has led to decreases in absenteeism [6,11,12], and several studies have reported an association between proper handwashing behaviour and the availability and accessibility of handwashing facilities [13-15]. For handwashing behaviour to be adopted and become a habit, however, it is not enough to provide proper resources and facilities. Growing evidence suggests that health behaviours such as dietary habits, physical activity patterns, and substance abuse are predicted by such social-cognitive factors as attitude, subjective norms, and self-efficacy beliefs [16-18]. Several studies have indicated that hand hygiene practices depend largely on psychological factors within the individual [19-21]. So far, very few studies have investigated the behavioural determinants underlying children's handwashing practices. Two studies have drawn on the theory of planned behaviour to examine factors affecting proper handwashing. Research by Lopez-Quintero, Freeman, and Neumark [21] in Colombia showed that intentions to perform proper handwashing were determined by perceived control, personal attitudes, and subjective norms. Setyautami, Sermsri, and Chompiku [13] found that students with positive attitudes and perceived behavioural control were twice as likely to wash their hands properly.
Several studies have used the knowledge, attitudes, and practices approach to examine the influence of school children's knowledge, attitudes, and practices on hygiene behaviour; they have reported mixed results concerning the importance of knowledge in determining proper handwashing behaviour [14,22-24]. Although attitude was mentioned as an important indicator of hygiene behaviour in all of these studies, it was not assessed above and beyond knowledge and practice. More importantly, self-regulatory processes such as action control and feelings of self-efficacy have not yet been investigated. Researchers urge the use of theories of behaviour change for developing interventions and programmes to change health behaviour [25,26]. Promoting proper handwashing practices is challenging, and the effectiveness of handwashing interventions has been inconsistent [27]. Applying behaviour change theories to promotion programmes for handwashing may increase their potential for changing behaviour [28]. So far, to the best of our knowledge, no study has used social cognition models from the realm of health psychology to design data-driven handwashing programmes in primary schools in developing countries. In this study, Mosler's RANAS (risk, attitudes, norms, ability, and self-regulation) approach to behaviour change [29] served as a theoretical framework to measure the behavioural determinants underlying handwashing with soap among primary school children. The model suggests that people's behaviour is determined by their risk perception, their attitudes toward a behaviour, their beliefs concerning the advantages or disadvantages of adopting or not adopting the behaviour, normative beliefs, perceived self-efficacy, and the resources and skills necessary to perform the behaviour. The RANAS blocks assimilate factors from different theories of social and health psychology, such as the theory of planned behaviour [30] and the health action process approach [31], that have been shown to successfully explain and change many types of health behaviour. The RANAS approach provides an analytical tool for analysing the different determinants of behaviour on the basis of quantitative data. Mosler [29] suggests targeting the determinants with the highest intervention potential, that is, determinants with low mean scores and high predictive value for the behaviour within the target population. The corresponding behaviour change techniques are then selected to develop appropriate practical strategies for intervention programmes [32-34]. Several studies have successfully applied the RANAS approach to different health-related behaviours, including handwashing [35], in the water and sanitation sector in developing countries and have shown the added value of implementing data- and theory-based interventions compared to information interventions alone [36-38]. This study uses the RANAS social cognition model of health behaviour to analyse data gathered from surveys of primary school children in two countries regarding the behavioural determinants of the children's handwashing practices. The aim of the present paper is to describe a psychological approach to designing a handwashing programme using data collected from study participants, theory, and empirical evidence from the literature. The study addresses two main research questions: (1) Which behavioural determinants are related to self-reported handwashing frequencies after using the toilet at school, and what is their improvement potential?
(2) What theory-based behaviour change techniques can be directed at these behavioural determinants to generate changes in behaviour? Information from this study will serve as baseline data for future campaign development and policy action for an effective school-based handwashing intervention programme.

Data collection and participants

This cross-sectional study was conducted in rural parts of the province of Ngozi in the north of the Republic of Burundi and in urban suburbs of Harare, the capital of the Republic of Zimbabwe. For each survey, interviewers with a Master's degree in social or health sciences were recruited and received the same five-day training in the objectives and methodology of the survey, in the theoretical background of the questionnaire, in the procedures, and in interpersonal communication in the field. The interviewers familiarised themselves with the questionnaire by reviewing the purpose of each item and by conducting role-plays and mock interviews on how to administer the questionnaire and use the data collection tools. In Burundi, 20 primary schools with access to water were identified, and within each of the schools' catchment areas one colline (village) was randomly selected for the interviews to take place. In Zimbabwe, 20 primary schools with geographically distinct catchment areas in high-density suburbs of Harare were selected. All households were randomly selected using a random route procedure [39], and only households with at least one child attending primary school were considered. Face-to-face interviews with primary school-aged children took place in Burundi from mid-February to mid-March 2014. In Zimbabwe, children were interviewed at school, in a room specifically reserved for the study; here, data collection took place from mid-July to mid-August 2014. A structured questionnaire was developed to assess children's handwashing practices, the RANAS behavioural determinants, and sociodemographic characteristics. The items were worded to suit the age of children attending first through sixth grade and were translated from English into the local languages Kirundi (Burundi) and Shona (Zimbabwe). During interviewer training, the translated questionnaires were closely reviewed by project staff and interviewers to ensure the meaning of the questions was accurate. All measures were pretested in non-study areas among a group of 30 children regarding feasibility, language appropriateness, duration, content validity, and question comprehensibility. The surveys were implemented using the mobile data collection software Open Data Kit Collect [40] on a tablet device and lasted about 15-20 min. In Zimbabwe, response cards were used to increase the children's motivation to participate in the interview and to facilitate their answer choice [41,42]. In Burundi, the response cards were pre-tested but were found to distract the children. Final interview data were available from 669 children enrolled in 20 primary schools in Burundi and from 524 children enrolled in 20 primary schools in Zimbabwe, attending first through sixth grade. Information on the study groups is presented in Table 1.

Measures

Self-reported handwashing frequency after using the toilet at school was measured with a single item ('Do you wash your hands with soap and water after you use the toilet at school?') on a four-point rating scale (from 0 = not at all to 1 = a great deal).
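As a minimal sketch of this scoring (our own illustration; the function name is not from the study's materials), a four-point item maps onto the values {0, 1/3, 2/3, 1}:

```python
def rescale_item(category, n_categories=4):
    """Map a rating category (0 .. n_categories-1) onto the 0-1 range,
    so a four-point item yields the scores {0, 1/3, 2/3, 1}."""
    return category / (n_categories - 1)

# 'a great deal' on the four-point handwashing item scores 1.0:
print([round(rescale_item(c), 2) for c in range(4)])  # [0.0, 0.33, 0.67, 1.0]
```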
The spot-check observational method [43] was used to assess the availability of soap and water and the number, type, and condition of handwashing stations. The operationalization of the behavioural constructs was based on the RANAS model and derived from previous research on handwashing practices and water consumption in developing countries [44-47]. Responses were scored on a 0-1 scale, representing the minimum and maximum possible values; for example, 'Are you afraid of getting diarrhoea?' (0 = not at all afraid to 1 = extremely afraid). All variables were coded so that high values were favourable to the behaviour. A single question was used to quantify each factor (see Table 2 for the items). Factual knowledge was assessed through several closed-ended questions, with each correct answer assigned one point. To standardize the ranges, the scores were transformed into the value range of the other variables (0 = no knowledge to 1 = maximum knowledge).

Data analysis

Statistical analysis was performed using SPSS version 21 (SPSS, Chicago, IL, USA). Although the data were derived from a clustered design, no multilevel analyses were executed, because only a very low percentage of variance (less than 2% for both data sets) was determined by the school clusters. Forced-entry linear multiple regression analyses were performed for each country separately. Cases with missing values were excluded.

Results

In Burundi, children reported sometimes washing hands at school after using the toilet (M = 0.56, SD = 0.27) (see Table 3). The survey did not find high knowledge about diarrhoea and disease transmission (health knowledge). Accordingly, the children perceived a low risk of contracting diarrhoea (perceived vulnerability) and did not think it would be bad if they did (perceived severity). Children reported that washing hands takes a lot of time (instrumental belief). They indicated liking washing hands (affective belief: liking) and feeling rather dirty if they do not (affective belief: disgust). The overall social influence experienced by the children scored 0.57 (descriptive norm) and was much higher, at 0.74, for their perception of the teachers' approval of the behaviour (injunctive norm). Children expressed medium levels of confidence in their ability to always wash hands (self-efficacy), to always pay attention to executing the behaviour (action control), and to never forget to wash hands (remembering). Finally, children rated always washing hands with soap at school after using the toilet as very important (commitment). In Zimbabwe, children reported washing hands rather frequently at school (M = 0.58, SD = 0.39). Again, the survey did not find high knowledge about diarrhoea and disease transmission. Despite this, perceived vulnerability regarding diarrhoea and perceived severity of the consequences of contracting the disease were rated higher. When comparing the mean scores of the behavioural determinants from Burundi with those from Zimbabwe, primary school children from Zimbabwe reported liking washing hands even more, they expressed higher levels of self-efficacy, action control, and remembering, and their commitment to always washing hands with soap at school after using the toilet was even higher.

Behavioural determinants of handwashing practices

A multiple regression analysis was conducted to investigate key behavioural determinants of self-reported handwashing frequencies after using the toilet at school, using the data from each country (see Table 3).
An analysis of the variance inflation factors (VIFs) in the regression models indicated acceptable levels of multicollinearity. All VIFs were below 2, except for action control (VIF = 2.37) and remembering (VIF = 2.36) in Burundi. In Burundi, the twelve behavioural determinants accounted for a significant proportion of self-reported handwashing frequencies, adjusted R² = 0.45, F(12, 656) = 46.17, p < 0.001. The results revealed that children were more likely to report high handwashing frequencies if they were not afraid of getting diarrhoea (perceived vulnerability), if they thought it was bad when they caught diarrhoea (perceived severity), if they perceived that many other children at school washed hands (descriptive norm), and if they felt confident in always being able to wash hands with soap after using the toilet at school (action self-efficacy). In Zimbabwe as well, the behavioural determinants accounted for a significant proportion of self-reported handwashing frequencies, adjusted R² = 0.24, F(12, 511) = 14.84, p < 0.001. For Zimbabwe, the results showed that children were more likely to report high handwashing frequencies if they said that handwashing with soap takes a lot of time (instrumental belief), if they perceived that many other children at school washed hands (descriptive norm), if they were sure that they could always wash hands with soap and water after using the toilet (action self-efficacy), if they indicated paying a lot of attention to always washing hands with soap (action control), and if they claimed to always remember to perform the behaviour (remembering).

Intervention potential of the behavioural determinants

As described in the RANAS approach, the values of the intervention potentials represent the absolute value of the difference between 1, the highest possible scale value, and the sample mean, multiplied by the unstandardized regression weight of the determinant (see Table 3). Higher values indicate a greater potential impact if that determinant is targeted by an intervention. For Burundi, the three highest intervention potentials were reached for the descriptive norm (IP = 0.176), action self-efficacy (IP = 0.082), and perceived severity (IP = 0.042). For Zimbabwe, the results indicated that health knowledge (IP = 0.090), the descriptive norm (IP = 0.071), and action self-efficacy (IP = 0.071) should be targeted by an intervention.

Selection of the behaviour change techniques

The RANAS behaviour change techniques that seemed most promising were selected for the three behavioural determinants with the highest intervention potentials in each country (see Fig. 1). In addition to these quantitative results, observational findings on school handwashing characteristics revealed that in many schools soap, and in some even water, was not available for handwashing on the day of the field visit (see Table 1). Furthermore, in Burundi, there were on average over 250 students per handwashing facility. This pupil-to-handwashing-facility ratio exceeds the international guidelines, which recommend one handwashing facility per 50-100 students [48]. These survey data served as a basis for developing a programme based on informational, infrastructural, and normative interventions, with the overall goal of supporting and guiding all participants towards established handwashing habits.
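The intervention potentials that drive this selection can be computed in a few lines. The following sketch uses synthetic stand-in data (the determinant names and numbers are placeholders, not the study's scores) to show the calculation IP = |1 - mean| × B, with B the unstandardized regression weight:

```python
import numpy as np

# Synthetic stand-in data: columns are determinant scores in [0, 1]
# (e.g. descriptive norm, self-efficacy, perceived severity); y is the
# self-reported handwashing frequency. Real data come from the surveys.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = X @ np.array([0.5, 0.3, 0.1]) + rng.normal(0.0, 0.1, 500)

# Forced-entry OLS with an intercept; B holds the unstandardized weights.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
B = coef[1:]

# Intervention potential: |1 - mean| (distance of the sample mean from
# the scale maximum) times the unstandardized regression weight.
ip = np.abs(1.0 - X.mean(axis=0)) * B
print(ip)   # target the determinants with the largest values
```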
The behaviour change techniques selected are meant to (1) create personal awareness of washing hands with soap and water, (2) raise the actual ability to wash hands at school and thus the children's confidence in their own ability to perform the behaviour, and (3) highlight others' handwashing behaviour at school.

Translation into practical strategies

(1) Informational interventions to raise the perceived seriousness of contracting diarrhoea consist of messages about the causes of diarrhoea and the consequences of the disease, creating the precondition for change [32,49-51]. Teachers are trained to sensitize the children on the issue of diarrhoea, using posters depicting transmission routes of diarrhoea pathogens, a description of the handwashing steps, and recommendations for situations in which washing hands is critical, along with risk factors, signs, and symptoms of diarrhoea. (2) Infrastructural interventions are proposed to enhance the children's self-efficacy and thus their confidence in their ability to perform the behaviour [52,53]. Each classroom should be equipped with a simple handwashing device along with a dispenser filled with soapy solution. As a short-term solution, soap should be provided for the duration of the project. A strategy already pursued in the province of Ngozi, Burundi, is that children bring water if the school does not have a water source. As a long-term solution, income-generating activities should be discussed with the schools, policy dialogues at the provincial and ministerial levels should aim at the allocation of funds for soap, and advocacy is needed to assure the availability of water in schools. (3) An intervention highlighting the commonness of handwashing at every school is suggested to tackle social norms [29,54]. A kick-off event to introduce the new handwashing stations should be organized. The inauguration could be accompanied by a handwashing song, and each class should create handwashing posters serving as a public commitment to being a handwashing class.

Discussion

In this article we describe an application of the RANAS systematic approach to behaviour change for the development of a school handwashing programme for primary school children in a rural and an urban setting in two sub-Saharan African countries. The results of the regression analyses revealed that the RANAS behavioural determinants predicted children's self-reported handwashing frequencies very well in both countries. In Burundi, high reported handwashing frequencies after using the toilet were best predicted by a high perceived severity of diarrhoea, the perception that many other children wash hands at school too, and a strong confidence in one's abilities to always perform the behaviour. In Zimbabwe, the behavioural determinants with the highest predictive value also proved to include the perception that other children wash hands at school too, the confidence in one's abilities to always perform the behaviour, and, moreover, paying a lot of attention to always washing hands after using the toilet at school. The findings of this study are consistent with the results of studies conducted with primary caregivers of young children in Haiti and southern Ethiopia, showing that the significant behavioural determinants from the present regression analyses were also predictive of self-reported handwashing [47]. In Bogotá, Colombia, school children also reported higher subjective norms and higher perceived control (akin to self-efficacy) when their intention to wash hands properly was high [21].
School children in the Selat sub-district, Indonesia, were also more likely to wash hands properly when their perceived behavioural control was high [13]. The results from Burundi and Zimbabwe indicate an overall lack of awareness of hygiene issues in both countries. Low norms for handwashing and the children's low perceived ability are consistent with the lack of adequate infrastructure at the schools. The improvement potentials calculated suggest that an intervention targeting social norms and self-efficacy should be most effective in both countries. Additionally, in Burundi, children who do not perceive diarrhoea as severe should be targeted by the intervention. In Zimbabwe, children with less knowledge of diarrhoea and disease transmission should profit from the proposed programme. Based on these results, and taking into consideration the observational findings on the school handwashing characteristics, a school handwashing programme was developed that fits the target groups. The interventions of the programme aim to (1) create awareness of the benefits of handwashing through educational activities, (2) raise children's ability and confidence to wash hands at school through infrastructural improvements, and (3) highlight the commonness of handwashing at school through events and poster creation. Several studies have been able to show that raising awareness of the importance of handwashing and increasing hygiene knowledge lead to an improvement in proper handwashing [4,10,55]. Moreover, the presence of handwashing stations at school has been found to be associated with proper handwashing [13-15], and providing soapy water has been shown to raise the frequency of handwashing practices at school [10]. By introducing the new hardware with a big event, and because of the continuous use of the handwashing stations by all children, the behaviour should become common practice, increasing the descriptive norm at each school [19,56] and enhancing the children's self-efficacy through facilitation of the behaviour [56-58].

Limitations

The results should be viewed with the caution necessary with self-reported behaviours. Several studies have shown that self-report overestimates handwashing behaviour when compared to observed frequencies [59,60]. However, collecting observed data on all children included in this study would have been very difficult, costly, and extremely time-consuming. In addition, the operationalization of the behavioural determinants can be criticized because each was measured with only one item. Even though we do not have reliability indicators for the survey items, keeping the questionnaire short was necessary to keep the children motivated to participate in the survey. The present study is cross-sectional, so relationships between variables are descriptive and do not imply causality. However, the results of the regression analyses have been confirmed by previous work focusing on caregivers' handwashing practices [47].

Conclusions

The RANAS systematic approach to behaviour change allowed us to determine the relative importance of the behavioural determinants underlying school children's handwashing practices and thus to select appropriate behaviour change techniques. Several reviews of health promotion programmes have concluded that the quality of an intervention is increased by the use of methods derived from social-cognitive theories [28,61,62].
The findings of this study strongly suggest that similar handwashing programmes providing education on handwashing issues along with adequate infrastructure could induce behavioural change in rural and urban settings in two different countries. Authors' contributions ES developed the research question and study design, finalized the questionnaire, collected the data in Burundi, performed the analyses, and wrote the initial draft of the article. JS participated in the development of the research question, drafted the questionnaire, helped collect the data in Zimbabwe, and contributed to drafting the manuscript. MNDF participated in the design of the study and the questionnaire, planned and supervised the data collection in Zimbabwe, co-designed the interventions, and reviewed the article. HJM participated in designing the study, reviewed and commented on analytic results, and reviewed and revised the article. All authors read and approved the final manuscript.
COMPARISON OF THE DEFLATED PRECONDITIONED CONJUGATE GRADIENT METHOD AND ALGEBRAIC MULTIGRID FOR COMPOSITE MATERIALS Many applications in computational science and engineering concern composite materials, which are characterized by large discontinuities in the material properties. Such applications require fine-scale finite-element meshes, which lead to large linear systems that are challenging to solve with current direct and iterative solution algorithms. In this paper, we consider the simulation of asphalt concrete, which is a mixture of components with large differences in material stiffness. The discontinuities in material stiffness give rise to many small eigenvalues that negatively affect the convergence of iterative solution algorithms such as the preconditioned conjugate gradient (PCG) method. This paper considers the deflated preconditioned conjugate gradient (DPCG) method in which the rigid body modes of sets of elements with homogeneous material properties are used as deflation vectors. As preconditioners, we consider several variants of the algebraic multigrid smoothed aggregation method. We evaluate the performance of the DPCG method on a parallel computer using up to 64 processors. Our test problems are derived from real asphalt core samples, obtained using CT scans. We show that the DPCG method is an efficient and robust technique for solving these challenging linear systems. Introduction Finite-element (FE) computations are indispensable for the simulation of material behavior. Recent developments in visualization and meshing software give rise to high-quality but very fine meshes, resulting in large systems, with millions of degrees of freedom, that need to be solved. When choosing a solver for these systems, we distinguish between direct solution methods and iterative methods. By now, it is well established that iterative solution methods are preferable, due to their potential for better algorithmic and parallel scalability than is possible with direct methods. In recent years, parallel computing has become the standard in FE software packages; therefore, only parallel algorithms are considered here. In our application, the FE stiffness matrix is symmetric and positive definite and, therefore, the preconditioned conjugate gradient (PCG) method is the iterative method of choice from a theoretical point of view. Furthermore, the PCG method is well suited for parallel computing. This paper focuses on the choice of a parallel preconditioner for FE problems in structural mechanics. Many FE computations involve simulations of inhomogeneous materials, where the difference in properties of materials leads to large differences in the entries of the resulting stiffness matrices. We have shown in [28] that these jumps in coefficients slow down the convergence of the PCG method using a simple preconditioner. By decoupling regions with homogeneous material properties with a deflation technique, a more robust PCG method has been constructed: the deflated preconditioned conjugate gradient (DPCG) method. The DPCG method proposed in [28] is an extension of the technique of subdomain deflation, introduced in [36], and of the preconditioners for matrices with large differences in the entries proposed in [50]. There is a correlation between the number of rigid body modes of sub-bodies of materials contained within the FE mesh and the number of small eigenvalues of the scaled stiffness matrix.
We used rigid body modes combined with existing deflation techniques to remove those small eigenvalues from the spectrum of the scaled stiffness matrix, yielding a stable and robust adaptation of the PCG method. Like the PCG method, the DPCG method is well suited for parallel computing. For PDEs with heterogeneous coefficients, there are several state-of-the-art (black-box) solvers available. Direct solution methods, the FETI method, and algebraic multigrid methods (AMG) are among the most popular solvers and preconditioners. An important advantage of direct solution methods is their robustness: they can, to a large extent, be used as black boxes for solving a wide range of problems, but they are expensive in terms of computational costs. Several high-quality, well-parallelisable public-domain direct solvers exist [32,38,42,10,6]. The FETI and AMG methods are also robust but are often much less expensive than direct solution methods; they have been discussed in [23] and [49]. As a comparison to DPCG, we focus on the best AMG adaptation, smoothed aggregation (SA), as it has been demonstrated to be a successful parallel preconditioner for a number of structural mechanics applications [2,4,14]. The two most relevant studies of SA to the simulations considered here are those of [4,7], both of which focus on micro-FE modeling of bone deformation, based on micro-CT scans of human bones. In this paper, we will compare the performance of SA using default parameters as a preconditioner for both PCG and DPCG with that of SA using an optimal choice of parameters as a preconditioner to PCG. All methods are implemented within a parallel environment using Trilinos [26]. We will provide an overview of the DPCG method proposed in [28], and discuss the parallel implementation of the DPCG method into an existing FE software package. Finally, we present numerical experiments on FE meshes from real-life cores of asphalt concrete as case studies for this comparison. Problem definition: composite materials In this paper, we consider asphalt concrete as an example of a composite material. Asphalt concrete consists of a mixture of bitumen, aggregates, and air voids. The difference between the stiffness of bitumen and the aggregates is significant, especially at high temperatures. The surge in recent studies on wheel-pavement interaction shows the importance of understanding the component interaction within asphalt concrete, demanding high-quality FE meshes. Until recently, because of the extremely long execution time, memory, and storage space demands, the majority of FE simulations of composite materials were performed by means of homogenization techniques [18]. Unfortunately, these techniques do not provide an understanding of the actual interaction between the components of the material. It is known, however, that component interaction is the most critical factor in determining the overall mechanical response of the composite material. We obtain accurate FE meshes of the asphalt concrete materials by means of computed tomography (CT) X-ray scans and additional, specialized software tools like Simpleware ScanFE [43]. We use the computational framework described in [18] to simulate the response of a composite material that is subjected to external forces by means of small load steps.
We define the relation between the undeformed (reference) state of a volume, $V$, and the deformed (current) state position vector at a fixed point in time as $X = x + \bar{u}$, where $x = (x_x, x_y, x_z)^T$ is the undeformed state position vector, $X = (X_x, X_y, X_z)^T$ is the deformed state position vector, and $\bar{u} = (\bar{u}_x, \bar{u}_y, \bar{u}_z)^T$ represents the change of displacement. Changes of the undeformed and deformed state over time are related through $dX = F\,dx$, where $F = I + \partial \bar{u} / \partial x$ is the deformation gradient. We refer to [18] for details on the balancing of forces within the material. For any given externally applied force, $\bar{f}$, the linearized virtual work equation at equilibrium is expressed in terms of the virtual velocities $\delta v$, the directional derivative $\nabla_0 \bar{u}$ of the displacement field in the reference configuration, the fourth-order constitutive tensor $C$, and the second-order second Piola-Kirchhoff stress tensor $S$. In this paper, we consider hyperelasticity and use the Neo-Hookean constitutive model, in which $\alpha = \nu / (1 - 2\nu)$, where $\nu$ and $\mu$ are the Poisson ratio and the Lamé material constant, respectively. By using the FE method with standard linear basis functions on tetrahedral meshes, we obtain the corresponding stiffness matrix; solving the linear system $$K u = f \qquad (6)$$ is the most time-consuming computation of the FE simulation. In this equation, $u$ represents the change of displacement of the nodes in the FE meshes, and $f$ is the force unbalance in the system, which is determined by the difference between the internal forces within the system and the external forces exerted on the system. The internal forces are computed by solving non-linear equations for each finite element. The computing time and costs for these steps are negligible compared to solving linear system (6). The stiffness matrix, $K$, is symmetric and positive definite for elastic, constrained systems; hence, $u^T K u > 0$ for all $u \neq 0$, and all eigenvalues of $K$ are positive. Within the context of mechanics, $\frac{1}{2} u^T K u$ is the strain energy stored within the system for displacement vector $u$ [9]. Energy is defined as a non-negative entity; hence, the strain energy must be non-negative also. Preconditioned conjugate gradient method Because $K$ is SPD, CG [27] is the method of choice to solve (6) iteratively. The CG method is based on minimizing the energy norm of the error in the $k$-th approximation to the solution over a Krylov subspace, where the energy norm is defined as $\|u\|_K = (u^T K u)^{1/2}$. We note that minimizing the error in the $K$-norm is, in fact, minimizing the strain energy over the Krylov subspace $\mathcal{K}_{k-1}(K; r_0)$. This implies that, for a given distributed static load, we construct a displacement vector that has an optimal distribution of the force over the material. Theorem 10.2.6 in [22] provides a bound on the error of the approximations computed by CG. Denote the $i$th eigenvalue of $K$ in nondecreasing order by $\lambda_i(K)$ or, simply, $\lambda_i$. After $k$ iterations of the CG method, the error is bounded by $$\|u - u_k\|_K \le 2\, \|u - u_0\|_K \left( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^k, \qquad (8)$$ where $\kappa = \kappa(K) = \lambda_n / \lambda_1$ is the spectral condition number of $K$. While this bound is not always sharp, the error reduction capability of CG is generally limited when the condition number is large. The condition number of $K$ typically increases when the number of elements increases or when the relative stiffnesses of the materials change. For plastic and viscous behavior, this can result in a series of increasing numbers of iterations as the stiffness changes with every load or time step.
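As a point of reference for the methods compared below, here is a minimal serial sketch of the CG loop in Python/NumPy. It shows only the textbook algorithm, not the paper's parallel Trilinos implementation, and the relative-residual stopping test is our own simplification.

```python
import numpy as np

def cg(K, f, tol=1e-6, maxit=10000):
    """Textbook conjugate gradients for SPD K. Each step minimizes the
    K-norm (strain-energy norm) of the error over the growing Krylov
    subspace spanned by r0, K r0, K^2 r0, ..."""
    u = np.zeros_like(f)
    r = f.copy()                   # residual r = f - K u, with u = 0
    p = r.copy()                   # first search direction
    rr = r @ r
    fnorm = np.linalg.norm(f)
    for k in range(1, maxit + 1):
        Kp = K @ p
        alpha = rr / (p @ Kp)      # exact line search in the K-norm
        u += alpha * p
        r -= alpha * Kp
        rr_new = r @ r
        if np.sqrt(rr_new) <= tol * fnorm:
            return u, k
        p = r + (rr_new / rr) * p  # keep directions K-conjugate
        rr = rr_new
    return u, maxit
```

A preconditioner enters this loop as one extra solve $Mz = r$ per iteration, which is exactly the trade-off discussed next.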
Here, we focus on a single load and time step, although this is an important question for future research, as plasticity and viscosity are key to realistic simulations. The convergence of CG depends not only on the condition number but also on the number and distribution of very small eigenvalues [48]. The eigenvectors corresponding to the smallest eigenvalues have a significant contribution to the global solution but may need a significant number of iterations to converge locally. Hence, very small eigenvalues can dramatically increase the number of iterations needed to generate a good approximate solution. In our application, the number of aggregates has a direct correlation with the number of very small eigenvalues of $K$. Increasing the number of aggregates may, therefore, result in more very small eigenvalues and deterioration of the convergence rates. To improve the performance of CG, we change the linear system under consideration, resulting in a problem with more favorable extreme eigenvalues and/or clustering. The most efficient way to do this is by preconditioning of the linear system. Preconditioners are essential for the performance of iterative solvers, and no Krylov iterative solver can perform well for these problems without one [41]. The preconditioned equation reads $$M^{-1} K u = M^{-1} f, \qquad (9)$$ where matrix $M$ is the left preconditioner, which is assumed to also be symmetric and positive definite. The CG iteration bound of Eq. 8 also applies to the preconditioned matrix, replacing $\kappa(K)$ with $\kappa(M^{-1} K)$. Thus, the preconditioning matrix must satisfy the requirements that it is cheap to construct and that it is inexpensive to solve the linear system $Mv = w$, as preconditioned algorithms need to solve such a linear system in each iteration step. A rule of thumb is that $M$ must resemble the original matrix, $K$, to obtain eigenvalues that cluster around 1. Obviously, $M = K$ would be the best choice to minimize $\kappa(M^{-1} K)$, but this choice is expensive and equivalent to solving the original system. Common choices of $M$ are the diagonal of $K$, which is known as diagonal scaling, and the Incomplete Cholesky factorization using a drop tolerance to control the fill-in [41]. Within the field of engineering, the PCG method is widely used because it is easy to implement, PCG iterations are cheap, and the storage demands are modest and fixed. However, there remain pitfalls in its use in practical simulations. As discussed above, its performance depends on the conditioning and/or spectrum of the matrix. This can be improved by the use of an appropriate preconditioner, but this adds to the work and storage required by the algorithm. Consequently, the convergence of low-energy modes can be slow and, more importantly, poorly reflected in the residual associated with an approximate solution. The established alternative is the use of direct solution methods. For our application, the choice of direct or iterative methods should not be made based on the solution of a single linearized system but, rather, based on the full nonlinear, time-dependent equations to be solved. When using simple nonlinear constitutive relations, direct methods may be preferable if the factorization of one stiffness matrix can be reused many times. If, on the other hand, the stiffness matrix changes significantly with every linearization, the use of PCG may be preferable, particularly if the number of Newton iterations can be controlled.
Moreover, many engineering applications do not use an exact evaluation of the Jacobian; hence a relative error of order $10^{-2}$ in the solution of Eq. 9 would be sufficient. The study presented in this paper assumes that a single factorization cannot be reused enough to make direct methods competitive. This assumption is supported by the experiments that we present in Sect. 5, which show that the costs of factorization are substantially greater than a single iterative solve and, consequently, that iterative solvers can reduce the total required computational time. Deflated preconditioned conjugate gradient method We have shown in [28] that the number of iterations for convergence of PCG is highly dependent on the number of aggregates in a mixture as well as the ratio of the Young's moduli. Increasing the number of aggregates introduces correspondingly more (clustered) small eigenvalues in stiffness matrix $K$. The jumps in the Young's moduli are related to the size of the small eigenvalues. We know from [48] that the smallest eigenvalues correspond to the slowly converging components of the solution. Thus, we look to design a preconditioner that directly addresses these modes. For any FE computation, we consider subsets of unconstrained elements as rigid bodies. Their corresponding (sub) stiffness matrices are assemblies of the element stiffness matrices. In the context of asphalt concrete, the aggregates are subsets of elements, with their Young's modulus as a shared property; the bitumen and the air voids are defined similarly. When a matrix, $K_{unc}$, represents a rigid body, i.e. an unconstrained mechanical problem (with no essential boundary conditions), the strain energy equals zero for the rigid body displacements, as the system remains undeformed, and the matrix is positive semi-definite: $u^T K_{unc} u \ge 0$ for all $u$. More specifically, the number of rigid body modes of any unconstrained volume equals the number of zero-valued eigenvalues of its corresponding stiffness matrix. When a matrix $A$ has zero-valued eigenvalues, its kernel $\mathcal{N}(A)$ is non-trivial. Moreover, the basis vectors of the kernel of a stiffness matrix represent the principal directions of the rigid body modes. In general, two types of rigid body modes exist: translations and rotations. In three dimensions, this implies six possible rigid body modes and, hence, six kernel vectors can be associated with the rigid body modes. For the partial differential equations considered in this paper, the physical analogue to these kernels are the rigid body modes of the linear elastic components of the material. In [28], we conclude that the number of rigid bodies times the number of rigid body modes (six in three dimensions) is equal to the number of small eigenvalues of stiffness matrix $K$. By using the deflation technique, we deflate the Krylov subspace with pre-computed rigid body modes of the aggregates and remove all corresponding small eigenvalues from the system. As a result, the number of iterations of the DPCG method is hardly affected by jumps in material stiffness or by the number of aggregates. This is a significant improvement over many other preconditioning techniques, whose performance degrades even for simpler heterogeneous problems. To define the deflation preconditioner, we split the solution of (6) into two parts [20], $u = (I - P^T)u + P^T u$, where $P$ is a projection matrix that is defined as $$P = I - K Z E^{-1} Z^T, \quad E \equiv Z^T K Z, \quad Z \in \mathbb{R}^{n \times m}, \qquad (11)$$ where $\mathcal{R}(Z)$ represents the deflation subspace, i.e., the space to be projected out of the system, and $I$ is the identity matrix of appropriate size.
We assume that $m \ll n$ and that $Z$ has rank $m$. Under these assumptions, $E \equiv Z^T K Z$ is symmetric and positive definite and may be easily computed and factored. Hence, the first part of the splitting, $(I - P^T)u = Z E^{-1} Z^T f$, can be computed directly, and the difficult computation is of $P^T u$. Because $K P^T$ is symmetric and $P$ is a projection, we solve the deflated system $$P K \hat{u} = P f \qquad (14)$$ for $\hat{u}$ using the PCG method and multiply the result by $P^T$, giving $u = Z E^{-1} Z^T f + P^T \hat{u}$. We note that (14) is singular; however, the projected solution $P^T \hat{u}$ is unique, as it has no components in the null space, $\mathcal{N}(P K) = \mathrm{span}\{Z\}$. Moreover, from [30,48], the null space of $P K$ never enters the iteration process, and the corresponding zero eigenvalues do not influence the solution. The DPCG method [46] is given as Algorithm 1. To obtain a useful bound for the error of DPCG for positive semi-definite matrices, we define the effective condition number of a semi-definite matrix $D \in \mathbb{R}^{n \times n}$ with rank $n - m$, $m < n$, to be the ratio of the largest and smallest positive eigenvalues, $\kappa_{\mathrm{eff}}(D) = \lambda_n(D) / \lambda_{m+1}(D)$; analogous to Eq. 8, Theorem 2.2 from [20] implies that a bound on the effective condition number of $P K$ can be obtained. Theorem 3.1 Let $P$ be defined as in (11), and suppose there exists a splitting $K = C + R$ such that $C$ and $R$ are symmetric positive semi-definite with null space of $C$, $\mathcal{N}(C) = \mathrm{span}\{Z\}$. Then the ordered eigenvalues satisfy $\lambda_i(C) \le \lambda_i(P K) \le \lambda_i(K)$. Moreover, the effective condition number of $P K$ is bounded by $$\kappa_{\mathrm{eff}}(P K) \le \frac{\lambda_n(K)}{\lambda_{m+1}(C)}.$$ Proof See [20] (p. 445). While the large discontinuities in matrix entries due to strongly varying material properties in the FE discretization induce unfavorable eigenvalues (either large or small) in the spectrum of stiffness matrix $K$, the effective condition number of $P K$ is bounded by the smallest positive eigenvalue of $C$ and the largest eigenvalue of $K$. To remove the discontinuities and, thus, eliminate those unfavorable eigenvalues, we decouple the sub-matrices of stiffness matrix $K$ that correspond to different materials by finding the correct splitting. The eigenvalues of the decoupled sub-matrices then determine the spectrum of $P K$. However, due to the large differences in stiffness, the eigenvalues of the different sub-matrices can still vary over several orders of magnitude. To achieve a scalable solution algorithm, we couple this deflation procedure with another preconditioner to map the spectra of the sub-matrices onto the same region, around 1. This deflation technique can be used in conjunction with any ordinary preconditioning technique, giving a two-level approach, treating the smallest and largest eigenvalues by deflation and preconditioning, respectively. By choosing a favorable combination of deflation and preconditioning, a better spectrum is obtained, yielding a smaller effective condition number and fewer iterations. For a symmetric preconditioner $M = L L^T$, e.g. diagonal scaling, the result of Theorem 3.1 carries over to the symmetrically preconditioned, deflated matrix $L^{-1} P K L^{-T}$. Thus, as discussed above, we construct our deflation space $Z$ from the null spaces of the (unconstrained) stiffness matrices of chosen sets of elements. In [29], an algorithm is given for computing rigid body modes of sets of elements. The matrix, $C$, is then defined by the assembly of all finite elements that belong to each body of material. The matrix, $R$, consists of the assembly of all finite elements that share nodes with the elements on the boundary of a body of material but that are not contained within the sub-mesh. We note that if some elements of a less stiff material are assigned to the element set of a stiffer material, the material stiffness matrices are not decoupled.
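To make the construction of $Z$ and $P$ concrete, the following serial Python/NumPy sketch assembles the six rigid body modes of one body of material from its nodal coordinates and applies the projector using the precomputed quantities $KZ$ and a factorization of $E$. It is an illustration under our own simplifying assumptions (a single body, dense $Z$), not the paper's sparse, parallel Trilinos implementation.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def rigid_body_modes(coords):
    """Six rigid body modes (3 translations, 3 rotations) of one body,
    evaluated at its nodes; coords has shape (n_nodes, 3). Returns the
    (3*n_nodes, 6) block of Z belonging to this body."""
    n = coords.shape[0]
    Z = np.zeros((3 * n, 6))
    x, y, z = coords[:, 0], coords[:, 1], coords[:, 2]
    Z[0::3, 0] = 1.0                     # translation in x
    Z[1::3, 1] = 1.0                     # translation in y
    Z[2::3, 2] = 1.0                     # translation in z
    Z[1::3, 3], Z[2::3, 3] = -z, y       # rotation about the x-axis
    Z[0::3, 4], Z[2::3, 4] = z, -x       # rotation about the y-axis
    Z[0::3, 5], Z[1::3, 5] = -y, x       # rotation about the z-axis
    return Z

class Deflation:
    """Precomputes KZ and a Cholesky factorization of E = Z^T K Z, so that
    applying P y = y - K Z E^{-1} Z^T y costs two tall-thin products and
    one small m x m solve per iteration."""
    def __init__(self, K, Z):
        self.Z, self.KZ = Z, K @ Z
        self.E = cho_factor(Z.T @ self.KZ)
    def P(self, y):
        return y - self.KZ @ cho_solve(self.E, self.Z.T @ y)
    def recover(self, f, u_hat):
        """u = Z E^{-1} Z^T f + P^T u_hat, with P^T v = v - Z E^{-1} (KZ)^T v."""
        return (self.Z @ cho_solve(self.E, self.Z.T @ f)
                + u_hat - self.Z @ cho_solve(self.E, self.KZ.T @ u_hat))
```

The deflated solve then amounts to running a (preconditioned) CG loop, such as the one sketched earlier, on the operator $v \mapsto P(Kv)$ with right-hand side $Pf$, followed by `recover(f, u_hat)`. Note that the projection only decouples the materials if boundary elements are assigned to the correct body, as the caveat above indicates.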
So, for instance, when a node belongs to two elements and two different materials and is assigned to the wrong (less stiff) element with respect to the splitting of $K$, the preconditioning step will reintroduce the coupling, nullifying the effect of the deflation operator. The DPCG method extends PCG, enhancing stability and robustness when solving symmetric positive definite systems, but requires extra storage for the deflation matrix $Z$. Moreover, $P K u$ in Algorithm 1 needs to be computed in every iteration, but the Cholesky decomposition of matrix $E$ and the computation of the matrix-matrix product $KZ$ are done before entering the iteration loop, saving computation time. The unfavorable eigenvalues, due to the discontinuities in the stiffness matrix, are treated by the deflation method, making these costs worthwhile. The convergence of the DPCG method is assured for even highly ill-conditioned problems, and the method yields more accurate solutions than the PCG method. Algebraic multigrid method Multigrid methods [15,47,51] are among the most efficient iterative methods for the solution of the linear systems that arise from the FE discretization of many PDEs. They achieve this efficiency due to the use of two complementary processes, relaxation and coarse-grid correction. In the relaxation phase, a simple stationary iteration, such as the Jacobi or Gauss-Seidel iterations, is used to efficiently damp large-energy errors. Errors associated with small-energy modes are corrected through a coarse-grid correction process, in which the problem is projected onto a low-dimensional subspace (the coarse grid), and these errors are resolved through a recursive approach. This decomposition is, in many ways, the same as that in deflation, $u = (I - P^T)u + P^T u$; the relationship between deflation and multigrid has been explored in [46,45]. For homogeneous PDEs discretized on structured grids, the separation into large-energy and small-energy errors is well understood, leading to efficient geometric multigrid schemes that offer both optimal algorithmic and parallel scalability. For PDEs with heterogeneous coefficients that are discretized on unstructured meshes, algebraic multigrid (AMG) approaches [40,44,49] offer similar scalability, although at a higher cost per iteration (see, for example, [34] for a comparison of structured and unstructured multigrid approaches). While the fundamental complementarity of the multigrid approach does not change within AMG, the way in which the coarse-grid problems are defined does. In geometric multigrid schemes, the coarse-grid operators and intergrid transfer operators (interpolation and restriction) are determined based on explicit knowledge of the grid geometry and the discretized PDE. In contrast, interpolation operators for AMG are defined in matrix-dependent ways [5,40], while the restriction and coarse-grid operators are given by variational conditions (when $K$ is symmetric and positive definite) [35]. Thus, the challenge in achieving efficient multigrid performance is focused on the definition of appropriate matrix-dependent interpolation operators. In the case of scalar PDEs, there are a wide variety of possible approaches for defining AMG-style interpolation operators [40,49,13,11,33,37].
These approaches are largely based on assumptions about the ellipticity of the underlying differential operator and, for scalar PDEs, they typically result in defining interpolation to closely match a given vector (often the constant vector) with the range of interpolation. For systems of PDEs, such as those that model the displacement of the composite materials considered here, more care must be taken, as the ellipticity of the equations of linear elasticity, for example, depends strongly on both the boundary conditions and Lamé coefficients of the system. As a result, there has been much research into the development of efficient AMG approaches for problems in solid mechanics. For systems of PDEs, there are several possible AMG approaches. Within the setting of classical AMG (often called Ruge-Stüben AMG) [40,12], these approaches are commonly labeled as the variable-based, point-based (or, equivalently, node-based), and unknown-based approaches [16]. The variable-based approach applies AMG as a black box, ignoring the structure of the PDE system and, as such, is understood to be effective only for very weakly coupled systems (such as systems with no differential coupling) [16]. The unknown-based approach applies scalar AMG to each component of the system, in a block Jacobi or block Gauss-Seidel manner, and was originally applied to systems of linear elasticity in [39]. Most commonly used is the point-based or node-based approach, where all variables discretized at a common spatial node are treated together in the coarsening and interpolation processes. This approach was first proposed for linear elasticity in [39] and has been extended in [24,31]. An extension framework to improve AMG for elasticity problems was proposed in [8], which uses a hybrid approach with nodal coarsening, but interpolation based on the unknown-based approach. Despite these recent developments in the use of classical AMG for structural mechanics problems, a much more common algebraic multigrid approach for elasticity problems is in the framework of smoothed aggregation multigrid [49]. In smoothed aggregation, the coarse grid is created by aggregating degrees of freedom node-wise, through a greedy process that aims to create aggregates of size $3^d$ for a $d$-dimensional problem. Such an aggregation is used to define a partition of unity on the grid. A tentative interpolation operator is then defined in groups of six columns (for three-dimensional mechanics) by restricting the set of global rigid body modes to each aggregate based on this partition. Smoothed aggregation improves this tentative interpolation operator by applying a single relaxation step (or smoother) to it, leading to an overlapping support of the block-columns of interpolation and giving much more robust performance than unsmoothed (or plain) aggregation. As smoothed aggregation has been demonstrated as a successful parallel preconditioner for a number of structural mechanics applications [2,4,14], we focus on this approach here. Parallel paradigm: domain decomposition We use parallelism based on domain decomposition as found in [17]. The global domain $\Omega$ is divided into $D$ subdomains, yielding $\Omega = \bigcup_{d=1}^{D} \Omega_d$. Domain $\Omega$ holds $E$ elements, and each subdomain holds $E_d$ elements; hence $E = \sum_{d=1}^{D} E_d$. Elements can share nodes and the associated degrees of freedom that lie in multiple subdomains, but no element is contained in more than one subdomain.
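A minimal sketch of the communication this partitioning implies, written with mpi4py; the bookkeeping structures (`shared_with`, `weight`) are hypothetical stand-ins for what an FE code would maintain, not part of the paper's implementation.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def synchronize_shared(values, shared_with):
    """Sum subdomain contributions at interface nodes. `values` is the
    local nodal array; `shared_with` maps a neighbouring rank to the local
    indices of the nodes on the common interface (hypothetical)."""
    for nbr, idx in shared_with.items():
        recv = np.empty(len(idx), dtype=values.dtype)
        comm.Sendrecv(np.ascontiguousarray(values[idx]), dest=nbr,
                      recvbuf=recv, source=nbr)
        values[idx] += recv
    return values

def par_dot(x_loc, y_loc, weight):
    """Global inner product: one all-reduce over all subdomains. `weight`
    counts each shared degree of freedom exactly once (e.g. 1/multiplicity)."""
    return comm.allreduce(float(np.dot(weight * x_loc, y_loc)), op=MPI.SUM)
```

Matrix-vector products thus need only neighbour exchanges, while inner products need a global reduction, matching the cost breakdown described next.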
Elementwise operations can be done independently for each subdomain, but the values of any quantity at shared nodes must be synchronized between subdomains after finishing the operation. The synchronization yields communication between the boundaries of the subdomains. Examples of elementwise operations are numerical integration, matrix-vector multiplications, etc. Parallel implementation of PCG The PCG algorithm is constructed from basic linear algebraic operations. The matrix-vector operation and the inner product require communication between neighboring subdomains and between all subdomains, respectively. All other linear algebraic operations (e.g., vector scaling and addition) can be done locally; i.e., there is no communication with other subdomains. This makes the PCG method easy to parallelize. The other operation that needs to be taken care of explicitly is the preconditioner. In this research, we consider AMG and, for comparison, also diagonal scaling. We note that diagonal scaling is an inherently parallel operation. We elaborate on the parallel implementation of AMG below. Parallel implementation of DPCG The DPCG method given by Algorithm 1 is similar to the standard PCG algorithm, but the parallelization of the DPCG method involves two extra steps: first, the construction of the deflation matrix, $Z$, on each subdomain and, second, the evaluation of $P K p_j$ in each iteration of DPCG. Obviously, the mapping of the deflation matrix, $Z$, onto the subdomains is defined by the partitioning of elements over the subdomains. The computation of rigid body modes only requires the element matrices; hence, no communication is needed for the assembly of the distributed deflation matrix. We only store the non-zero elements of $Z$, giving a small memory overhead. The evaluation of $P K p_j$ can be optimized. Consider $$P K p_j = K p_j - \tilde{Z} E^{-1} Z^T K p_j,$$ where $K \in \mathbb{R}^{n \times n}$ and $Z \in \mathbb{R}^{n \times k}$. Here, $K p_j = y$ is computed as usual, while $K Z = \tilde{Z} \in \mathbb{R}^{n \times k}$ and $E^{-1} = (Z^T K Z)^{-1}$ are computed only once, before entering the Krylov process (iteration loop). Hence, for each iteration of DPCG, we have three extra operations compared to PCG: computing $Z^T y$, solving with $E$, and multiplying by $\tilde{Z}$. Communication between subdomains is needed for the computation of $KZ$, $E$, and $Z^T y$. The entries of the $k \times k$ matrix $E$ are distributed over the subdomains, and its decomposition is determined in parallel. The computation of $Z^T y$ requires one communication involving all subdomains to compute the $k$ parallel inner products. Parallel AMG In recent years, two general-purpose parallel algebraic multigrid codes have been developed, alongside a number of other codes aimed at specific applications. One of these codes, BoomerAMG [25] (included in the Hypre package [19]), focuses on classical (Ruge-Stüben) AMG algorithms and their variants, while the other, ML [21] (included in the Trilinos project [26]), focuses on the smoothed aggregation setting. In our experiments below, we make use of ML and Trilinos for the parallel SA implementation. There have been a number of studies of the performance of parallel AMG codes for solid mechanics applications. Initial two-dimensional results were reported in [52], based on an AMG treatment that first coarsens appropriately along boundaries shared between processors and then treats the processor interiors. Scalability studies for a simplified AMG approach, based on maximal independent set coarsening of nodes, remeshing the coarse node set, and using geometric grid-transfer operators, for both linear elasticity and nonlinear elastic and plastic solid mechanics are detailed in [1].
This method was compared with smoothed and unsmoothed aggregation in [2], where it was found that the simplified approach was less robust than the aggregation approaches, and that smoothed aggregation was most robust and typically not much more expensive than the other two approaches. A comparison of Ruge-Stüben AMG, smoothed aggregation, and a generalized smoothed aggregation approach (using information from local eigensolves to complement the restricted rigid-body modes) was performed in [14], where smoothed aggregation was shown to generally outperform Ruge-Stüben AMG. The generalized form offers even greater robustness, but relies on an expensive preprocessing step. One important issue is the choice of parallel smoother; this was studied in depth in [3], comparing parallel hybrid Gauss-Seidel orderings with polynomial (Chebyshev) smoothers and concluding that polynomial smoothers offer many advantages. For our study, we investigate primarily two options within the ML software. In the first, denoted by SA (AMG), we treat the smoothed aggregation solver as a "black box", providing the minimal amount of information required and following the default software options. In this case, we provide just the matrix (in node-ordered form) and the geometric coordinates of the mesh nodes. In the second, denoted by SA (AMG) / VBMetis, we explicitly provide complete information about the PDE structure and null space. In particular, we provide the degree-of-freedom to node map, including full details about eliminated degrees of freedom due to boundary conditions, and we give directly the rigid body modes of the entire solid body. In both cases, we make use of Chebyshev smoothers. Numerical experiments The meshes of experiments 1 and 2, given by Figs. 1 and 5, are derived from real-life samples of asphaltic material obtained by CT scan. The experiments involve different mesh sizes, yielding 230,000 and 2.9 million degrees of freedom, respectively. The meshes contain a mixture of materials that are subjected to an external force applied to the upper boundary of the volume. Zero-displacement boundary conditions are imposed on three sides of the volume; that is, homogeneous Dirichlet boundary conditions are given for all degrees of freedom in the x,z-, x,y-, and y,z-planes for y = 0, z = 0, and x = 0, respectively. These materials give rise to the coupled partial differential equations given in [18]. In both experiments, we make use of the same set of material parameters. We distinguish between three materials: aggregates, bitumen, and air voids. The corresponding stiffness coefficients (Young's moduli) are given in Table 1 and are the dominating contributions to the entries of the stiffness matrix. As described in Sect. 2, the underlying PDEs of both experiments come from the computational framework in [18]. In this paper, only hyperelastic materials are considered. The constitutive model involves non-linear material behavior; hence, the stiffness matrix of Eq. 6 is a tangential stiffness matrix derived from the linearization of the virtual work equation. The deflation vectors are constructed by computing the rigid body modes of the aggregation of stones, bitumen, and air voids, as proposed in [28]. The numbers of deflation vectors for the two experiments are 162 and 342, respectively. All vectors are sparse, and $Z$ is of full rank. As described in Sect.
4.3, the parallel implementation involves the distribution of the degrees of freedom, and thus of the vectors; hence, the rigid body modes of an arbitrary aggregation of materials may be spread over multiple domains. Apart from variations due to the round-off errors induced by the domain decomposition, the parallel implementation should yield the same number of iterations as a sequential implementation given the same deflation space. As a result, the number of iterations of the DPCG method for a given problem should be invariant under an increasing number of subdomains. The aim of the experiments is to compare the performance and robustness of our deflation method and (optimized) SA for mechanical problems with heterogeneous material coefficients. In Sect. 3.2, we have argued that DPCG is a two-level approach, and that we need a preconditioner to treat both ends of the spectrum of the stiffness matrix. We have seen that SA is designed to solve homogeneous elastic equations in an optimal way. A natural choice for preconditioning of DPCG would be using a "black box" implementation of SA for the "decoupled" stiffness matrices. Hence, we compare PCG and DPCG preconditioned by SA (AMG) as well as PCG preconditioned by SA (AMG) / VBMetis. We also include diagonal scaling as a preconditioner to have a point of reference from which to compare all methods. The stopping criterion for all computations is $\|r_i\| / \|r_0\| < 10^{-6}$, where $r_i$ is the residual vector at the $i$-th iteration. Although we have argued that direct solution algorithms are not the methods of choice for solving these large, 3D problems, we have included the run time of the decomposition of the stiffness matrix using SuperLU 2.5 (distributed memory) [32]. For a fair comparison of PCG and DPCG, we have implemented the methods in Trilinos [26]. All experiments were done at Tufts University (USA) on an IBM 60-node cluster with over 500 64-bit cores (16-48 GB RAM per node, Infiniband interconnect). Due to the complexity of the meshes and limitations of our meshing software, we only take into consideration the strong scaling of the solvers. In Experiment 1, we compare results for 4, 16 and 64 subdomains, where each subdomain corresponds to a computing core. In Experiment 2, we compare results for 4, 8 and 64 subdomains, because of memory limitations of the SA (AMG) / VBMetis solver. Experiment 1 This experiment involves a slice of asphaltic material, represented by a mesh containing 315,270 4-noded tetrahedral elements, yielding roughly 230,000 DOF. The wall-clock times as well as the numbers of iterations of all solvers for all domain decompositions are given in Fig. 2. The DPCG method combined with the SA (AMG) preconditioner has a good performance in terms of iterations, but shows poor parallel performance. Going from 16 subdomains to 64 subdomains gains very little in terms of speed-up. Also, the SA (AMG) preconditioner has three times the setup cost and ten times the operational cost compared to the deflation operator. The PCG method combined with the SA (AMG) / VBMetis preconditioner performs worse than the DPCG method with SA (AMG) in terms of iterations as well as wall-clock time. Although the ratio between setup time and cost per iteration is almost equal for both methods, the overall cost of the SA (AMG)/VBMetis preconditioner is much higher. Again, due to the small problem size, there is no benefit from the parallel implementation. Overall, these results are not surprising.
In terms of iterations, we see strong improvement as we improve the preconditioner for PCG, from over 4,600 iterations for diagonal scaling, to about 630 for SA (AMG), to about 320 for SA (AMG)/VBMetis. Similarly, we see uniform improvements adding deflation to PCG, reducing the iteration counts for both the diagonal scaling and SA (AMG) preconditioners by almost a factor of three. In terms of wall-clock time to solution, however, the much simpler approach of diagonal scaling becomes the clear winner over the approaches based on SA for this example; even though many more iterations of PCG or DPCG are needed, the lack of a setup cost and the much lower cost per iteration of the simpler approach pay off in the end. The $L_2$ norms of the residuals of Experiment 1 are given in Fig. 3. We observe that without deflation, i.e., PCG preconditioned by SA (AMG) or SA (AMG)/VBMetis, not all unfavorable eigenvalues have been removed from the spectrum of $M^{-1}K$. This can also be seen from Fig. 4, where the 50 smallest Ritz values of $M^{-1}K$ and $M^{-1}PK$ are given. Clearly, the deflated systems have no clustering of very small (approximated) eigenvalues, whereas the non-deflated systems, even when preconditioned by SA (AMG)/VBMetis, still contain some unfavorable eigenvalues. Experiment 2 This experiment involves a slice of asphaltic material, represented by a mesh containing 4,531,353 4-noded tetrahedral elements, yielding roughly 3 million degrees of freedom, shown in Fig. 5. The wall-clock times as well as the numbers of iterations of all solvers for all domain decompositions are given in Fig. 6. Again, we see expected performance from the iteration counts. For PCG, these improve from not converging (within 10,000 iterations) with diagonal scaling, to roughly 2,000 iterations with SA (AMG), to roughly 380 iterations with SA (AMG)/VBMetis. Here, the added costs of SA (AMG)/VBMetis are very notable, giving an out-of-memory error on four CPUs. Also as before, we see the immediate advantage of DPCG, leading to convergence in about 9,000 iterations for diagonal scaling, and about 1,200 iterations for SA (AMG). In this case, however, the added expense of SA (AMG)/VBMetis pays off, yielding about a 50% speedup over DPCG with diagonal scaling on 64 CPUs (and a greater speedup on eight CPUs). We note that the SA (AMG)/VBMetis approach is far from a "black-box" preconditioner, and is available only as beta software within the ML package; as such, further improvement may be possible by refining this software and addressing the memory issues that were encountered in obtaining these results. Several interesting observations are possible about the parallel scaling. For the DPCG approaches, the relative cost of the deflation operator increases with the number of subdomains, becoming as expensive as the matrix-vector products and inner products combined; however, the setup time decreases, due to the parallel Cholesky decomposition of $E$. Excellent strong scaling is observed for DPCG with diagonal scaling, giving speedups of 1.8 and 8.0 for increases in the number of subdomains by factors of 2 and 8, respectively. Slightly less impressive speedup of DPCG preconditioned by SA (AMG) is observed, with factors of 1.87 and 6.54 for increases in the number of subdomains by factors of 2 and 8, respectively. This sub-optimal speed-up is due to the SA (AMG) preconditioner, which involves an extra set-up phase that scales poorly.
The speedup of the SA (AMG)/VBMetis approach shows the poorest results, with a factor of only 2.45 when the number of subdomains increases by a factor of eight. This, again, is due to poor scaling of the SA (AMG)/VBMetis preconditioner, although better scaling might be observed with more degrees of freedom per subdomain on the 64-CPU scale. The $L_2$ norms of the residuals of Experiment 2 are given in Fig. 7, and the Ritz values of the preconditioned stiffness matrix derived from the (D)PCG iterations are given in Fig. 8. We observe that the stiffness matrix preconditioned by SA (AMG)/VBMetis yields a more favorable spectrum of eigenvalues compared to preconditioning with the deflation operator combined with SA (AMG). This can also be observed in Fig. 7, as the residual curve of PCG preconditioned by SA (AMG)/VBMetis is steeper than the curve of DPCG preconditioned by SA (AMG), which indicates that the eigenvalues of the spectrum of preconditioned $K$ lie relatively closer to one. However, compared to PCG preconditioned by diagonal scaling and SA (AMG), all unfavorable eigenvalues have been removed from the spectrum by DPCG and by PCG preconditioned by SA (AMG)/VBMetis. For this reason we do not consider the obvious combination of DPCG and SA (AMG)/VBMetis; there are no small eigenvalues left to be treated by DPCG. Remark The results for a direct solver for both experiments are given in Table 2. Using the parallel direct solver, wall-clock times are uniformly worse than those of DPCG for the smaller test problem; for the larger test problem, memory issues prevented completion on all but the largest numbers of cores, where the wall-clock time was again substantially worse than that of all of the iterative approaches considered here. Conclusion We have compared the PCG method and the DPCG method for the solution of large linear systems from mechanical problems with strongly varying stiffnesses of materials. We have compared three preconditioners with PCG and two with DPCG, using two implementations of smoothed aggregation (SA), an adaptation of algebraic multigrid designed for solving elasticity equations, and diagonal scaling. For one implementation of SA, we choose the default parameter set and, for the other implementation, we choose an optimal parameter set for the experiments involved. DPCG offers clear enhancements over standard PCG in terms of both the number of iterations required for convergence and the wall-clock time to solution. It is well suited for parallel computing and can be easily implemented within any existing FE software package with basic parallel linear algebraic operations. The combination of DPCG with diagonal scaling offers an exceptionally low cost per iteration, giving much better wall-clock performance than PCG, even with the SA (AMG) preconditioner. We suspect that DPCG and diagonal scaling combine so well because deflation and scaling are complementary operations, working on the lower and upper parts of the spectrum, respectively. DPCG with the SA (AMG) preconditioner strongly improves the iteration counts over diagonal scaling but, for our experiments, does not improve the wall-clock time to solution. For our larger test problem, the optimized SA (AMG)/VBMetis preconditioner does outperform the much simpler DPCG with diagonal scaling approach; however, this approach requires significantly more software development effort and, as such, may not be readily available for all simulation codes.
Thus, we have demonstrated that the DPCG approach is efficient, scalable, and robust and, as such, is an effective tool in large-scale simulation of the mechanics of composite materials. Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Architectural Layers of Internet of Things: Analysis of Security Threats and Their Countermeasures A pervasive network architecture that interconnects heterogeneous objects, devices, technologies, and services, called the Internet of Things, has prompted a drastic change in the demand for smart devices, which in turn has increased the rate of data exchange. These smart devices are built with numerous sensors which collect information from other interacting devices, process it, and send it to remote locations for storage or further processing. Although this mechanism of data processing and sharing has contributed immensely to the information world, it has recently posed high security risks to privacy and data confidentiality. This paper therefore analyses different security threats to data at the different architectural layers of the Internet of Things, possible countermeasures, and other in-depth security measures for the Internet of Things. The paper identifies device authentication on the IoT network to be of paramount importance in securing IoT systems. This paper also suggests some essential security technologies, such as encryption, for securing IoT devices and the data shared over the IoT network. Introduction Presently, the number of objects connected to the internet is greater than the number of people in the world. As long as more objects gain the capability to directly connect and communicate with other objects or become physical representations of data accessible via the Internet, this gap will continue to grow geometrically. According to Metcalfe's law, the value of a telecommunication network is proportional to the square of the number of its connected users. Recent research conducted by Cisco showed that approximately 50 billion devices will be connected to the global network by 2020, which implies about 6.6 physical devices per person [1]. This increase in connected devices and device-to-device communication has been referred to in many ways: Internet of Things (IoT), Internet of Everything (IoE), Internet of Anything (IoA), Machine-to-Machine (M2M), Industrial Internet of Things (IIoT), to mention but a few. The common aspect of all these terms is the connection of new kinds of objects to the Internet in order to build a connected world. In 1999, the term "Internet of Things" was used by Kevin Ashton for the first time, in the context of supply chain management [2]. Due to the many and varied interpretations of the subject in almost everything from scientific research to electronic marketing, the precise definition of the Internet of Things is still a subject of debate. Most often, it is viewed as a paradigm that permits the connection of people to people, people to things, or things to things [3]. In view of device vulnerabilities, attacks, and threat analysis, IoT may be considered as the communication between physical objects, like mobile phones and other smart devices, that receive and send data and other useful services via public networks. The connection to the Internet is possible because of new technologies, such as RFID, wireless networks, Internet Protocol version 6 (IPv6), and fieldbuses [4]. The objective of IoT is to interconnect machines. Thus IoT surrounds and connects the real world through these physical devices, which are embedded with different types of sensors.
This demands that the security of important information on IoT incorporate different security goals, such as identification, data privacy, integrity, availability, non-repudiation, and confidentiality, so that the threats predicted for the development and interconnection of heterogeneous devices are reduced to a bare minimum, as daily life now depends on information from smart devices. Most security threats on IoT infrastructures occur because of missing encryption on data exchange, weak passwords, insecure data exchange channels, and data leakage [5]. Security Goals The security goals of the Internet of Things, which are the same as the information security triad of Data Confidentiality, Integrity and Availability (CIA), suggest that secure connections and accurate authentication mechanisms should be in place for heterogeneous connections of devices in any network, because vulnerabilities, threats, and breaches in any of these areas could damage the devices and alter the integrity of information shared via this medium [6]. The security goals can be further explained as below. (i). Data Confidentiality: This entails that information is not disclosed to unauthorised users, resulting in users' privacy protection. This can be achieved through data encryption, which gives room for two-way verification between interconnected devices. Biometric verification can also be implemented at the communicating parties' ends. Confidentiality is also achieved by providing secure connections only to the authorised user. In the case of IoT, devices ensure that sensor network nodes do not connect to neighbouring nodes and tags do not transmit their data to unrecognized readers [7]. (ii). Data Integrity: This is integrated in information-sharing channels in order to protect data from cyber criminals during the communication processes, so that data modification cannot be done by unauthorised users without the system detecting and catching the threat. Data integrity is mostly checked using hashing, checksums, and cyclic redundancy checks over the network [7]; a minimal integrity-check sketch is given at the end of this section. (iii) Data Availability: Data availability is one of the major goals of IoT. It means that a mechanism for uninterruptible access to data by its users under all conditions should be in place. Data backup operations can ensure data availability. Attacks such as denial-of-service (DoS) can be prevented through the installation of firewalls on the network in order to ensure data availability to users [8]. Security Requirements [9] (i). User Identification: A security process that ensures proper validation of IoT system users before they use the system [9]. (ii). Tamper Resistance: A security requirement that ensures IoT system security even when the system is in the possession of unauthorised parties who may physically or logically probe it [9]. (iii). Secure Execution Environment: This focuses on a secure code-management and runtime environment designed to protect IoT systems against unauthorised software [9]. (iv). Secure Content: Also known as Digital Rights Management (DRM), this protects the rights of the digital content used in an IoT system [10]. (v). Secure Network Access: This provides secure connection to the IoT network and services only to authorised devices [10]. (vi).
Secure Data Communication: This focuses on maintaining the security goals of IoT information through the authentication of devices, protection of user and entity identities, ensuring the confidentiality and integrity of shared data, and preventing repudiation of communication transactions [9]. (vii). Identity Management: This is an administrative security requirement that ensures proper identification of devices on the IoT network and controls access to resources based on users' rights and restrictions [9]. (viii). Secure Storage: This ensures security goals such as confidentiality and integrity of sensitive data stored in IoT systems [10]. IoT Architecture Within the scope of this research, the general architecture of the Internet of Things is made up of four distinct layers: the Perception, Network, Processing, and Application layers [11,12]. Each of these architectural layers is specific in its functions and tasks, as briefly explained below: Perception/Device Layer This layer is also known as the physical or device layer. It collects data obtained from the real world with the support of sensor nodes and other physical devices, such as GPS modules, Arduino boards, barcodes, and RFID [12]. This in turn helps the layer to aid communication between different physical devices. The primary objective of this layer is to provide services to the network and authentication of devices. Devices in this layer possess unique tags, which permit strong network connection with most devices using Universally Unique Identifiers (UUIDs) [13]. Information gathered at this layer is transmitted to the central processing system. Network Layer This layer is in charge of network management, device communication, and maintenance of information through different protocols, such as MQTT 3.1 and CoAP (Constrained Application Protocol), within the communication in an IoT system. The primary objective of this layer is to collect information gathered by the perception layer and the processing unit and securely transfer it to the other layers [13]. Processing Layer This layer interconnects the physical and network layers. It performs intelligent functions such as automatic evaluation of information, processing data based on intelligent computing, and ubiquitous computing [12]. Application Layer This layer supports context-aware services between connected objects in a pervasive way for end users. Information processed at this layer provides a platform for IoT applications which facilitate user needs in different ways, such as smart homes and offices [12]. Attacks at Different Layers In this section, the various security threats to the confidentiality of data at each layer are briefly discussed. Perception Layer Attacks Attacks on the perception layer tend to tamper with the physical components of IoT; they are relatively difficult to carry out because of the expensive devices and materials required and because the attacker needs physical access to the IoT system [12]. Some examples of these attacks are as follows: (i) Physical Damage: The attacker interrupts the IoT network by attacking the devices. This may be possible due to poor physical security of the infrastructure that hosts the IoT system [12]. (ii) Spoofing: The target here is the RFID system.
Attacks at Different Layers
In this section, the various security threats that endanger the confidentiality of data at each layer are briefly discussed.

Perception Layer Attacks
Attacks on the perception layer tamper with the physical components of IoT. They are relatively difficult to carry out, both because of the expensive devices and materials they require and because the attacker needs physical contact with the IoT system [12]. Some examples of these attacks are as follows.
(i) Physical Damage: The attacker interrupts the IoT network by attacking the devices themselves. This may be possible due to poor physical security of the infrastructure that hosts the IoT system [12].
(ii) Spoofing: The target here is the RFID system. The attacker spreads fake information on the RFID system and makes it appear to originate from a reliable source, thereby capturing information from the network and gaining complete access to it [14].
(iii) Malicious Node Injection: Also known as a Man-in-the-Middle attack, this takes over the communication channel by introducing a new malicious node between the sender and recipient nodes. The attacker then takes charge of data exchange between different nodes in the IoT system [15].
(iv) Node Tampering: The adversary causes damage by destroying a sensor node, or accesses stored information by physically examining nodes in the IoT system with intelligent devices [14].
(v) Tag Cloning: The tags deployed on different devices in IoT systems are mostly exposed, so that their data can be read and easily modified by an attacker who then produces a duplicate tag; the user cannot differentiate between the duplicate and the original data.
(vi) Malicious Code Injection: This attack is performed mainly by physically injecting virus code into a node using plug-and-play devices, in order to gain access to and control of the whole IoT system [15].
(vii) Unauthorised Access to the Tags: This occurs as a result of poor authentication processes in the RFID system, allowing an attacker to modify or completely delete data.
(viii) Replay Attack: This attack exploits the privacy of the device or perception layer: the attacker replays or modifies messages by spoofing the identity and location of nodes in the IoT system [16] (a minimal sketch of a freshness check that resists replays is given after this list).
(ix) Timing Attack: This attack targets the confidentiality of an IoT system: the attacker recovers the encryption key by monitoring and evaluating the time taken to perform encryption. It is sometimes termed a side-channel attack, since the attacker utilises leaked information about device processing duration [17].
(x) Eavesdropping: The attacker exploits the wireless nature of RFID in the IoT system to gain access to confidential information such as passwords [18].
(xi) Social Engineering: The attacker physically communicates with and tricks IoT users in order to gather secret information and gain access to it.
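Regarding the replay attack in item (viii), a common generic defence is to require every message to carry a monotonically increasing counter (or timestamp) that the receiver tracks, so that a captured message cannot be accepted twice. The following is a minimal sketch of that idea; the per-device counter and message format are hypothetical illustrations, not a mechanism described in this paper.

```python
# Replay protection via per-device monotonic counters (a common, generic
# technique; the device ids and payloads here are hypothetical examples).
last_seen: dict[str, int] = {}  # device id -> highest counter accepted so far

def accept(device_id: str, counter: int, payload: bytes) -> bool:
    """Accept a message only if its counter is strictly newer than any
    previously accepted counter from the same device."""
    if counter <= last_seen.get(device_id, -1):
        return False          # stale or replayed message: reject
    last_seen[device_id] = counter
    return True

assert accept("temp-01", 1, b"22.5")        # fresh message accepted
assert not accept("temp-01", 1, b"22.5")    # exact replay rejected
assert accept("temp-01", 2, b"22.6")        # newer message accepted
```

Note that the counter itself must be integrity-protected (for example with the HMAC shown earlier); otherwise an attacker could simply forge a higher counter value.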
Network Layer Attacks
In this type of attack, the attacker's target is the network of the IoT system. Some common examples are:
(i) Wormhole Attack: The attacker receives packets at one network node and tunnels them to another point in the network, replaying them into the network from that point [19]; a variant involves relocating bits and dropping packets between nodes over a low-latency link.
(ii) Flooding: The most common flooding attack is the DDoS flooding attack, in which the network is congested with unnecessary tasks and processes, flooding the IoT network with unnecessary packets (a minimal rate-limiting sketch that mitigates such floods is given after this list).
(iii) Node Replication: The attacker creates a virtual node by copying the identity of an existing node and then sends false messages through random routes to slow down and disrupt the network [20].
(iv) Man-in-the-Middle Attack: The attacker breaches privacy between nodes, accesses confidential data and sometimes takes control of communication by monitoring and interfering with the sensor nodes of the IoT system. This attack may take the form of eavesdropping, routing or replay attacks [8].
(v) Denial of Service: This attack floods the IoT network with traffic, denying IoT devices access to network services [21].
(vi) Traffic Analysis Attack: This attack can be launched on an IoT network using any web browser. Confidential information is accessed from the RFID technology, due to its wireless characteristics, when information about the network is captured; the attacker uses sniffing to accomplish the attack [22].
(vii) Hello Flood Attack: A form of network jamming attack in which the attacker sends useless "hello" messages with the intention of blocking the network channel through a large volume of traffic.
(viii) Sybil Attack: In this attack, neighbouring nodes in a wireless IoT system accept false messages from an attacking node that claims to hold the identities of many nodes [22].
(ix) RFID Cloning: The attacker accesses useful information by mimicking an RFID tag and copying data from a valid tag to another tag [22].
(x) RFID Spoofing: The attacker spoofs RFID signals in order to capture transmitted data and then transmits his own data carrying the original tag ID; by appearing to be the actual source, the attacker can access the IoT system [22].
(xi) Unauthorised Access of RFID: The attacker exploits the poor authentication procedures in RFID systems to gain access to tags, thereby modifying, reading or deleting important data on the IoT network [22].
(xii) RF Interference on RFID: A form of DoS attack targeted at RFID, implemented by sending a noisy signal across the radio channel, thereby stopping all communication in the IoT system [22].
(xiii) Sleep Deprivation Attack: Most IoT sensor nodes use replaceable batteries as their power source, which makes them operate on a sleep routine in order to extend battery life [23]. When a sleep deprivation attack is launched on an IoT system, the sensor nodes are kept unnecessarily busy in order to increase battery consumption.
(xiv) Sinkhole Attack: The attacker creates a tempting sinkhole for traffic from different nodes of the IoT wireless sensor network. The attack targets the confidentiality and privacy of information by obstructing the transmission of packets to their correct destination [23].
(xv) Routing Information Attack: This attack changes the routing information of the IoT network, resulting in dropped traffic and other network transmission errors which prevent data from reaching their intended destination.
(xvi) Selective Forwarding: The attacker restricts some nodes from transmitting or forwarding data packets to the required destination for malicious purposes [24].
(xvii) Routing Threats: This attack occurs when an attacker generates routing loops by altering and falsifying routing information. Network transmission is blocked and the network path is lengthened, leading to an increase in point-to-point delay [12].
(xviii) Jamming of Nodes in Wireless Sensor Networks: This is very common in wireless sensor networks. The adversary stops communication by blocking the communication signal after gaining access to the radio frequencies of the wireless sensor nodes of the IoT system [16].
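As a counterpoint to the flooding attacks in items (ii) and (v), gateways commonly apply rate limiting so that a misbehaving or hostile sender cannot monopolise the channel. The following is a minimal token-bucket sketch; the rates and the idea of one bucket per sender are illustrative assumptions, not a mechanism from this paper.

```python
import time

class TokenBucket:
    """Allow at most `rate` messages per second, with bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the limit: drop or queue the message

bucket = TokenBucket(rate=5, capacity=10)   # e.g. 5 msg/s, bursts of up to 10
accepted = sum(bucket.allow() for _ in range(100))
print(f"accepted {accepted} of 100 back-to-back messages")  # roughly the burst size
```

In a gateway, one such bucket would typically be kept per sender, so that a single flooding node is throttled without affecting well-behaved devices.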
Processing Layer Attacks
This layer is made up of different technologies such as data storage and data processing. One major attack on this layer is the cloud attack. Other attacks on this layer are:
(i) Platform Lower-Layer Attack: The attacker exploits vulnerabilities in lower-layer IoT data before they are secured by the Platform as a Service (PaaS).
(ii) Unauthorised Service Access: The attacker gains unauthorised access to IoT services during data storage and processing, thereby deleting and modifying confidential information.
(iii) Insider Attack: This takes place mostly within organisations using IoT devices, where a malicious insider alters and extracts confidential data.
(iv) Virtualisation Threats: The attacker exploits vulnerabilities in the virtual machine environment to attack the IoT system [24].
(v) Shared Resources: Resource-sharing platforms such as the cloud are vulnerable precisely because resources are shared; attackers exploit this vulnerability to attack IoT networks.

Application Layer Attacks
These attacks are used to destroy the IoT system using malicious code such as spyware, viruses and worms. Some common examples of software attacks are:
(i) Application-Layer Software Vulnerability Attack: Hackers exploit vulnerabilities in the application layer that result from poorly written code, one example being buffer overflow.
(ii) Phishing Attack: The attacker uses special software, activated or installed by users unknowingly, to capture login credentials and other important authentication details in order to gain access to the IoT network and system [10] (a minimal sketch of credential hardening against such theft is given after this list).
(iii) Sniffing Attack: The attacker introduces special software called a sniffer into the IoT network and system in order to eavesdrop on communication and corrupt the IoT system.
(iv) Malicious Code Attack: The attacker injects malicious code such as spyware, worms, viruses and Trojan horses into the IoT system and network in order to modify data, deny end users legitimate services, and even hold the device user to ransom.
(v) Malicious Script Attack: This is mostly used against web applications. The attack cuts off IoT devices communicating via the web by shutting down access to necessary applications and services [10].
(vi) Denial of Service Attack: The attacker tries to attack all users of an IoT network at the same time by injecting a denial-of-service attack into the network, so that authorised users cannot access network resources effectively.
(vii) Cryptanalysis Attack: The attacker aims to cryptanalyse the encryption mechanism of the IoT system in order to recover the security key, or to obtain the plaintext from the ciphertext, without legitimate access [25].
(viii) Side-Channel Attack: The attacker focuses on recovering the encryption key in order to compromise data, using special techniques such as electromagnetic and power analysis.
(ix) Man-in-the-Middle Attack: The attacker intercepts the communication channels and signals of the IoT system in order to collect useful information and to compromise the key exchange process [26].
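Since the phishing attack in item (ii) hinges on stolen credentials being directly usable, one standard hardening measure is never to store plain passwords, but only salted, deliberately slow hashes. The sketch below uses PBKDF2 from the Python standard library; the iteration count and example passwords are illustrative assumptions.

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative; tune to the device's capabilities

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived_key); store these, never the plain password."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def check_password(password: str, salt: bytes, key: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, key)  # constant-time comparison

salt, key = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, key)
assert not check_password("guess", salt, key)
```

The unique random salt defeats precomputed dictionaries, and the high iteration count makes brute-forcing a leaked credential database far more expensive.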
Countermeasures of Perception Layer Threats
The perception layer provides various protections for the physical components of IoT. Some of the security measures to put in place at this layer are discussed below.
(i) Safeguard Physical Infrastructure: The physical infrastructure that houses the IoT system and network, such as buildings, cables, masts and antennas, should be protected from unauthorised access.
(ii) Device Authentication: Malicious devices can be kept from connecting to the IoT network by authenticating new IoT devices whenever they enter the market [27].
(iii) Software Verification: Software authenticity and originality can be verified through cryptographic processes such as hash algorithms combined with the device's digital signature. This can only be implemented on devices with high processing capabilities [27].
(iv) Encryption: Encrypting IoT data using encryption algorithms such as RSA, Blowfish and AES can protect IoT devices against attacks, since the data are turned into ciphertext whose content cannot be read by an attacker [28] (a minimal authenticated-encryption sketch is given after this list).
(v) Error Detection Techniques: To avoid alteration of sensitive information, an error detection mechanism should be available on each physical device, using cryptographic techniques such as hash algorithms with low power requirements [29].
(vi) Data Access Protection: Encryption schemes such as DSA, RSA, Blowfish and DES can provide high data security by preventing the attacker from gaining unauthorised access to data in transit and at rest [14].
(vii) Risk Assessment: High data confidentiality and security against breaches in the IoT network can be achieved through dynamic risk assessment techniques, which provide a means of discovering different types of threats to the network. In most cases, the RFID system runs an auto-kill command on RFID tags whenever an error is discovered using the dynamic risk assessment mechanism, which in turn stops unauthorised access to data [22].
(viii) Protection of Sensitive Information: Privacy of sensitive data is one of the major concerns of all security measures in system and information security. A common technique that hides sensitive information through anonymity of identity is K-anonymity, which protects information by hiding properties such as location and identity.
(ix) Anonymity: An important requirement for maintaining high data confidentiality as data travel through the network is the hiding of private information such as addresses and locations. To achieve this in IoT networks, zero-knowledge and K-anonymity techniques are normally implemented, but K-anonymity appears to be the best suited to IoT devices due to its low power consumption [29].
(x) IPSec Security Channel: Encryption and authentication are the two major security functionalities of an IPSec channel. These prevent node tampering and eavesdropping: encryption ensures the confidentiality of data, while authentication permits a receiver to determine whether the sender of data with a given IP address is genuine or fake [30].
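To illustrate the encryption countermeasure in item (iv), the sketch below uses AES in GCM mode (authenticated encryption), which provides confidentiality and also detects tampering. It assumes the third-party "cryptography" package; the key handling shown is deliberately simplified and illustrative, not a scheme prescribed by the paper.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=128)  # in practice, provisioned per device
aesgcm = AESGCM(key)

reading = b'{"sensor": "temp-01", "value": 22.5}'
nonce = os.urandom(12)                     # must be unique per message under a key
ciphertext = aesgcm.encrypt(nonce, reading, associated_data=None)

# The receiver, holding the same key and nonce, recovers and verifies the data;
# decryption raises an exception if the ciphertext was tampered with.
assert aesgcm.decrypt(nonce, ciphertext, None) == reading
```

GCM is a reasonable default for constrained devices here because a single pass provides both the confidentiality goal and the integrity goal from the CIA triad discussed earlier.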
Countermeasures of Network Layer Threats
Although the IoT network layer is threatened by many types of attack, proper countermeasures can keep it in check. Some of the countermeasures include:
(i) Active Firewalls: These filter traffic, provide passive monitoring (probing) to raise alarms, enforce traffic admission control through authentication, and support bi-directional link authentication. IoT sensors are very often simple, low-power end devices; due to their limited functionality, security processing such as encryption is often handled in hardware [31].
(ii) GPS Location System: Implementation of a GPS system can identify spoofing attacks at the network layer of the IoT system.
(iii) Encryption: Using a strong encryption scheme on IoT network nodes and on the payload of a protocol layer can lower the rate of attack on this layer [31].
(iv) Data Privacy: Data privacy can be ensured by implementing strong authentication mechanisms on sensor nodes to prevent illegal access and preserve data integrity.
(v) Security-Aware Ad-hoc Routing (SAR) Protocol: This protects the IoT network from insider attacks by applying security measures to network packets so that an eavesdropper obtains a misleading result after analysing intercepted packets.
(vi) Authentication: Illegal access to IoT network nodes can be prevented through strong authentication mechanisms and the implementation of secure encryption schemes. This will also greatly reduce Denial of Service (DoS), one of the most common network layer attacks [32].
(vii) Routing Security: Secure routing is needed in virtually all sensor network applications. In order to preserve confidentiality, most routing protocols apply different routing algorithms to the data exchanged between nodes in IoT systems. For routing purposes, source routing can be applied, in which transmitted data are stored in packets after analysis before being sent for processing.
(viii) Hello Flood Detection: Hello message attacks in IoT can be prevented by sending a hello message from a node to determine signal strength; if the strength matches what is expected within radio range, the receiver accepts the routing message and the information about the route.
(ix) Data Integrity: Data integrity can be achieved through cryptographic hash mechanisms that check data as they are transmitted to other nodes. When tampering is detected, error correction processes can also be applied [33].

Countermeasures of Processing Layer Threats
Some important security measures at the processing layer are:
(i) Web Application Scanners: Different IoT front-end threats can be identified using a web application scanner, and web application firewalls can also be deployed on the IoT network to detect potential attacks.
(ii) Data Fragmentation Redundancy Scattering (DFRS): DFRS is a simple and fast method of securing essential data in the cloud by splitting them into fragments and storing these on different servers. The risk of data theft is minimal, since a single fragment carries no useful information about the data.
(iii) Homomorphic Encryption: In this method of data security, ciphertext is re-encrypted before decryption, although high computational power is required [33].
(iv) Encryption: Side-channel attacks can be countered by encrypting data before sending them to the cloud [25].
(v) HyperSafe: This technique protects memory pages from being altered and also restricts the pointer indexes through which monitored data are accessed [34].

Countermeasures of Application Layer Threats
Countermeasures to application layer threats are discussed below:
(i) Data Security: To avoid unauthorised access to data, encryption and secure authentication mechanisms need to be implemented. High confidentiality of data and privacy of the entire IoT system are also achieved via this technique [14].
(ii) Access Control Lists (ACLs): Implementing rules that govern access privileges for data requests helps in monitoring the IoT network, thereby ensuring confidentiality of the system and data privacy. An ACL operates by stopping or allowing incoming or outgoing traffic, and monitors access requests from the many users of an IoT system [33] (a minimal ACL sketch is given after this list).
(iii) Intrusion Detection: This security mechanism provides security solutions by raising an alarm whenever a threat intrudes or uncertain activity is performed on the network [35]. Detection can be achieved through different methods, such as data mining techniques.
(iv) Risk Assessment: An effective security approach can be achieved through the implementation of consistent risk assessment procedures, which may in turn improve the existing network architecture and security plan.
(v) Firewalls: This protection mechanism tends to be the most effective, especially when authentication, threat analysis, access control lists and encryption fail to stop unauthorised users. Authentication and encryption may fail when a weak password is used, but a firewall can block threats exploiting such a vulnerability; it filters packets, so unwanted packets can easily be blocked [35].
(vi) Anti-Malware: Security software such as anti-virus, anti-spyware and anti-adware is essential for the confidentiality, reliability and integrity of the IoT network [36].
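The following minimal sketch illustrates the ACL idea from item (ii) above: a list of allow/deny rules is consulted in order, and the first match decides whether a request passes. The rule format, wildcard convention and default-deny policy are illustrative assumptions, not drawn from any specific product.

```python
# Each rule: (action, user, resource); "*" is a wildcard. First match wins.
ACL = [
    ("allow", "admin",     "*"),
    ("allow", "sensor-01", "telemetry/upload"),
    ("deny",  "*",         "config/*"),
]

def matches(pattern: str, value: str) -> bool:
    return pattern == "*" or value == pattern or (
        pattern.endswith("/*") and value.startswith(pattern[:-1]))

def is_allowed(user: str, resource: str) -> bool:
    for action, rule_user, rule_resource in ACL:
        if matches(rule_user, user) and matches(rule_resource, resource):
            return action == "allow"
    return False  # default deny: anything not explicitly allowed is rejected

assert is_allowed("admin", "config/network")
assert is_allowed("sensor-01", "telemetry/upload")
assert not is_allowed("sensor-01", "config/network")  # hits the deny rule
```

Defaulting to deny when no rule matches is the conservative choice: a newly added device or resource gets no access until an administrator grants it explicitly.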
Performance Evaluation
This paper evaluates and discusses the security threats to, and possible countermeasures for, each architectural layer of an IoT system. The mechanism, effect and purpose of each attack were discussed in detail, as presented in Table 1. In the long run, device authentication on the IoT network appears to be of paramount importance if an IoT system is to remain secure over the internet.

Conclusion
IoT surrounds and connects the real world through physical devices embedded with different types of sensors that can be attacked. This paper has given a broad overview of IoT systems in view of their security goals, security requirements, architectural layers and working principles, the vulnerabilities, threats and attacks on each architectural layer, and possible countermeasures. Future research will focus on attacks peculiar to IoT on 5G networks.
CONTROLLED DRUG DELIVERY FROM A NOVEL INJECTABLE IN SITU FORMED BIODEGRADABLE PLGA MICROSPHERE SYSTEM

OBJECTIVES
The main intention of this research project was to achieve controlled drug delivery of micromolecules and macromolecules, such as proteins, from a novel injectable biodegradable poly(lactide-co-glycolide) (PLGA) microsphere system. This system would overcome some of the disadvantages associated with traditional methods of controlled drug delivery. On injection, the system would come into contact with water from aqueous buffer or physiological fluid and, as a result, form solid matrix-type microparticles entrapping the drug (in situ formed microspheres); the drug would be released from these microspheres in a controlled fashion. The specific objectives of this research project were as follows:
(1) To develop a novel method for controlled delivery of drugs from an in situ forming biodegradable PLGA microsphere system.
(2) To evaluate the effects of various formulation variables on the characteristics of this system.
(3) To determine the effects of formulation, process and storage conditions on the reproducibility and stability of this system, as well as on the stability of the encapsulated proteins.
(4) To modify this novel microencapsulation process to produce in situ formed implants or isolated microspheres, and to compare the characteristics of the three biodegradable devices: in situ formed implant vs. in situ formed microspheres vs. isolated microspheres.

INTRODUCTION
To avoid inconvenient surgical insertion of large implants, injectable biodegradable and biocompatible polymeric particles (microparticles and nanoparticles) could be employed for parenteral controlled-release dosage forms. Microparticles of size less than 250 µm, ideally less than 125 µm, are suitable for this purpose. Biodegradable polymers are natural or synthetic in origin and are decomposed in vivo, either enzymatically or non-enzymatically, to produce biocompatible, toxicologically safe by-products which are further eliminated by normal metabolic pathways. Drugs formulated in polymeric devices are released either by diffusion through the polymer barrier, by erosion of the polymer material, or by a combination of both diffusion and erosion mechanisms. The polymers selected for parenteral administration must meet several requirements, such as biocompatibility, drug compatibility, suitable biodegradation kinetics and mechanical properties, and ease of processing. Although a wide variety of natural and synthetic biodegradable polymers have been investigated for drug targeting or prolonged drug release, only a few of them are actually biocompatible.
Natural biodegradable polymers like bovine serum albumin (BSA), human serum albumin (HSA), collagen, gelatin, and hemoglobin have been studied for drug delivery. The use of these natural polymers is limited due to their higher costs and questionable purity. In the past two decades, synthetic biodegradable polymers have been increasingly used to deliver drugs, since they are free from most of the problems associated with natural polymers. Poly(amides), poly(amino acids), poly(alkyl-α-cyanoacrylates), poly(esters), poly(orthoesters), poly(urethanes), and poly(acrylamides) have been used to prepare polymeric devices for drug delivery. Poly(lactide) (PLA), poly(glycolide) (PGA), and especially the copolymer of lactide and glycolide, referred to as poly(lactide-co-glycolide) (PLGA), have generated immense interest due to their excellent biocompatibility and biodegradability. PLGA has also been approved by the U.S. FDA for a number of clinical applications, including surgical sutures and controlled-release microspheres. PLGA is shown to be biocompatible and degrades to toxicologically acceptable lactic and glycolic acids, which are eventually eliminated from the body. Release of drugs from PLGA microspheres occurs by two mechanisms: (i) diffusion of the drug through a tortuous, water-filled path in the polymer matrix, and (ii) matrix bioerosion (bulk hydrolytic degradation) after sufficient hydration. The actual release is a combination of both processes.

There is particular interest in the controlled delivery of macromolecules like peptides and proteins through PLGA microspheres. Although a wide variety of pharmacologically useful peptide- and protein-based drugs have recently been developed by genetic engineering, their therapeutic use is restricted by certain disadvantages: (i) on oral consumption they are subject to attack by the acidic and enzymatic environment of the stomach and by the enzymes of the brush border membrane of the intestine; (ii) their high molecular weight and size impede their effective transport across the gastrointestinal membranes; and (iii) they have a short biological half-life, and on injection they are quickly metabolized and eliminated. To achieve sustained blood levels of these drugs, minimize their denaturation or degradation, and extend their biological half-life, their delivery by encapsulation in PLGA microspheres has become an interesting approach.

The literature on PLGA microspheres is full of different techniques for their manufacture, in which the microspheres are produced in a free-flowing powder form. Some of the methods reported are: (i) single/double emulsification followed by solvent removal by evaporation or extraction, (ii) phase separation (coacervation), and (iii) spray-drying. Most of these manufacturing processes suffer from drawbacks such as: (i) the microspheres need to be reconstituted (suspended) in an aqueous medium before they can be injected into the body; (ii) the hazards and environmental concerns associated with the use of organic solvents like methylene chloride for the solubilization of the PLGA polymer; and (iii) residual organic solvents remaining in the final microsphere product. A novel implant system has been described which is parenterally administered as a liquid and subsequently solidifies into a gel matrix (implant) in situ, from which the drug is released in a controlled manner.
Although this implant system precludes the need for any surgery for its administration, it has a number of disadvantages: (i) the safety of solvents like N-methyl-2-pyrrolidone used to formulate these systems is questionable and not well documented; (ii) the injection of these liquid implant systems and their subsequent solidification produce non-uniform matrix implants of variable consistency and geometry; and (iii) because the matrix implants formed have inconsistent texture, shape and size, the drug release from them is variable and unpredictable. The present process of microsphere formation is based on the principle of coacervation. This method overcomes the problems faced by the above systems by forming a dispersion of PLGA microglobules ("premicrospheres" or "embryonic microspheres") in an acceptable vehicle mixture (continuous phase), whose integrity is maintained by the use of appropriate stabilizers.

Of serious concern are the problems associated with the oral administration of many drugs. If a drug cannot be administered orally for any of these reasons, a parenteral route of delivery is an alternative. One advantage that a parenteral controlled-release dosage form has over oral controlled-release dosage forms is patient compliance (2). Although an oral dosage form might have good bioavailability, a long-acting parenteral dosage form that is safe and efficacious for days, weeks or months could be beneficial because it ensures that the patient is receiving medication. A parenteral controlled-release dosage form is also preferred over a conventional parenteral dosage form for chronic treatment, where routine multiple injections could be inconvenient and painful. Parenteral controlled-release dosage forms are also effective in site-specific drug delivery, thereby improving efficacy and reducing toxicity. The main disadvantage of these dosage forms is that, once administered, they cannot easily be removed (2). This could be a problem for the patient if a drug was no longer needed, or worse, if it caused an undesirable reaction.

This review provides a comprehensive outlook on different techniques of preparation of various drug-loaded PLGA devices, with special emphasis on the preparation of microparticles. Certain issues concerning other related biodegradable polyesters like PLA and PGA are discussed as well.

HISTORICAL DEVELOPMENT OF DRUG DELIVERY USING PLGA
The discovery of, and synthetic work on, low molecular weight oligomeric forms of lactide and/or glycolide polymers was first carried out several decades ago (3, 5). Methods to synthesize high molecular weight forms of these polymers were first reported by Lowe (3). During the late 1960s and early 1970s, a number of groups published pioneering work on the utility of these polymers for making sutures/fibers (2, 3, 5, 12). These fibers had several advantages, such as good mechanical properties. The biodegradation, biocompatibility, and tissue reaction of PLA and PLGA have been extensively investigated and well documented by many researchers (5, 14).
The first work on parenteral controlled release of drugs using PLA was reported by Boswell, Yolles, Sinclair, Wise, and Beck (3, 5). Since then, an ocean of literature on drug delivery using PLA, and especially PLGA, has been published. Various polymeric devices like microspheres, microcapsules, nanoparticles, pellets, implants, and films have been fabricated from these polymers for the delivery of a variety of drug classes.

SYNTHESIS OF PLGA COPOLYMER
Low molecular weight PLGA can be prepared by direct condensation (polyesterification) of lactic and/or glycolic acids (5, 12). Temperatures as high as 130–190 °C are required for the condensation process, and the water generated is removed by boiling, applying vacuum, purging with nitrogen, or azeotropic distillation with an organic solvent (3, 12). An acid catalyst like antimony oxide increases the reaction rate if used at reaction temperatures below 120 °C, but above this temperature water removal is the rate-limiting step (3, 12). This method yields PLGA with a molecular weight of about 10,000 (12). Low molecular weight PLGA has limited biomedical application, due to its poor mechanical strength and faster degradation (3). Intermediate and high molecular weight PLGA (~10,000–40,000) can be prepared by ring-opening polymerization, using the cyclic dimers (cyclic diesters of lactic and/or glycolic acids) as the starting materials (3, 5, 12, 14). The advantage of this method is that no water removal/dehydration step is needed in the polymerization system (3). Also, the cyclized monomer(s) and the linear form of the polymers produced can be readily purified (3). Compounds of lead, tin, cadmium, zinc, antimony, and titanium have been used as catalysts to initiate the polymerization process (12, 14). Acid-catalyzed bulk polymerization (melt method) for two to six hours at around 175 °C is generally employed for the preparation of PLGA from lactide and glycolide monomers (3). The molecular weight of the resultant PLGA is determined by the concentration of the catalyst added (12). A monomer purity of 99.9% or greater and a monomer acidity of 0.05% or less are required of the starting lactide and glycolide materials (5). Also important are low levels of humidity in the processing area (5).

PHYSICAL, CHEMICAL, AND BIOLOGICAL PROPERTIES OF PLGA
It is important to understand the physical, chemical, and biological properties of the polymer before formulating a controlled drug delivery device. The various properties of the polymer and the encapsulated drug directly influence other factors, such as the selection of the microencapsulation process and the drug release from the polymer device (1). PLA can exist as an optically active stereoregular polymer (L-PLA) and an optically inactive racemic polymer (D,L-PLA) (1, 5, 9). L-PLA is semicrystalline in nature due to the high regularity of its polymer chain, while D,L-PLA is an amorphous polymer because of irregularities in its polymer chain structure (3, 9). Hence the use of D,L-PLA is preferred over L-PLA, as it enables a more homogeneous dispersion of the drug in the polymer matrix (9, 13). PGA is highly crystalline because it lacks the methyl side groups of PLA (3, 9). Lactic acid is more hydrophobic than glycolic acid, and hence lactide-rich PLGA copolymers are less hydrophilic, absorb less water, and subsequently degrade more slowly (1, 3, 13).
The molecular weight and polydispersity index of the polymer are factors which affect the mechanical strength of the polymer and its ability to be formulated as a drug delivery device (3, 5, 12). These properties may also control the polymer biodegradation rate and hydrolysis (3, 12). Commercially available PLGA polymers are usually characterized in terms of intrinsic viscosity, which is directly related to their molecular weight (3). The degree of crystallinity of the PLGA polymer directly influences its mechanical strength, swelling behavior, capacity to undergo hydrolysis, and subsequently its biodegradation rate (3). The resultant crystallinity of the PLGA copolymer depends on the type and the molar ratio of the individual monomer components (lactide and glycolide) in the copolymer chain (1). PLGA polymers containing a 50:50 ratio of lactic and glycolic acids are hydrolyzed much faster than those containing a higher proportion of either of the two monomers (5, 12). PLGAs prepared from L-PLA and PGA are crystalline copolymers, while those from D,L-PLA and PGA are amorphous in nature (3, 5). Gilding and Reed have pointed out that PLGAs containing less than 70% glycolide are amorphous in nature (18). The degree of crystallinity and the melting point of the polymers are directly related to the molecular weight of the polymer (3, 5). The glass transition temperatures (Tg) of the PLGA copolymers are above the physiological temperature of 37 °C, and hence they are glassy in nature (3, 5). Thus they have a fairly rigid chain structure, which gives them significant mechanical strength to be formulated as drug delivery devices. The carboxylic end groups present in the PLGA chains increase in number during the biodegradation process as the individual polymer chains are cleaved; these are known to catalyze the biodegradation process (3, 5). The biodegradation rate of the PLGA copolymers depends on the molar ratio of the lactic and glycolic acids in the polymer chain, the molecular weight of the polymer, the degree of crystallinity, and the Tg of the polymer (3, 5, 13). A three-phase mechanism for PLGA biodegradation has been proposed (21):
1. Random chain scission. The molecular weight of the polymer decreases significantly, but there is no appreciable weight loss and no soluble monomer products are formed.
2. In the middle phase, a decrease in molecular weight is accompanied by rapid loss of mass, and soluble oligomeric and monomer products are formed.
3. In the final phase, the remaining low molecular weight fragments are hydrolyzed completely to soluble monomer products.

The PLGA polymer biodegrades into lactic and glycolic acids (1-3, 5, 12, 13). Lactic acid enters the tricarboxylic acid cycle, is metabolized, and is subsequently eliminated from the body as carbon dioxide and water (1-3, 5, 9). In a study conducted using a 14C-labeled PLA implant, it was concluded that lactic acid is eliminated through respiration as carbon dioxide (22). Glycolic acid is either excreted unchanged in the kidney, or it enters the tricarboxylic acid cycle and is eventually eliminated as carbon dioxide and water (3).

METHODS OF PREPARING VARIOUS PLGA DEVICES
[1] MICROPARTICLES
A number of microencapsulation techniques have been developed and reported to date. The choice of technique depends on the nature of the polymer, the drug, the intended use, and the duration of therapy (1, 2, 4, 5, 10).
The microencapsulation method employed must meet the following requirements (1, 2, 23): (i) the stability and biological activity of the drug should not be adversely affected during the encapsulation process or in the final microsphere product; (ii) the yield of microspheres in the required size range (up to 250 µm, ideally < 125 µm) and the drug encapsulation efficiency should be high; (iii) the microsphere quality and the drug release profile should be reproducible within specified limits. The microspheres should be produced as a free-flowing powder and should not exhibit aggregation or adherence.

A. Solvent Evaporation and Solvent Extraction Process
(1) Single emulsion process
This is essentially an oil-in-water (o/w) emulsion process. The polymer is first dissolved in a water-immiscible, volatile organic solvent, with dichloromethane (DCM) most commonly used. The drug is then added to the polymer solution to produce a solution or dispersion of drug particles (the particle size of the drug added should be < 20 µm) (4). This polymer–solvent–drug solution/dispersion is then emulsified (under appropriate stirring and temperature conditions) in a larger volume of water in the presence of an emulsifier, such as poly(vinyl alcohol) (PVA), to yield an o/w emulsion. The emulsion is then subjected to solvent removal by either an evaporation or an extraction process to harden the oil droplets (10). In the former case, the emulsion is maintained at reduced or atmospheric pressure and the stirring rate is reduced to enable the volatile solvent to evaporate (4, 10). In the latter case, the emulsion is transferred to a large quantity of water (with or without surfactant) or another quench medium, into which the solvent associated with the oil droplets diffuses (4, 10). The solid microspheres so obtained are then washed and collected by filtration, sieving, or centrifugation (4). These are then dried under appropriate conditions or lyophilized to give the final free-flowing injectable microsphere product. It should be noted that the solvent evaporation process is in a way similar to the extraction method, in the sense that the solvent must first diffuse out into the external aqueous dispersion medium before it can be removed from the system by evaporation (4, 10). DCM and water are used as the dispersed and continuous phases, respectively. DCM is widely used because it is a good solvent for the polymers and, due to its high volatility, can easily be removed by evaporation. A major problem with the use of DCM is its potential toxicity (28); chlorinated solvents in general are considered hazardous to the environment and undesirable for use in manufacturing processes (28).

Combinations of evaporation and extraction have been reported (47). In one approach, most of the acetone was first allowed to diffuse out from the dispersed organic phase (a chloroform–acetone mixture) into the external aqueous phase, followed by gradual evaporation of the residual solvents to give the final microspheres. In the other, the o/w emulsion was first subjected to solvent (DCM) evaporation for a certain period until semisolid droplets were obtained, and the residual DCM was then removed by extraction in a large volume of water. Microspheres from the evaporation–extraction process were less porous and exhibited better encapsulation than those prepared by the extraction–evaporation process (47). An extraction process using lidocaine base resulted in an encapsulation efficiency of less than 10% (33).
The same group also reported a better product from the extraction process for the drug ketoprofen, in terms of drug content, loading efficiency, particle size, and surface features, as against the evaporation process.

(2) Double (multiple) emulsion process
The double emulsion process is essentially a water-in-oil-in-water (w/o/w) method and is best suited to encapsulating water-soluble drugs like peptides and proteins. The use of stabilizers for the inner emulsion has been reported to cause a reduction of the microsphere size and an increase in particle porosity due to better stabilization of the inner w/o emulsion (100); other researchers have also reported the use of L-α-phosphatidylcholine (112). In another study, it was found that a decrease in the DCM phase volume yielded particles with a dense core (81). The entrapment efficiency of the drug increased with decreasing drug loading and increasing particle size (84); however, other groups have found no such effect. Cohen et al. have reported that for microspheres in which the inner emulsion was prepared using low shear (e.g., vortex mixing), the particles were large in size and the drug encapsulation was low, as compared to microspheres in which the inner emulsion was prepared using high shear (e.g., probe sonication), which yielded smaller particles with higher encapsulation efficiency (81). However, Sah et al. reported no effect of the shear rate (used to prepare the o/w emulsion) on the encapsulation efficiency and the final particle size of PLA/PLGA microcapsules; particles prepared at a low shear rate were, however, more porous than those prepared at a high shear rate (116). Sustained release following administration by the s.c. and i.m. routes (85, 86, 88), and a three-month release profile following a s.c. injection (87), have been reported by these researchers.

B. Phase Separation (Coacervation)
The phase separation method, unlike the o/w emulsification method, is suitable for encapsulating both water-soluble and water-insoluble drugs, since it is a non-aqueous method. However, the coacervation process is mainly used to encapsulate water-soluble drugs like peptides, proteins, and vaccines. The addition rate of the first nonsolvent should be such that the polymer solvent is extracted slowly, so that the polymer has sufficient time to deposit and coat evenly on the drug particle surface during the coacervation process (4). The concentration of the polymer used is important as well, since too high a concentration would result in rapid phase separation and nonuniform coating of the polymer on the drug particles. Due to the absence of any emulsion stabilizer in the coacervation process, agglomeration is a frequent problem in this method (4): the coacervate droplets are extremely sticky and adhere to each other before the complete phase separation or hardening stages of the method. Adjusting the stirring rate or temperature, or adding an additive, is known to rectify this problem (4). Unlike the solvent evaporation/extraction process, the requirements on solvents for the polymer are less stringent, since the solvent need not be immiscible with water and its boiling point can be higher than that of water (4).

CONCLUSION AND FINAL REMARKS
The salient features of the novel microencapsulation process and the drug delivery system described in this research project are as follows:
(1) The system excluded the use of unacceptable organic solvents like methylene chloride, and used an acceptable vehicle mixture instead to prepare biodegradable PLGA microspheres.
(2) The system formed drug-containing PLGA microglobules ("premicrospheres" or "embryonic microspheres") which can be considered precursors of the final microsphere product; on coming into contact with water, these hardened to form discrete PLGA microspheres (in situ formed microspheres), which subsequently exhibited a non-variable, predictable, and controlled drug release profile.
(3) Unlike the traditional methods, this system precluded the need for reconstitution of the PLGA microspheres, as they are formed in situ.
(4) Various formulation variables affected the characteristics of this system.
(5) The formulation and process conditions did not adversely affect the physical stability of the encapsulated protein drugs.
(6) Besides in situ forming microspheres, the novel microencapsulation method can be modified to produce in situ formed implants or isolated microspheres; these drug-loaded devices exhibited different characteristics.
(7) This research project makes a significant overall contribution to knowledge of the underlying theoretical principles of drug delivery through biodegradable devices and, in particular, of the problems associated with protein drug delivery.
(8) The novel nature of the system provides a high probability that a patent application will be filed.
The Loss of Diversity in the Anthropocene: Biological and Cultural Dimensions

Theories of nationalism emphasise its standardising effects. Ernest Gellner compared the pre-nationalist world to a painting by Kokoschka (a colour extravaganza) and the world of nationalism to one by Modigliani (calm, monochrome surfaces), while Benedict Anderson showed how the standardisation of language through the medium of printing was a condition for shared national identities. In this article, homogenisation remains a concern, but the empirical framework differs from that of late 20th century theory. Taking its cue from Charles Mann's 1493, a study of the world after Columbus in which the term Homogenocene was proposed, the article shows how homogenisation is a key element in modernity, and analyses some implications of its recent acceleration. The effects of economic globalisation are detrimental to both biological and cultural diversity, since the Anthropocene era refers not only to a reduction of biological diversity but also to the incorporation of cultural groups into market economies, the loss of languages and the loss of traditional livelihoods. The article then briefly surveys some responses to the upscaling of economies, the flattening of ecosystems and the growing power of corporations. The loss of flexibility is countered in a number of ways, from attempts to restore damaged ecosystems to groups defending their cultural and political autonomy. The analysis argues for a broad definition of politics (seen as the political), thereby questioning the ability of the state to solve the dilemma, which is a dual one, relating simultaneously to cultural and biological loss. The conclusion is that upscaling (e.g., to the global system) is usually part of the problem rather than the solution, and that sideways scaling may address the shortcomings of downscaling (e.g., to the community level).

INTRODUCTION
In a world consisting of more than two hundred sovereign states in competitive relationships, shared global challenges are difficult to deal with. Foremost among these are currently climate change and environmental destruction. An urgent question for scholars and policymakers concerns whether solutions are to be found by upscaling or downscaling: should more power be allocated to the United Nations; does the world need more strongly phrased or more binding climate agreements? Or are the proposed large-scale solutions rather part of the problem, since they fail to take diversity and local agency into account, and usually come to naught, as international treaties on climate have so far scarcely been followed up in practice?

The aim of this article is to address the question of how to respond effectively to the collective global challenge of anthropogenic climate change. I will give an account of the present world of overheated global modernity, its origins and some of its characteristics, with an emphasis on homogenisation as a central feature of the modern world. Both cultural and biological homogenisation, or tendencies towards monoculture, are described, and the parallels and differences between the "flattening" of cultural diversity and the impoverishment of ecosystems are shown to be results of imperial expansion and modern capitalism. The outcome will be analysed as a loss of semiotic freedom and flexibility.
This dual process, it is subsequently argued, is frequently a result of upscaling, creating a growing gulf between life on the ground and the level of decision-making, as well as unintended consequences leading to global tragedies of the commons. I finally describe briefly some forms of resistance, identifying countermovements attempting to reinstate diversity, both in the realm of culture and in that of ecology. These attempts could come from indigenous groups, but just as easily from concerned middle-class people in the OECD or even startup businesses, yet rarely from major corporations or governments. This is why the conceptualisation of politics in the present context has to move beyond institutional politics and look at the way in which political agency works in practice.

The parallels between biological and cultural diversity should not be exaggerated. The time scales differ enormously. Evolution is driven by "the blind watchmaker" (Dawkins 1986) of natural selection, while cultural differentiation relies on human consciousness and creativity. Yet a comparison can be fruitful at this historical moment, when the homogenising forces of globalisation threaten and reduce both biological and cultural diversity. We may be witnessing a sixth extinction (Kolbert 2014) in nature, and it is estimated that only ten per cent of the roughly 6,000 languages spoken today are safe from extinction (Crystal 2014). Some estimates suggest that one language loses its last native speaker every two weeks. Both processes have accelerated in the last few decades. Only four per cent of the mammalian biomass on Earth now belongs to wild animals (Elhacham et al., 2020), and seventy per cent of the birds in the world are domesticated, mainly poultry. The reduction of variation and of difference thus seems to apply both in the natural and in the sociocultural world, often with similar causes and comparable results.

THE HOMOGENOCENE
Seen with the hindsight afforded by the world of the 21st century, it is a striking fact that influential theories of modernity in the last century rarely included environmental destruction and climate change as major concerns. By contrast, a related feature of the contemporary world has been studied and theorised since the advent of social theory, namely homogenisation and standardisation as central features of modernity. The tendency of the modern state and the capitalist economy to iron out differences and create homogeneity has been necessary both at the political level (the nation-state, emerging in the 19th century, required cultural flattening) and in the world economy (which is increasingly globalised, often following Ricardo's principle of comparative advantage). The urbanisation and increased differentiation of modern societies in the North Atlantic world was already a major concern in late 19th century social theory. For example, Tönnies's distinction between Gemeinschaft and Gesellschaft ("community" and "society"; Tönnies 1963 [1889]) identified a shift in the mode of social organisation and value orientation towards greater individualism and anonymity. Similarly, Durkheim's contrast between mechanical and organic solidarity (Durkheim 1997 [1883]) referred to a transition from relatively undifferentiated rural societies to societies with an advanced division of labour, and the perhaps most celebrated of all classic social theorists, Marx and Weber, both wrote copiously on the implications of these radical transformations.
The reduction of cultural diversity as a result of colonialism and its accompanying modernisation was a concern already for early 20th century anthropologists, for example in W. H. R. Rivers's (1922) anxiety over the assumed population decline in Melanesia, and in the "salvage anthropology" promoted in the United States by Franz Boas and his students, who were scrambling to save indigenous cultures from oblivion before they vanished, as they were predicted to do. More recently, research on nationalism and globalisation has addressed questions of social and cultural homogenisation. Both Gellner (1983) and Anderson (1983) describe a historical moment in which a world of many small differences has been transformed into a world of just a few major ones, with Anderson referring to the standardising effects of print capitalism, Gellner to the implications of the industrial revolution. In a memorable allegory, Gellner compares the modern industrial world to a painting by Modigliani (large, calm, monochrome surfaces), contrasting it with a mainly agrarian world reminiscent of a painting by the expressionist Kokoschka, known for his intense use of colour. A decade later, Castells (1996) wrote about the emerging global network society, which produces a common language for talking about both similarities and differences owing to intensified contact across borders. This situation was, incidentally, described almost avant la lettre by McLuhan (1994 [1964]), who was nevertheless aware that "the global village" was not a peaceful place, but rather one fraught with friction and conflict of the kind described decades later by Barber (1995) as Jihad versus McWorld. Later still, Ritzer (2004) wrote about what he calls the globalization of nothing, which refers to generic phenomena with no discernible local provenance, spreading rapidly as a consequence of a flattening global modernity which renders everything comparable to everything else.

Such theoretical perspectives on the present era offer important insights into global cultural homogenisation and its accompanying frictions, but climate and the environment are conspicuously absent in all these analyses. By now, it is nonetheless difficult to speak credibly about the human condition under accelerated globalisation without recognising that environmental destruction and climate change are major issues and fundamental political challenges. This shift represents nothing less than a watershed: speaking about international relations, global inequality, nationalism or economic globalisation without mentioning climate or the environment now seems about as dated as talking about development in the 1980s without a gender perspective.

A fifth of the way into the twenty-first century, human domination of Earth is such that the term Anthropocene has become widespread as a general description. Since the onset of the industrial revolution in Europe, human activity and expansion have transformed the planet in unprecedented ways, and change continues to accelerate in a number of domains. This situation represents an escalating problem for all of humanity, indeed for all life on the planet. The challenges for research and theory are enormous, and the Anthropocene moment may well be seen retrospectively as a turning point in the social sciences and humanities (Mathews, 2020). Ecological and environmental perspectives on politics and the human condition have never been absent, but they have become mainstream in the social sciences and humanities only recently.
A reasonable starting-point for the current growth of theoretical and empirical literature on the Anthropocene could be the moment when the term itself was introduced around the turn of the millennium, coined independently by the atmospheric chemist Paul Crutzen and the biologist Eugene Stoermer. Crutzen was also the co-author, with his colleague Will Steffen and the historian John McNeill, of a much cited article on social aspects of climate change (Steffen et al., 2007). McNeill is the author, with Peter Engelke (McNeill and Engelke 2016), of a book about "the great acceleration" since 1945, describing it mainly as one of human expansion and environmental destruction. In a recent review article, Syvitski et al. (2020) identify 1950 as the take-off point for the new epoch, showing a rapid increase in both population and energy consumption from that year onwards. In the last couple of decades, the literature on climate, the environment and the human condition has grown exponentially in the humanities and social sciences. However, few have paid systematic attention to the implications of the dual process of ecological and cultural homogenisation. In one of the few studies which takes on the drive to homogeneity in both domains, Charles Mann (2011) coined the term Homogenocene as a label for the modern world, characterised by unprecedented, and accelerating, flows of people, pests, crops, and forms of political domination. Mann takes a longue durée perspective on homogenisation, arguing that the seeds of the current era of monocultures, species extinction and invasion, language death and ubiquitous consumerism were sown at the time of the European conquests; tellingly, the book introducing the term Homogenocene is titled 1493. Global homogenisation has gained pace since its beginning at the start of the Columbian exchange (Crosby 2003 [1972]). China, in important respects culturally quite distinct from the North Atlantic world, is now competing on a par with the latter in the global economy, and Chinese citizens seem to be no less devoted to the consumption of manufactured goods than Westerners. Comparability along several axes becomes more feasible than in a past when cultural differences overshadowed the emerging similarities.

ENERGY: THE TRIPLE BIND
This section is about politics in the Anthropo- or Homogenocene, and energy is a key factor. Perhaps the most influential interdisciplinary writer on energy is Václav Smil (2017), who takes a historical, comparative and contemporary view of energy. His analysis of energy transitions, especially of the shift from muscle power to machinery, makes it possible to understand why megacities have become possible in the present era, since the size of pre-modern cities was limited by the supply of energy, which had to be produced by people and beasts of burden. Energy is also a key factor in the loss of flexibility characterising our era: a society committed to high energy use can only with great difficulty, and with painful sacrifices, return to being a low-energy society. Like language, money and mechanical time, energy renders societies comparable by producing a shared set of parameters for evaluating them (these days both in terms of development and affluence and in terms of ecological sustainability). A focus on energy also indicates the difficulties of Anthropocene challenges. As shown by many scholars, most recently Vogel et al. (2021), the correlation between energy use and life satisfaction is clear, even if the literature is not unanimous.
There can be no easy transition from a high-energy society to a sustainable one, especially in light of the rapid global population growth of the last two centuries (Wilhite 2016; Hoff, Gausset, and Lex 2019). The archaeologist Joseph Tainter has analysed the causes of civilizational collapse in the past (Tainter 1988; Tainter 2014), a perspective subsequently popularised by Jared Diamond (2005). Tainter indicates ways in which contemporary societies can learn from archaeological research when faced with urgent or simmering crises. In his comments on the present, which draw heavily on the collapse of the Roman and Maya empires, environmental destruction comes across as just one factor in accounting for the decline of complex societies. In his view, the decisive cause consists in decreased marginal returns on investments in energy (EROI), owing to population growth and the subsequent intensification of food production with decreasing returns, coupled with growth in bureaucratic, logistic and transport costs. Since the late 18th century, we have been able to exploit enormous amounts of energy, at first just in the shape of abundant and easily accessible coal deposits, subsequently through the exploitation of oil and gas for the betterment of humanity. The fossil fuel revolution enabled us to support a fast-growing global population with seemingly insatiable desires for consumption. Yet the cost of extracting fossil fuels increases as the low-hanging fruit is depleted. At the same time, production relying on fossil fuels is tantamount to destruction (Hornborg 2019) in a dual sense, since we are simultaneously eating up capital which it has taken the planet millions of years to produce, and undermining the conditions for our own civilization by altering the climate and ruining the environment on which we rely. We are entangled in a triple bind, a wicked trilemma where sustainability, growth and reliance on fossil fuels cannot be reconciled: only one of the three is possible (see Bateson et al., 1956 on the double bind). Coal and its close relatives oil and gas, the salvation of humanity for two centuries, are now becoming our damnation, and there is no easy way out. The lesson from cultural history may nevertheless be that lean societies, decentralised and flexible, with less bureaucracy than farming, fewer PR people than fishermen, are the most sustainable in the long term, and to this possibility I shall return. As Tainter remarks: "Complex societies . . . are recent in human history. Collapse then is not a fall to some primordial chaos, but a return to the normal human condition of lower complexity" (Tainter 1988: 198). This is an insight with potential implications for a politics of the Anthropocene.
OVERHEATING AS A CONDITION FOR THE HOMOGENOCENE
A further elaboration of Anthropocene effects may apply the concept of overheating in order to interrogate the acceleration of acceleration since the end of the Cold War and the coming of the Internet and mobile telephony, around 1990, when changes in a number of interrelated domains have taken off at ever-increasing speed, from urban growth in the Global South and international trade to mining and international travel (Eriksen 2016; Eriksen 2018). The current human population of nearly eight billion (compared with one billion in 1800 and just two billion as late as 1920) travels, produces, consumes, innovates, communicates, fights and reproduces in a multitude of ways, and we are increasingly aware of each other as we do so.
The steady acceleration of communication and transportation over the last two centuries has facilitated contact and made isolation difficult, and is weaving the growing global population ever closer together, affecting cultural differences, local identities and power relations. Indeed, as decades of research on collective identification have shown, intensified identity management and the assertion of group boundaries are a likely outcome of increased contact and perceived threats to group integrity. A general formula is that the more similar we become, the more different we try to be, although it could be added that the more different we try to be, the more similar we become, since there is a shared global grammar for the effective expression of uniqueness. The standardisation of identity currently witnessed in nationalism and religious revivalism is a feature of modernity, not of tradition, although it tends to be dressed in traditional garb. Tradition is traditional, but traditionalism is modern.
Ranging from foreign direct investments and the number of internet connections to global energy use, urbanisation in the Global South and increased migration rates, rapid transformations impact social life in many ways, and have in some respects visibly stepped up their pace since the 1990s. Dramatic alterations to the environment, economic transformations and social rearrangements are the order of the day in so many parts of the world, and in so many areas, that it may not be hyperbole to speak of the global situation as being overheated. Overheating does not merely designate climate change. In physics, heat is simply a measure of molecular speed, and translated into the language of social science, overheating can refer to fast change. The changes brought about by modernity have unintended, often paradoxical consequences, and when changes accelerate, so do the unintentional side-effects of changes. The term overheating calls attention to both accelerated change and the tensions, conflicts and frictions it engenders, as well as, implicitly, signalling the need to examine, through dialectical negation, the possibility of deceleration or cooling down.
Generally speaking, when things are suddenly brought into motion, they create friction; when things rub against each other, heat is generated at the interstices. Heat, for those who have been caught unawares by it, may result in torridness and apathy, but it may also trigger a number of other transformations, the trajectories of which may not be clear at the outset. When water is brought to the boiling point, for instance, it changes into a different substance. In a similar fashion, we arguably find ourselves at a "systemic edge" these days, as economic, social and cultural forms of globalisation are expanding into ever new territories, often changing the very fundamentals of customary life for those who find themselves taken in by the whirlwinds of change. These processes are not unilaterally negative or positive for those affected by them, since what may be perceived as a crisis by some could very well represent positive opportunities for others, and the potential for spontaneous transformative moments is always present. Even climate change is sometimes welcomed, for example in cold regions where agriculture becomes feasible, or even further north, where the melting of the Arctic ice creates exciting opportunities for oil companies and may lead to the opening of new shipping routes.
Overheating consists in a series of unintended, and interrelated, consequences triggered by global neoliberal deregulation, technological developments rendering communication instantaneous and transportation inexpensive, increased energy consumption, and a consumerist ethos animating the desires of a growing world population. One significant aspect of overheating is the lack of a thermostat or governor. There is no instance which has the authority to order the Anthropocene world to cool down owing to its destructive effects. As a result, runaway competition continues to escalate, notwithstanding the sudden break caused by the Covid-19 pandemic. This is one reason why a sustainable politics of the Anthropocene urgently needs to be theorised and conceptualised.
Overheating can be identified in many domains. Tourism has increased sixfold since the late 1970s, from 200 million to more than 1.2 billion international tourist arrivals annually in 2019. Global energy consumption, which has increased by a factor of thirty since Napoleon Bonaparte's exile, has doubled since 1975. Capitalism, globally hegemonic since the nineteenth century, is now becoming universal in the sense that scarcely any human group now lives completely independently of a monetised economy. Traditional, often communal forms of land tenure are being replaced by private ownership; subsistence agriculture is being phased out in favour of industrial food production, siphoning former peasants into the informal sector in cities; the affordances of the smartphone replace orally transmitted tales; and by 2007, more than half of the world's population lived in urban areas. By the middle of this century, the proportion may be seventy per cent. The state by now enters into people's lives almost everywhere, though to different degrees.
PLANTATIONOCENE
The overheated Anthropocene was not an inevitable outcome of 1493. Other trajectories are easily imaginable. Nonetheless, the convergence and mutual reinforcement of the scientific revolution after the Renaissance, the economic growth in the imperial centres resulting from increased trade and pillage, slavery and plantations, the technological advances resulting at least partly from competition between the early modern European states, and the incipient secularisation leading to faith in progress and development replacing Christian dogma, encouraged the growth of a capitalist world economy, as famously analysed by Wallerstein (1974-79) and, seeing it from the perspective of the colonised, by Wolf (1982). The plantation, described by Mintz (1985) as a "proto-factory" based on standardisation, mass production and the disposability of labourers, contributed massively to the economies benefiting from it. The great homogenisation was under way. For centuries, species of plants and animals were deliberately introduced to the colonies (and elsewhere: silkworms were smuggled out of China as early as the sixth century CE). Tropical botanical gardens were experimental sites for exploring agricultural potential. Cattle were shipped to Argentina, maize to East Africa, sugar cane to the Caribbean, and so on. Only in the last few decades have introduced species come to be seen as a problem rather than a solution. Species have migrated since the beginning of life on Earth, and, to note the parallel with cultural diversity, cultures have influenced each other since we started making abstractions many thousands of years ago.
The field of biogeography is the study of the dissemination of species in evolutionary time, and barriers such as mountain ranges, climatic zones and open stretches of ocean have been of particular interest. On oceanic islands, and on the isolated continent of Australia, evolution could take separate paths. The giant tortoises of the Galápagos, the dodo on Mauritius and the Komodo dragon on a handful of Indonesian islands could thrive for millions of years in the absence of competition or devastating predation. The temporal axis of cultural history is much shorter, but the patterns are comparable. In dense forests, barren semi-deserts and narrow mountain valleys, cultural forms evolved which long had limited contact with the outside world. In New Guinea, mountainous and forested, horticulture has probably been practised for as long as grain production in Mesopotamia. When Europeans arrived in its highlands less than a century ago, it appeared to them as if time had stood still. Headhunting remained widespread, metals were unknown, and several hundred languages were spoken, most of them unrelated to any language spoken elsewhere. Along the northern coast, where there had been continuous contact with traders, pirates, castaways and eventually missionaries and colonial administrators, the situation was different. Most of the inhabitants spoke Austronesian languages, related to other languages from Madagascar to Rapa Nui.
The ocean has always been a road, both biologically and culturally speaking, its islands and ports crossroads and hubs of migration, hybridisation, creolisation, and exchange. This road was macadamised and turned into a smooth highway in the centuries following 1492. Eventually, the territorial expansions of animals and plants on land, natural rafts and migratory birds were no longer needed for species to spread. Human migrations might now take the form of transatlantic slavery, enforced labour in silver mines and movement into growing cities both in and outside of Europe. States and empires took shape worldwide, and they increasingly began to resemble each other, especially after the First World War. Again, it needs to be pointed out that such exchanges and movements existed before 1492 as well; one need only think of the trade networks of the Roman empire or the slave raids of the Moors. Yet the scope, extent and velocity of these exchanges started to increase, with serious unintended side-effects for people and nature. Since the end of the Cold War, it is as if all speed limits on the global highways have been abandoned. Changes now take place at a rate making it difficult for researchers and commentators to follow them; for example, climate change projections are uncertain and are continuously being modified.
In 1493, Mann devotes a great deal of attention to food production, and one of his concerns is the reduction of biological diversity in an era dominated by the logic of the factory and the plantation, where the entire world is considered a market. When an oil palm plantation replaces a rainforest, not only do a variety of trees of different species disappear, but so do microorganisms, insects, the birds feeding on the insects, rodents, lizards, a diverse undergrowth and the fungal networks helping to sustain the forest. The soil composition changes, and the entire biotope is simplified and standardised. I am writing this in a cabin on the south-eastern coast of Norway, where the cod, until recently ubiquitous, has all but been driven to extinction locally.
At the same time, fish farming, the cotton plantations of the sea, is booming. An objection similar to that directed at plantation monoculture was raised against industrially produced goods in the nineteenth century, when guilds and connoisseurs criticised them for being simplified, identical and bland. Yet mass production turned out to be profitable, commodities became cheaper, and the standard of living improved. The Green Revolution in agriculture has led to comparable effects. Productivity has increased and mass starvation has nearly been eradicated in large countries like India, but the price to pay is a loss of diversity and flexibility. In The McDonaldization of Society, Ritzer (first ed. 1993) argues along the same lines, updating Weber on rationalisation. He describes a world of production and consumption where upscaling, simplification and standardisation dominate. Large chains outcompete smaller businesses, and common denominators rule because they generate the most revenue. In the realm of culture, it is more difficult to measure diversity than in biology, and Ritzer refrains from an assessment of entire cultures. If it can reasonably be claimed that each language conjures up a world with some unique features, predictions of language death suggest that the cultural diversity in the world is faced with a mass extinction comparable to the observable reduction in biological diversity.
WHAT OF THE NEW DIVERSITY?
The claim that cultural diversity in the world is being reduced demands a closer examination. For is it not a fact that precisely this moment of accelerated globalisation produces a plethora of new cultural forms owing to transnational communication and migration? The concept of super-diversity has been suggested by Vertovec (2007) in order to describe the diversification of diversity, especially as it can be observed in cities, the cultural crossroads par excellence. His observation is valid, and it is true that new identities are continuously being produced (religious, ethnic or post-ethnic, pertaining to gender), but they tend to conform to a uniform, global grammar. Across the world, there are people who emphasise their uniqueness, but they usually do so in the same ways, conforming to individualism and consumerism and choosing among the alternatives on offer in the supermarket of individual choice. Ethnicity does not result from cultural differences, but amounts to ideologies of cultural difference. Ethnicity consists in making cultural differences comparable, meaning that in order to communicate their difference, people must first attune themselves to a transcultural conversation about cultural difference. Before the Homogenocene, different peoples could be unintelligible to each other. In Tristes Tropiques, Lévi-Strauss (1976 [1955]) describes an encounter with a Brazilian indigenous group in the 1930s as if there were an invisible glass wall between them: they could see each other, but communication was impossible. The great leveller of modernity, producing what Gellner (1983) spoke of as cultural entropy, enables communication and comparison. Formerly, the other could come across like Wittgenstein's lion: if it could talk, we would not understand what it said. The reduction of diversity is not without its benefits. While it did reduce crop diversity, the Green Revolution saved millions of lives by concentrating on a few highly productive cereals.
The advantages of using English as an international language are similarly obvious, and arguably enable many to expand rather than limit their cultural repertoire. The new forms of diversity led Hannerz to argue, in a rejoinder to Gellner, that a "return of Kokoschka" (Hannerz 1996) had taken place in the new, diverse cultural settings. Similarly, invasive species have sometimes found vacant niches and led to an increased diversity in local ecosystems (Thompson 2014). At the same time, the underlying grammar is simplified and standardised. In the realm of culture, the anthropologist Clifford Geertz memorably quipped: "[C]ultural difference will doubtless remain-the French will never eat salted butter. But the good old days of widow burning and cannibalism are gone forever." (Geertz 1984: 105). UNESCO did not see this distinction when it produced the report Our Creative Diversity (UNESCO 1995). The authors celebrated cultural diversity while at the same time promoting a global ethics. Everybody should, in other words, be encouraged to be different and unique, but only in so far as they followed the established rules. They had to become similar in order for their uniqueness to be legitimate. Handicrafts, yes. Headhunting, no. In a manner resembling the new cultural diversity, biological diversity is being safeguarded in national parks, zoos, and seed banks, but outside the reserves, the tendency is unequivocal. The loss of variation is indisputable as regards both culture and biology. This reduction of options leaves us with reduced flexibility, and the systemic effects are potentially catastrophic.
EXTINCTIONS
History never has a single direction, unless one is imposed by historians. Different parts of a culture change at different speeds. Norwegians will continue to be devoted to the outdoor life, and in Melanesia, people will still sacrifice pigs to the ancestors, although they now have smartphones and take part in a monetary economy. It may well be the case that English suppresses many small languages, but as a compensation, the English language becomes richer and more diverse, with many local variants and dialects. Yet there are striking parallels between descriptions of species extinction and biodiversity loss, as detailed in Kolbert's celebrated The Sixth Extinction (Kolbert 2014), and the situation for cultural diversity today, not least as regards small, stateless groups. It is true that indigenous people have never lived in total isolation, but the speed and comprehensiveness of the present encompassment by the forces of globalisation are unprecedented in history. Kolbert identifies a series of causes for what she speaks of as the sixth extinction, taking lessons from the previous five extinctions as she goes along (the most famous of which was the temporary cooling of the global climate following an asteroid impact on Yucatán 66 million years ago, leading to the extinction of the dinosaurs). Some of the causes of extinction described by Kolbert are species invasion, habitat loss or fragmentation, overexploitation of natural resources and natural disasters, but the most important cause, related to some of the others, is anthropogenic ecological destabilisation, that is, pollution and climate change. Parallels can be drawn between Kolbert's analysis of biodiversity loss and processes affecting people and their cultural worlds.
Habitat loss resembles the effects of "accumulation by dispossession" (Harvey 2003), whereby people lose their homes and livelihoods owing to large-scale infrastructural developments, becoming urbanised or proletarianised because there is no other option available. Overexploitation of resources also deprives indigenous people of their livelihood, and species invasion may have a parallel in the homogenising effects of states and markets. Climate change, needless to say, affects people as well as the rest of nature (UNEP, 2021). Culture has a different internal dynamic than biology, but this should not detract attention from the parallels. Benevolent state policies on indigenous matters resemble the thinking behind national parks. State control and the relentless desire to translate everything into measurable and profitable "resources" in the corporate world contribute to upscaling and homogenisation in both realms. The benefits of homogenisation are gauged with the universal standards of modernity: economic growth, improved access to education, reduced child mortality, improved sanitation and so on. Not everybody benefits. Some are faced with the bill without having had the chance to reap the benefits. Ultimately, everybody loses because future options are narrowed and we are collectively painting ourselves into a corner. The greatest loss, seen from a long-term global perspective, is the loss of flexibility. The insistence on a single economic system presupposing eternal growth, a few highly productive food crops and, not least, the destructive and potentially catastrophic reliance on fossil fuels leads to a game with high stakes, and one that cannot be won in the long term. A potentially fruitful way of conceptualising this situation is by analysing it as one of reduced semiotic freedom.
SEMIOTIC FREEDOM AND THE HOMOGENOCENE
A pioneer in the emerging field of biosemiotics, Jesper Hoffmeyer had a suitably interdisciplinary background in chemistry, biology, philosophy, and semiotics. In biosemiotics, relationships in nature are interpreted as acts of communication. When a fox becomes aware of a hare in the vicinity, its reaction forms part of a semiotic chain together with the hare's response and flight, the hunt and its outcome. Hoffmeyer once said that if he were to summarise the entire history of evolution in one sentence, he would say that evolution has, over millions of years, led to an overall growth in semiotic freedom (Hoffmeyer 1998). Allow me a short explanation. All organisms have a certain degree of semiotic freedom, that is, an ability to respond to their environment in different ways. A plant may stretch towards the sunlight or direct its roots to the most nutritious parts of the soil; some plants do not, and they lose. A dog may play with its owner and pretend to bite her; in other words, it is capable of meta-communication (Bateson 1972). The relationship between human and dog releases a greater semiotic freedom (more alternatives, greater depth in signification, more flexibility) than the relationship between a pine tree and the mushrooms and ferns growing beneath it, although an exchange of signs and responses also takes place in the latter case. Hoffmeyer thus describes an evolutionary movement towards more complexity, more communication, more relationships and a denser forest of signs sending a growing number of messages in a hierarchy of logical levels.
A reading of biosemiotics which connects it to the homogenising effects of globalisation described in this article makes it possible to conclude that this development is now being reversed. The beginning of the modern environmental movement was marked by the publication of Rachel Carson's Silent Spring (Carson 1962), the opening gambit of which is the observation that the songbirds were gone. Similarly, in oral African cultures it is said that when an old tribesman dies, it is as if a library has burnt down. Hoffmeyer does not mention the five mass extinctions in evolutionary history, which must have led to temporary reductions in semiotic freedom, but his argument is nevertheless an important one. It can be applied to the cultural history of humanity. Since the origin of Homo sapiens in Africa around 250,000 years ago, groups have branched off, diversified, and adapted to and developed viable niches in all biotopes except Antarctica. Thousands of mutually unintelligible languages, unique religions and customs, kinship systems, cosmologies and economic practices produced a world of a fast-growing number of differences. What seems to be happening today as a result of frantic human activity across the planet is nevertheless a reduction in semiotic freedom, a loss of flexibility and options. This seems to be the case both with respect to the nonhuman world and with respect to culture and society. This means that if Hoffmeyer's view has been correct up to the near-present, it now seems that we shall have to reconcile ourselves to a world of decreasing semiotic freedom, in both the cultural and the biological domains. The political challenge consists in halting this movement away from a world of many little differences to one of a few major ones, and thus it may be argued that a concern with biological diversity and a concern with cultural diversity are two sides of the same coin.
FROM TINA TO TAMA: SCALING POLITICS OF THE ANTHROPOCENE
The political agents resisting the Homogenocene are of a different kind to those typically studied by political scientists. Since the reduction of diversity is caused by governments and corporate interests, it is necessary to look elsewhere for resistance movements. I shall briefly describe some of them, indicating that although they may have comparable objectives, they work in different settings and on different scales. The plurality of movements working to retain local autonomy and healthy ecosystems effectively falsifies the TINA ("There Is No Alternative") doctrine popularised in the 1980s by Margaret Thatcher, by showing that in fact There Are Many Alternatives (TAMA). One such proposed alternative is rewilding. Rewilding Europe, an NGO founded in 2011, has partnered with governments and sponsors with the aim of restoring ecosystems that have been affected by global homogenisation. Currently, Rewilding Europe has eight active projects, from Portugal to Swedish Lapland. Restoration of ecosystems also takes other forms, and it is practised on many scales. In Tasmania, for example, civil society volunteers spend Sundays removing invasive shrubs from the landscape (some of which were deliberately imported for their beauty as late as the 1970s), trying to strengthen the relative position of endemic plants. Further north, in Queensland, "toadbusting" is an organised activity for volunteers in many locations, where the aim is to curb the spread of the deliberately imported, but now invasive and destructive, cane toad, originally a Central American species.
On a slightly larger scale, the transformation of South African farms into game parks has led to the reintroduction of animals (mainly herbivores, but also big cats in a few cases) to regions where they had been driven to extinction in historical times. An unknown concept at the turn of the millennium, rewilding is now being recognised as a tool of what we might call "salvage ecology". The greater semiotic freedom of humans, compared to other species, entails above all self-consciousness and reflexivity. Hence, although the European bison (part of a rewilding project) cannot represent itself (it must be represented), people can, and they do. Lien (2021) describes a court case concerning land rights in northern Scandinavia, involving the Norwegian state and Sami reindeer herders. One of the herders, called as a witness in court, was asked to identify the location of his migratory route on a map. He refused, explaining its location instead by describing geographical and topological features, affirming that he had never needed to use maps. The literature on Sami ways of engaging with the world is substantial, much of it written by Sami scholars who thus function as cultural brokers. Sami land rights activists emphasise forms of stewardship based on tradition rather than law, and many Sami also show a different way of relating to their environment, a different cosmology and a different view of social relations than that which is dominant in majority Nordic society (Eriksen et al., 2019).
Other indigenous groups are in a weaker bargaining position. Wilhite and Salinas (2019) have shown how indigenous groups in South America as well as India receive the sharp end of the stick threefold: by being deprived of their land and livelihood, by losing the option of cultural reproduction, and as victims of climate change. There are nevertheless positive examples from the Global South as well, of indigenous groups mobilising successfully to retain their right to define and govern themselves. The most famous example is probably that of the Yanomami in Brazil, who were granted rights to a territory of 99,000 square kilometres (more than twice the size of Denmark) by the Brazilian government in 1992. In recent years, the autonomous territory has nonetheless been invaded by thousands of garimpeiros (goldminers), with the tacit support of the Bolsonaro government. Another form of resistance, described by Conversi (2021), is represented by faith-based communities such as the Amish, who actively choose to stay aloof from mainstream capitalist state society. Both small-scale indigenous groups and these alternative communities (ecovillages could also be mentioned) are downscaled politically, with limited participation in the monetary economy, and they are ecologically sustainable. Hendry (2014), an anthropologist, has surveyed small-scale stateless societies with a view to gleaning insights into the kind of ecological thinking and practice that could contribute to changing the course of history away from certain catastrophe. Yet considering the size and complexity of the human population (36 per cent of the planetary mammalian biomass is now human), there can be no return to the Garden of Eden, which does not mean that there are no useful lessons to be learned from indigenous cosmologies and small-scale countercultures. Significantly, all attempts to reinstate some of the lost diversity have an element of downscaling.
For example, Conversi and Hau (2021) present and compare left-leaning secessionist parties in European countries, notably Scotland and Catalonia, which are favourable to radical climate policies. They identify a shift from a national romanticism glorifying the purity and authenticity of local nature to a pragmatic, concrete and demanding climate policy for the present. The final example is that of the Creole Garden project in the Seychelles, where cultural specificity and biodiversity are at play simultaneously. A pilot project funded by UNESCO, the Creole Garden aims to recover knowledge about crops and foods that can be grown locally. Ironically, the Creole garden arose from plantation slavery, as a side-effect deemed uninteresting by the plantation owners but essential for the slaves, who grew a variety of crops on their tiny plots for subsistence. However, as the project proposal explains, and I quote it at some length: 'with modernity and the advent of supermarkets and flats and housing estates replacing the traditional creole community, the Creole Garden has lost ground and is not being transmitted to the younger generation. And yet, the Creole Garden provides sustenance, traditional creole culinary skills and ingredients which are the basis of the celebrated creole cuisine in tourism, as well as medicinal plants that reduce the need to go to the doctor. During the Covid-19 lockdown period in Seychelles, our dependency on imported goods became glaringly clear as planes suddenly reduced to essential cargo, and certain fresh vegetables that were flown in every day became scarce. People started planting in pots if they lived in flats, and those who had land began planting typical creole foodstuff such as plantains, dessert bananas, yam, sweet potatoes, tomatoes and herbs.' (University of Seychelles 2021).
The Creole Garden project brings many of the strands of the argument together. 1) It came about, somewhat ironically, as an unintentional effect of plantation slavery and the beginnings of the Homogenocene. 2) It rejects quests for purity, instead focusing on what works in the local ecology regardless of its origins. 3) It is small scale, scaled down to the household level. 4) It combines a concern with biodiversity with the objective of saving Creole culture from oblivion at a time of Netflix and the smartphone. 5) It is critical of the homogenising tendencies of large-scale production and distribution; in effect, it seeks to replace tinned food, imported mangoes and carrots with locally grown produce. This kind of project may well turn out to be an exemplar for a politics of the Anthropocene.
The question I have raised in this article concerns politics, specified as the political, engaging not with established political institutions, but rather with political actions and projects engaged in by activists, NGOs and citizens wishing to contribute to political change. Since the contradictions of the overheated Homogenocene are the collateral damage of the state and the globalised fossil fuel economy, solutions must be sought elsewhere. This should not be taken to mean that localised, or even grassroots, movements are the only viable alternative. International agreements such as the ambitious UN Convention on Biological Diversity can be significant, but as the negative experience of the Kyoto Protocol indicates, they are worthless unless implemented, and most governments have chosen not to do so.
For this reason, a politics aiming to counteract the destructive effects of the global fossil fuel industry, and the accompanying impoverishment of the biosphere and the cultural diversity of the planet, should mainly aim to scale down; but sideways scaling through networks of localised initiatives is also a highly relevant option, which can now be achieved, somewhat paradoxically, by means of the very same electronic technology which is also a powerful cause of standardisation.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
Upregulation of miR-376c-3p alleviates oxygen–glucose deprivation-induced cell injury by targeting ING5
Background: The expression level of miR-376c-3p is significantly lower in infants with neonatal hypoxic-ischemic encephalopathy (HIE) than in healthy infants. However, the biological function of this microRNA remains largely elusive.
Methods: We used PC-12 and SH-SY5Y cells to establish an oxygen–glucose deprivation (OGD) cell injury model to mimic HIE in vitro. The miR-376c-3p expression levels were measured using quantitative reverse transcription PCR. The CCK-8 assay and flow cytometry were utilized to evaluate OGD-induced cell injury. The association between miR-376c-3p and inhibitor of growth 5 (ING5) was validated using the luciferase reporter assay. Western blotting was conducted to determine the protein expression of CDK4, cyclin D1, Bcl-2 and Bax.
Results: MiR-376c-3p was significantly downregulated in the OGD-induced cell injury model. Its overexpression elevated cell viability and impaired cell cycle G0/G1 phase arrest and apoptosis in PC-12 and SH-SY5Y cells after OGD, whereas downregulation of miR-376c-3p gave the opposite results. We further demonstrated that ING5 is a negatively regulated target gene of miR-376c-3p. Importantly, ING5 knockdown had an effect similar to the miR-376c-3p-mediated protection against OGD-induced cell injury, while ING5 overexpression abolished these protective effects.
Conclusion: Our data suggest that miR-376c-3p downregulates ING5 to exert protective effects against OGD-induced cell injury in PC-12 and SH-SY5Y cells. This might represent a novel therapeutic approach for neonatal HIE treatment.
MicroRNAs (miRNAs or miRs) are small endogenous non-coding RNAs that regulate a wide variety of biological processes, including differentiation, proliferation and apoptosis, by targeting mRNAs [9-11]. In recent years, researchers have found that miRs are closely associated with the pathogenesis of hypoxic-ischemic diseases. For example, miR-29b promotes neurocyte apoptosis by targeting MCL-1 during cerebral ischemia/reperfusion (I/R) [12]. MiR-451 has been reported to target CELF2, protecting against apoptosis and oxidative stress induced by oxygen and glucose deprivation/reoxygenation (OGD/R) [13]. Most recently, O'Sullivan et al. found that the expression levels of three miRs (miR-374a-5p, miR-376c-3p and miR-181b-5p) are significantly lower in infants diagnosed with HIE than in healthy control infants [14]. This was determined by performing miRNA profile pattern analysis in umbilical cord whole blood. Notably, miR-376c-3p has been shown to regulate cell growth, proliferation and migration in different cancer types [15,16]. We thus speculated that miR-376c-3p might play an important role in neuronal cell survival under ischemic conditions.
The inhibitor of growth family member 5 (ING5) is composed of four molecular domains: a nuclear localization signal (NLS), a novel conserved region (NCR), a leucine zipper-like (LZL) domain, and a plant homeodomain (PHD) [17]. A related study indicated that ING5 is a key factor in DNA replication, cell cycle regulation and apoptosis [18]. ING5 overexpression could decrease cell proliferation and induce apoptosis in lung cancer [19] and esophageal squamous cell carcinoma [20]. Interestingly, Zhu et al. reported that ING5 suppresses cell viability and promotes cell apoptosis in human pulmonary artery smooth muscle cells under hypoxic conditions [21].
This highlights its potential for the treatment of hypoxic pulmonary hypertension. These results suggest that targeting ING5 might be beneficial for developing novel therapeutic strategies against HIE injury. In this study, we constructed an OGD cellular model, the most commonly applied in vitro model of HIE [22,23], to investigate the functional significance of miR-376c-3p in regulating neuron survival. Here, PC-12 [24-27] and SH-SY5Y [28] cells were used to construct an OGD cell injury model to mimic HIE. We confirmed whether miR-376c-3p exerts protective effects on OGD-injured cells. Furthermore, we explored the molecular mechanisms underlying the action of miR-376c-3p in OGD cell injury.
Materials and methods
Cell culture
PC-12 cells and SH-SY5Y cells were purchased from the American Type Culture Collection (ATCC) and cultured in Dulbecco's modified Eagle medium (DMEM; HyClone) supplemented with 10% fetal bovine serum (FBS; Gibco). The culture was maintained at 37°C in a humidified incubator containing 5% CO2.
OGD cell injury model
Cells were cultured in glucose-free culture medium and placed into a hypoxia incubator with 94% N2, 5% CO2 and 1% O2 for 2 h at 37°C. The culture medium was then replaced with growth medium containing glucose, and the cells were cultured at 37°C under normal conditions in an atmosphere with 5% CO2.
Quantitative reverse transcription PCR
The PCR amplification parameters were: 95°C for 5 min, followed by 40 cycles of 95°C for 15 s, 60°C for 30 s and 72°C for 30 s. The relative expression levels of miR-376c-3p and ING5 were calculated using the 2^(−ΔΔCt) method [29], with U6 and GAPDH as the respective internal controls.
Cell viability assay
Cells from different groups were seeded into 96-well plates (4 × 10^3 cells per well) and incubated with 10 μl CCK-8 solution (Dojindo Laboratories) for 1 h. Using a Bio-Rad Microplate Reader, we measured the optical density values at 450 nm and used them to calculate the relative cell viability of the experimental groups compared with the control group.
Flow cytometry analysis
Cells were collected and fixed at 4°C with cold ethanol overnight. After two washes in phosphate-buffered saline (PBS), the cells were re-suspended in 200 μl binding buffer, followed by staining with 400 μl PI (BestBio) for 30 min in the dark. Next, the cell cycle distribution was analyzed via flow cytometry with FlowJo software (BD Bioscience). To assess cell apoptosis, cells were collected, re-suspended and stained with Annexin V-FITC and PI (BestBio) for 20 min in the dark at room temperature. The numbers of early (Annexin V+/PI−), late (Annexin V+/PI+) and total apoptotic cells were determined using a flow cytometer equipped with CellQuest Pro software (BD Bioscience).
Luciferase reporter assay
TargetScan bioinformatics software (www.targetscan.org/vert_72) was used to identify putative target genes associated with the effects of miR-376c-3p on cell growth. For the luciferase reporter assay, the wild-type (WT) or mutant (MUT) 3′-untranslated region (3′-UTR) of ING5 was cloned into the pmirGLO dual luciferase reporter vector (Promega) by RIBOBIO. These constructs were transfected into HEK293T cells together with the miR-376c-3p mimic or miR-NC using Lipofectamine 2000 (Invitrogen). Cells were harvested 48 h after transfection and relative luciferase activities were determined using the Dual-Luciferase Reporter Assay System (Promega).
Western blot analysis
RIPA lysis buffer and an enhanced BCA Protein Assay kit (Beyotime) were used to extract total protein and to determine protein concentration, respectively.
Approximately 30 μg of protein per sample was separated using sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) on 12% gels. The separated proteins were transferred onto PVDF membranes, which were blocked with 5% nonfat milk for 2 h. Subsequently, the membranes were incubated with anti-ING5 and anti-GAPDH antibodies (Abcam) overnight at 4°C, followed by incubation with a horseradish peroxidase-labeled secondary antibody for 2 h at room temperature. The protein bands were visualized using enhanced chemiluminescence (Pierce), with GAPDH as an internal control.
Statistical analysis
Quantitative data were expressed as means ± SD from at least three experiments. GraphPad Prism 6.0 software was used to perform statistical analysis. Differences were evaluated using Student's t-test (two groups) or one-way ANOVA followed by a Bonferroni post-hoc test (multiple groups). Values of p less than 0.05 were considered statistically significant.
Results
The levels of miR-376c-3p decrease in the OGD-induced cell injury model
PC-12 and SH-SY5Y cells in an OGD model were used to investigate the potential role of miR-376c-3p in HIE brain injury. MiR-376c-3p decreased significantly in PC-12 cells and SH-SY5Y cells after OGD (Fig. 1a, p < 0.01). Then, we evaluated the OGD cell injury model. The CCK-8 assay showed that the cell viability of PC-12 and SH-SY5Y cells decreased significantly after OGD (Fig. 1b, p < 0.01). Moreover, the percentages of PC-12 cells and SH-SY5Y cells in G0/G1 phase increased significantly (p < 0.01), while the percentages of those in G2/M phase and S phase decreased after OGD (p < 0.01), indicating that OGD induced cell cycle G0/G1 phase arrest (Fig. 1c). Furthermore, the percentage of apoptotic cells was remarkably elevated in the OGD group compared with the control group in both PC-12 and SH-SY5Y cells (Fig. 1d). These results reveal that downregulation of miR-376c-3p might play an important role in the OGD-induced cell injury model.
[Fig. 1 caption: Expression of miR-376c-3p in the OGD-induced cell injury model. PC-12 and SH-SY5Y cells were subjected to OGD; cells cultured under normal conditions were used as the controls. (a) Quantitative reverse transcription PCR analysis of miR-376c-3p expression in PC-12 and SH-SY5Y cells. (b) Cell viability measured using the CCK-8 assay. (c) Cell cycle distribution analyzed via flow cytometry with PI staining. (d) Cell apoptosis examined using flow cytometry with Annexin V/PI double staining. Data are expressed as means ± SD. **p < 0.01, ***p < 0.001 vs. control.]
ING5 is directly targeted by miR-376c-3p
Using bioinformatics analysis, we predicted the downstream target genes of miR-376c-3p and selected ING5, an important gene associated with cell growth, as a potential target gene of miR-376c-3p. The alignment of the seed region of miR-376c-3p with the 3′-UTR of ING5 is shown in Fig. 3a. The luciferase reporter assay was conducted to confirm direct target binding. Overexpression of miR-376c-3p significantly decreased the luciferase activity of a reporter vector containing the WT ING5 3′-UTR, but did not affect the luciferase activity of a reporter vector containing the MUT ING5 3′-UTR in HEK293T cells (Fig. 3b, p < 0.01). Subsequently, we analyzed the expression of ING5 in the OGD cell injury model using western blot analysis. The protein expression of ING5 was clearly elevated after OGD treatment in both PC-12 and SH-SY5Y cells (Fig. 3c).
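As a concrete illustration of the fold-change arithmetic behind the 2^(−ΔΔCt) method used for the qRT-PCR data in the Methods above, the short Python sketch below reproduces the calculation on invented placeholder Ct values; the numbers and the helper function are illustrative assumptions, not data or code from this study.

```python
def fold_change_ddct(ct_target, ct_control_gene, ct_target_ref, ct_control_gene_ref):
    """Relative expression by the 2^(-ddCt) method.

    ct_target / ct_control_gene: mean Ct of the gene of interest and of the
    internal control (e.g. U6 for miRNA, GAPDH for mRNA) in the treated group.
    ct_target_ref / ct_control_gene_ref: the same Ct values in the reference
    (untreated control) group.
    """
    d_ct_treated = ct_target - ct_control_gene            # normalise to internal control
    d_ct_reference = ct_target_ref - ct_control_gene_ref
    dd_ct = d_ct_treated - d_ct_reference                 # normalise to control group
    return 2.0 ** (-dd_ct)                                # fold change vs. control

# Hypothetical example: a transcript whose Ct rises after OGD while the
# internal control stays flat reads out as downregulation (fold change < 1).
fold = fold_change_ddct(ct_target=27.5, ct_control_gene=20.0,
                        ct_target_ref=25.0, ct_control_gene_ref=20.0)
print(f"relative expression = {fold:.2f}")  # 2^(-2.5), roughly 0.18
```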
Furthermore, we demonstrated that overexpression of miR-376c-3p significantly decreased the mRNA (Fig. 3d) and protein (Fig. 3e) expression of ING5 in the OGD-induced PC-12 and SH-SY5Y cell injury model. By contrast, downregulation of miR-376c-3p elevated the mRNA (Fig. 3f) and protein (Fig. 3g) expression of ING5 in PC-12 cells. These results show that ING5 might be a direct target gene of miR-376c-3p.
[Fig. 3 caption fragment: The mRNA (f) and protein (g) expression levels of ING5 were determined in PC-12 cells transfected with anti-miR-376c-3p or anti-miR-NC and subjected to OGD. Data are expressed as means ± SD. **p < 0.01 vs. miR-NC; ##p < 0.01 vs. anti-miR.]
Restoration of ING5 expression reverses the protective effect of miR-376c-3p against OGD-induced injury
Next, we performed rescue experiments to confirm whether miR-376c-3p protects against OGD-induced cell injury by targeting ING5. ING5 expression was restored by transfection of an ING5 plasmid into PC-12 cells that had previously been transfected with the mimic. We first confirmed that the protein expression of ING5 was significantly restored by transfection with the pcDNA3.1/ING5 vector (Fig. 5a, p < 0.01). The effect of miR-376c-3p overexpression on cell viability (Fig. 5b) was significantly blocked by restoration of ING5. In addition, the decreases in cell cycle G0/G1 phase arrest (Fig. 5c) and apoptosis (Fig. 5d) after miR-376c-3p overexpression were significantly abrogated by ING5 overexpression. These results suggest that ING5 might be a downstream functional regulator of the miR-376c-3p-mediated protective effects in the OGD-induced cell injury model.
MiR-376c-3p regulates cell cycle arrest and apoptosis-associated factors by targeting ING5 in the OGD-induced cell injury model
Next, we analyzed the effects of miR-376c-3p and ING5 on the protein levels of cell cycle- and apoptosis-associated factors using western blot analysis. Compared with the miR-NC + vector group, we found that miR-376c-3p overexpression significantly increased the protein levels of CDK4, cyclin D1 and Bcl-2, but decreased Bax expression in PC-12 cells subjected to OGD. Notably, the effects of miR-376c-3p overexpression on these protein levels were clearly alleviated by ING5 overexpression (Fig. 6). These findings further suggest that miR-376c-3p alleviates OGD-induced cell injury through downregulation of ING5.
Discussion
MiR-376c-3p levels are significantly lower in infants diagnosed with HIE than in healthy control infants [14]. Consistently, we found that miR-376c-3p is significantly downregulated in response to OGD treatment. By performing gain-of-function and loss-of-function assays, we further found that miR-376c-3p significantly attenuates OGD-induced injury. The underlying mechanism for this might be reversal of cell cycle G0/G1 phase arrest and apoptosis, as confirmed by the upregulation of CDK4, cyclin D1 and Bcl-2 and the downregulation of Bax after miR-376c-3p overexpression. Various studies have shown that miR-376c-3p is involved in regulating cell proliferation, the cell cycle and apoptosis in neuroblastoma cells [16], gastric cancer [30] and hepatocellular carcinoma [15]. From this evidence, we hypothesized that miR-376c-3p might play a neuroprotective role in OGD-induced cell injury.
ING5 is the last member of the ING candidate tumor suppressor family and has been implicated in multiple cellular functions, including cell cycle regulation, apoptosis and chromatin remodeling [18].
Wu et al. [31] found that ING5 overexpression inhibits tumor growth in SH-SY5Y cells by suppressing proliferation and inducing apoptosis. In addition, ING5 has been reported as a potential target for the treatment of breast cancer [32] and gastric cancer [33]. Our data show that the protein expression of ING5 is clearly elevated in OGD-induced cell injury. Consistently, ING5 has been reported to significantly aggravate the injury of hypoxic human pulmonary artery smooth muscle cells [21]. In fact, miRNAs directly bind to the 3′-UTR of target mRNAs via complementary pairing sequences to induce their degradation [15]. We therefore explored whether ING5 is the downstream target gene of miR-376c-3p in OGD-induced cell injury. As expected, we found that miR-376c-3p directly binds to the 3′-UTR of ING5. Moreover, ING5 knockdown imitated, and ING5 overexpression reversed, the protective effect of miR-376c-3p against OGD-induced injury. Furthermore, the regulatory effects of miR-376c-3p on CDK4, cyclin D1, Bcl-2 and Bax were abolished by ING5 overexpression. Similarly, ING5 is a target gene of miR-196a and suppresses head and neck cancer cell survival and proliferation [34]. Based on these data, we speculate that miR-376c-3p may downregulate ING5 expression in OGD-induced injury, thereby regulating cell cycle- and apoptosis-associated factors.
Conclusions
Our experiments have confirmed our initial hypothesis that miR-376c-3p affects OGD-induced cell injury by targeting ING5. This study provides a theoretical basis for further investigation into the protection of neurons against OGD-induced injury. The impacts of other miRNAs and additional target genes relevant to HIE will be explored in future studies.
Impact of the wave-like nature of Proca stars on their gravitational-wave emission
We present a systematic study of the dynamics and gravitational-wave emission of head-on collisions of spinning vector boson stars, known as Proca stars. To this aim we build a catalogue of about 800 numerical-relativity simulations of such systems. We find that the wave-like nature of bosonic stars has a large impact on the gravitational-wave emission. In particular, we show that the initial relative phase $\Delta \epsilon = \epsilon_1 - \epsilon_2$ of the two complex fields forming the stars (or equivalently, the relative phase at merger) strongly impacts both the emitted gravitational-wave energy and the corresponding mode structure. This leads to a non-monotonic dependence of the emission on the frequency of the secondary star $\omega_2$, for fixed frequency $\omega_1$ of the primary. This phenomenology, which has not been found for the case of black-hole mergers, reflects the distinct ability of the Proca field to interact with itself in both constructive and destructive manners. We postulate this may serve as a smoking gun to shed light on the possible existence of these objects.
I. INTRODUCTION
Gravitational waves (GWs) provide information about the strong-field regime of gravity and can potentially reveal the true nature and structure of astrophysical compact objects. Their analysis could help unveil the classical and quantum essence of black holes, as well as the interior of neutron stars through the dense-matter equation of state, a long-standing open issue. Moreover, theoretical proposals for dark or "exotic" compact objects (ECOs) [1] could be probed through the study of their GW signals, as long as those could be distinguished from the signals produced by black holes and neutron stars. Such investigations require a deep understanding of the emitted GWs and, in particular, rely on theoretical waveform templates against which observational data can be compared. As an example, the detection of GWs from compact binary coalescences (the sources so far observed by Advanced LIGO and Advanced Virgo [2-8]) and the source parameter inference thereof rely on the matched filtering of the data against waveform templates (or approximants). This makes the production of waveform catalogues of physically motivated exotic compact objects an endeavour both well timed and worth pursuing.
Amongst all proposed exotic objects that can reach a compactness comparable to that of black holes, bosonic stars stand out as one of the simplest and best-motivated models [9,10]. Bosonic stars with masses in the astrophysical black hole range, from stellar-origin to supermassive objects, are made of ultralight fundamental bosonic fields that could account for (part of) dark matter. Triggered by this central open issue in theoretical physics, namely the nature of dark matter, the study of bosonic stars has earned quite some attention in recent years. From a particle physics perspective, ultralight bosonic particles can emerge in the string axiverse [11,12] or in simple extensions of the Standard Model of particle physics [13]. Bosonic stars are asymptotically flat (although non-asymptotically-flat generalizations exist), stationary and solitonic, i.e. horizonless and everywhere regular equilibrium spacetime geometries, describing self-gravitating lumps of bosonic particles.
In their simplest guise, they emerge by minimally coupling either the complex, massive Klein-Gordon equation (for scalar boson stars) or the complex Proca equations (for vector boson stars, aka Proca stars, PSs [14]) to Einstein's gravity. Bosonic stars can be either static, in which case the simplest solutions are spherically symmetric (but see also [15,16]), or spinning [17] (thus stationary but non-static), in which case they have a non-spherical morphology which depends on the scalar or vector model. In all cases, the bosonic field oscillates periodically at a well-defined frequency ω, which determines the mass, angular momentum (in spinning solutions) and compactness of the star. The dynamical robustness of bosonic stars has been established for some models in well-identified regions of the parameter space (see [18] for a review), making them viable dark-matter candidates. The case of non-spinning spherically symmetric bosonic stars is firmly established. The fundamental solutions (those with the minimum number of nodes of the bosonic field across the star) are perturbatively stable in a range of frequencies between the Newtonian limit (where they become non-compact) and the maximal-mass solution. Additionally, they exhibit a non-fine-tuned dynamical formation mechanism known as gravitational cooling [19,20]. On the contrary, the case of spinning bosonic stars has proven to be more subtle [21]. In particular, while the fundamental PS solutions have been found to be stable in the simplest model, where the Proca field has only a mass term (no self-interactions), scalar boson stars in the corresponding model without self-interactions [23] are prone to non-axisymmetric perturbations that can trigger the development of instabilities akin to the bar-mode instability found in neutron stars [22].
The above findings support using the fundamental solutions of the simplest Proca model as a robust starting point to test the true nature of dark compact objects. In particular, this model appears as the most suitable choice to conduct dynamical studies aimed at gauging, through GW information, the potential astrophysical significance, if any, of an appealing ECO model. First, and promising, steps have recently been taken. Pursuing this route, Ref. [24] found that waveforms from numerical-relativity simulations of head-on collisions of PSs can fit the signal GW190521 as well as those from quasi-circular binary-black-hole (BBH) mergers, even being slightly preferred from a Bayesian-statistics viewpoint. Moreover, the development of a larger numerical catalogue of PS mergers, together with new data-analysis techniques [25], has led to a more systematic study of several LIGO-Virgo-KAGRA (LVK) high-mass events in O3 under the PS collision scenario [24] and to the first population studies of these objects [26].
The present paper complements those recent works. Here, we report on our catalogue of nearly 800 numerical-relativity simulations of head-on collisions of PSs used to obtain the results presented in [24-26]. Furthermore, we discuss additional numerical simulations we carried out to explore the impact of the wave-like nature of PSs on their GW emission. We find that the emission at merger dramatically depends on the relative phase of the complex field of each star. This has a major impact on both the net energy emission through GWs and the corresponding mode structure. Since this relative phase is an intrinsic parameter of PSs, absent in BBH mergers, the potential measurement of the GW modulation discussed in this work could serve as a smoking gun for the existence of PSs.
The remainder of this paper is organized as follows. Section II briefly describes the formalism needed to perform numerical simulations of PS mergers. The procedure we follow to obtain initial data for the simulations is outlined in Section III, as well as the specific numerical setups employed. We report and analyze our results in Section IV. Finally, our conclusions are presented in Section V, along with some remarks on possible pathways for future research. Henceforth, units with G = c = 1 are used.
II. FORMALISM
We investigate the dynamics of a complex Proca field by solving numerically the Einstein-(complex, massive) Proca system, described by the action $S = \int d^4x \sqrt{-g}\,\mathcal{L}$, where the Lagrangian density depends on the Proca potential $A$ and the field strength $F = dA$. It reads
$\mathcal{L} = \frac{R}{16\pi} - \frac{1}{4} F_{\alpha\beta} \bar{F}^{\alpha\beta} - \frac{1}{2} \mu^2 A_\alpha \bar{A}^\alpha ,$
where the bar denotes complex conjugation, $R$ is the Ricci scalar, and $\mu$ is the Proca-field mass. The stress-energy tensor of the Proca field is given by
$T_{\alpha\beta} = -F_{\sigma(\alpha} \bar{F}_{\beta)}{}^{\sigma} - \frac{1}{4} g_{\alpha\beta} F_{\sigma\tau} \bar{F}^{\sigma\tau} + \mu^2 \left[ A_{(\alpha} \bar{A}_{\beta)} - \frac{1}{2} g_{\alpha\beta} A_\sigma \bar{A}^\sigma \right] ,$
where $g_{\alpha\beta}$ is the spacetime metric, with $g = \det g_{\alpha\beta}$, and the parentheses denote index symmetrization. Using the standard 3+1 split (see e.g. [27] for details), the Proca field is split into the 3+1 quantities
$X_\phi = -n^\mu A_\mu , \qquad X_i = \gamma_i{}^\mu A_\mu ,$
where $n^\mu$ is the timelike unit vector, $\gamma^\mu{}_\nu = \delta^\mu{}_\nu + n^\mu n_\nu$ is the operator projecting spacetime quantities onto the spatial hypersurfaces, $X_i$ is the vector potential, and $X_\phi$ is the scalar potential. The fully non-linear Einstein-Proca evolution equations take the form given in [27]; in those equations, $\alpha$ is the lapse function, $\beta$ is the shift vector, $\gamma_{ij}$ is the spatial metric, $K_{ij}$ is the extrinsic curvature (with $K = K^i{}_i$), $D_i$ is the covariant 3-derivative, $\mathcal{L}_\beta$ is the Lie derivative (along the shift-vector direction), and $\kappa$ is a damping parameter that helps stabilize the numerical evolution. Moreover, the three-dimensional "electric" $E^i$ and "magnetic" $B^i$ fields entering those equations are introduced in analogy with Maxwell's theory,
$E^i = \gamma^i{}_\mu F^{\mu\nu} n_\nu , \qquad B^i = \epsilon^{ijk} D_j X_k ,$
with $E^\mu n_\mu = B^\mu n_\mu = 0$ and $\epsilon^{ijk}$ the three-dimensional Levi-Civita tensor. The system of equations is closed by two constraint equations, namely the Hamiltonian constraint and the momentum constraint, given by
$\mathcal{H} = {}^{(3)}R + K^2 - K_{ij} K^{ij} - 16\pi \rho = 0 , \quad (13)$
$\mathcal{M}^i = D_j \left( K^{ij} - \gamma^{ij} K \right) - 8\pi j^i = 0 , \quad (14)$
where $\rho = n^\mu n^\nu T_{\mu\nu}$ and $j^i = -\gamma^{i\mu} n^\nu T_{\mu\nu}$ are the energy and momentum densities measured by Eulerian observers.
Since this relative phase is an intrinsic parameter of PSs, absent in BBH mergers, the potential measurement of the GW modulation discussed in this work could serve as a smoking gun for the existence of PSs. The remainder of this paper is organized as follows. Section II briefly describes the formalism needed to perform numerical simulations of PS mergers. The procedure we follow to obtain initial data for the simulations is outlined in Section III, as well as the specific numerical setups employed. We report and analyze our results in Section IV. Finally, our conclusions are presented in Section V, along with some remarks on possible pathways for future research. Henceforth, units with G = c = 1 are used. II. FORMALISM We investigate the dynamics of a complex Proca field by solving numerically the Einstein-(complex, massive) Proca system, described by the action $S = \int d^4x \sqrt{-g}\,\mathcal{L}$, where the Lagrangian density depends on the Proca potential A and field strength F = dA. It reads
$$\mathcal{L} = \frac{R}{16\pi} - \frac{1}{4} F_{\alpha\beta}\bar{F}^{\alpha\beta} - \frac{1}{2}\mu^2 A_\alpha \bar{A}^\alpha .$$
Above, the bar denotes complex conjugation, R is the Ricci scalar, and µ is the Proca-field mass. The stress-energy tensor of the Proca field is given by
$$T_{\alpha\beta} = -F_{\sigma(\alpha}\bar{F}_{\beta)}{}^{\sigma} - \frac{1}{4} g_{\alpha\beta} F_{\sigma\tau}\bar{F}^{\sigma\tau} + \mu^2 \left[ A_{(\alpha}\bar{A}_{\beta)} - \frac{1}{2} g_{\alpha\beta} A_\sigma \bar{A}^\sigma \right],$$
where $g_{\alpha\beta}$ is the spacetime metric, with $g = \det g_{\alpha\beta}$, and the parenthesis denotes index symmetrization. Using the standard 3+1 split (see e.g. [27] for details), the Proca field is split into the 3+1 quantities
$$X_\phi = -n^\mu A_\mu , \qquad X_i = \gamma_i{}^\mu A_\mu ,$$
where $n^\mu$ is the timelike unit vector, $\gamma^\mu{}_\nu = \delta^\mu{}_\nu + n^\mu n_\nu$ is the operator projecting spacetime quantities onto the spatial hypersurfaces, $X_i$ is the vector potential, and $X_\phi$ is the scalar potential. The fully non-linear Einstein-Proca system can then be written as a set of 3+1 evolution equations, Eqs. (6)-(11), whose explicit form is given in [27]. In these equations, α is the lapse function, β is the shift vector, $\gamma_{ij}$ is the spatial metric, $K_{ij}$ is the extrinsic curvature (with $K = K^i{}_i$), $D_i$ is the covariant 3-derivative, $\mathcal{L}_\beta$ is the Lie derivative (along the shift-vector direction), and κ is a damping parameter that helps stabilize the numerical evolution. Moreover, the three-dimensional "electric" $E^i$ and "magnetic" $B^i$ fields, defined in analogy with Maxwell's theory, also enter these equations, with $E^\mu n_\mu = B^\mu n_\mu = 0$ and $\epsilon_{ijk}$ the three-dimensional Levi-Civita tensor. The system of equations is closed by two constraint equations, namely the Hamiltonian constraint and the momentum constraint, Eqs. (13) and (14) (see [27] for their explicit form). III. INITIAL DATA AND NUMERICS A. The stationary PS solutions Following the conventions in [14], we consider an axially symmetric and stationary line element
$$ds^2 = -e^{2F_0} dt^2 + e^{2F_1}\left(dr^2 + r^2 d\theta^2\right) + e^{2F_2}\, r^2 \sin^2\theta \left(d\varphi - \frac{W}{r}\, dt\right)^2 , \qquad (15)$$
where $F_0$, $F_1$, $F_2$, and W are functions of (r, θ). Here, r, θ, ϕ can be taken as spherical coordinates (in fact spheroidal), with the usual range, while t is the time coordinate. The spinning PS solutions of the Einstein-Proca system have been discussed in [14] with these conventions and e.g. in [28] for a slightly different version of (15) with W/r → W. The ansatz for the Proca field is
$$A = e^{i(\bar{m}\varphi - \omega t + \epsilon)} \left( iV\, dt + H_1\, dr + H_2\, d\theta + i H_3 \sin\theta\, d\varphi \right), \qquad (16)$$
where V, $H_1$, $H_2$, $H_3$ are functions of (r, θ), $\bar{m} \in \mathbb{Z}^+$, and ε is the initial phase of the star. The domain of existence and the compactness of the solutions of the Einstein-Proca equations describing the fundamental spinning PSs are shown in Fig. 1. These solutions have $\bar{m} = 1$ and are nodeless (i.e. $A_0$ has no nodes). The frequency range of the solutions of interest varies between ω/µ = 1 (Newtonian limit) and ω/µ ∼ 0.562 (maximal-mass solution). As the latter is approached, the PS solutions become ultra-compact, i.e. they develop a light-ring pair [29] for ω/µ ≲ 0.711. This creates a spacetime instability [30], which motivates us to avoid this region of the parameter space.
The compactness is defined as
$$C = \frac{M_{99}}{R_{99}},$$
where $R_{99}$ is the perimetral radius that contains 99% of the star's mass, $M_{99}$. Bosonic stars do not have a surface at which a discontinuity of the energy density occurs, i.e. a surface outside which the energy density is zero (in contrast with a fluid star). We remark that for all PS solutions reported in the literature so far, the line element (15) possesses a reflection symmetry with respect to the θ = π/2 plane. The translation between the functions above, $F_0$, $F_1$, $F_2$, W, V, $H_1$, $H_2$, $H_3$, and the initial values of the metric and the 3+1 Proca-field variables follows from evaluating the 3+1 definitions of Section II on the initial time slice. B. Binary head-on data As initial data for the head-on simulations we consider a superposition of two PSs, with both stars described by the same Proca field, following [24,31-36] (see also [37,38]):
$$\gamma_{ij} = \gamma_{ij}^{(1)}(x - x_0) + \gamma_{ij}^{(2)}(x + x_0) - \eta_{ij} , \qquad A_\mu = A_\mu^{(1)}(x - x_0) + A_\mu^{(2)}(x + x_0) ,$$
where superscripts (1) and (2) label the stars, $\eta_{ij}$ is the flat spatial metric, and ±$x_0$ indicates their initial positions. The stars are initially separated by a coordinate distance Dµ = ∆xµ = 40 ($x_0$µ = ±20). We note that the solutions are not boosted and that these initial data introduce (small) constraint violations [31]. Figure 2 shows the dependence of the $L_2$-norm of the Hamiltonian and momentum constraints, Eqs. (13) and (14), on D at the initial time. The values of the $L_2$-norm are O(10⁻⁴) or better. The error decreases with separation, reaching a fairly constant value for Dµ ≳ 20 (particularly visible for the momentum constraint); see [39]. Each star is defined by its oscillation frequency, ω₁/µ and ω₂/µ. For the initial catalogue used in [26], comprising ∼ 800 initial models, we fix the phase difference Δε between the stars to zero. Here, we also explore the impact of varying this relative phase on the gravitational-wave emission. Equal-mass cases correspond to ω₁ = ω₂ ≡ ω, while ω₁ ≠ ω₂ corresponds to unequal-mass binaries. Moreover, since we assume there is a single Proca field describing both stars, these also share a common value of the boson mass µ. C. Parameter space Our main catalogue of 759 simulations is depicted in a compact way in Fig. 3. Each axis in this plot labels the frequency of one of the two stars, ω₁/µ and ω₂/µ. For the equal-mass models, placed on the diagonal, we run simulations using a uniform grid in frequencies in the range ω₁/µ = ω₂/µ = ω/µ ∈ [0.8000, 0.9300] with ∆ω = 0.0025. For the unequal-mass cases, we fix the oscillation frequency of the primary star, ω₁/µ, and then vary the frequency of the secondary star, ω₂/µ. Both frequencies range from 0.8000 to 0.9300, with a resolution of ∆ω₁/µ = 0.01 for ω₁/µ and of ∆ω₂/µ = 0.0025 for ω₂/µ. As mentioned before, in all of these cases the two stars have null relative phase, Δε = 0, at the start of the simulation. As we show below, this initial set of simulations revealed unexpected non-trivial interactions between stars described by the same Proca field, due to their wave-like nature. As a result, we also built an additional set of models to study the impact of the relative phase of the stars on both the dynamics and the GW emission.
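For concreteness, the following minimal sketch (Python; our own illustration, not code from the paper) enumerates the main-catalogue frequency grid described above. The exact selection criteria for the 759 models are not spelled out in the text, so the total produced by this naive enumeration need not match that number exactly.

```python
import numpy as np

# Hypothetical reconstruction of the simulation grid described in the text;
# the paper's exact model selection is not given, so counts are approximate.
d_omega = 0.0025
omega = np.round(np.arange(0.8000, 0.9300 + d_omega / 2, d_omega), 4)

# Equal-mass models: omega_1 = omega_2 along the diagonal.
equal_mass = [(w, w) for w in omega]

# Unequal-mass models: omega_1 on a coarser grid, omega_2 on the fine grid.
omega1 = np.round(np.arange(0.80, 0.93 + 0.005, 0.01), 2)
unequal_mass = [(w1, w2) for w1 in omega1 for w2 in omega
                if not np.isclose(w1, w2)]

print(f"equal-mass models:   {len(equal_mass)}")
print(f"unequal-mass models: {len(unequal_mass)}")
print(f"total:               {len(equal_mass) + len(unequal_mass)}")
```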
The effect of this parameter is studied in two ways, implicitly and explicitly. First, for some selected cases we vary the initial star separation at which the simulation is started, keeping Δε = 0, which, for the cases with ω₁ ≠ ω₂, translates into a varying relative phase at merger. This, however, also causes a variation in the velocity of the two stars at merger, whose effect mixes with that of the varying phase. Therefore, in order to explicitly isolate the impact of the relative phase change, for a few selected cases of Fig. 3 we explicitly vary Δε on a uniform grid Δε ∈ [0, 2π] with step δΔε = π/6. D. Numerics To carry out the numerical evolutions we use the publicly available Einstein Toolkit [40,41], which uses the Cactus framework and mesh refinement. The method of lines is employed to integrate the time-dependent differential equations; in particular, we use a fourth-order Runge-Kutta scheme for this task. The left-hand side of the Einstein equations is solved using the McLachlan code [42,43], which is based on the 3+1 Baumgarte-Shapiro-Shibata-Nakamura (BSSN) formulation. On the other hand, the Proca evolution equations, Eqs. (6)-(11), are solved using the code described in and available from [44-46]. We extended the code to take into account a complex field [21,33]. Technical details, assessment of the code, and convergence tests can be found in [21,33,44]. We use a fixed numerical grid with 7 refinement levels, with the following structure {(320, 48, 48, 24, 24, 6, 2)/µ, (4, 2, 1, 0.5, 0.25, 0.125, 0.0625)/µ}, where the first set of numbers indicates the spatial domain of each level and the second set indicates the resolution. The simulations are performed using equatorial-plane symmetry. To extract gravitational radiation we employ the Newman-Penrose (NP) formalism [47], as described in [44]. We compute the NP scalar Ψ₄ expanded into spin-weighted spherical harmonics of spin weight s = −2.
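As an aside, the mode extraction just described amounts to projecting Ψ₄ onto spin-weighted spherical harmonics on a sphere at the extraction radius. The sketch below (illustrative Python; not the Einstein Toolkit's own implementation) shows such a projection for the (ℓ, m) = (2, 2) mode, using the closed-form expression for ₋₂Y₂₂ and a simple quadrature; the grid sizes are arbitrary choices for the self-test.

```python
import numpy as np

def sY22(theta, phi):
    """Spin-weight -2 spherical harmonic for (l, m) = (2, 2)."""
    return np.sqrt(5.0 / (64.0 * np.pi)) * (1.0 + np.cos(theta))**2 * np.exp(2j * phi)

def project_22(psi4, th, ph):
    """Project a field sampled on a (theta, phi) grid onto the (2,2) mode:
    psi_22 = integral of psi4 * conj(sY22) dOmega (simple quadrature)."""
    theta, phi = np.meshgrid(th, ph, indexing="ij")
    integrand = psi4 * np.conj(sY22(theta, phi)) * np.sin(theta)
    w = np.full(th.size, th[1] - th[0])   # trapezoidal weights in theta
    w[0] *= 0.5
    w[-1] *= 0.5
    # phi is periodic, so a plain Riemann sum is exact there
    return (integrand * w[:, None]).sum() * (ph[1] - ph[0])

# Self-test: projecting the harmonic onto itself must give ~1 (orthonormality).
th = np.linspace(0.0, np.pi, 201)
ph = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
theta, phi = np.meshgrid(th, ph, indexing="ij")
print(project_22(sY22(theta, phi), th, ph))   # ~ (1 + 0j)
```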
IV. RESULTS We have performed 759 simulations of head-on collisions of spinning PSs starting at rest at a fixed initial distance, Dµ = 40. We explore both equal-mass and unequal-mass cases to produce a first systematic study of the GW signals emitted in collisions of these objects. Stationary fundamental bosonic stars are described by the oscillation frequency ω/µ of the field, which determines the dimensionless mass Mµ and angular momentum Jµ² of the star, besides its compactness. Further specifying the boson particle mass µ determines the corresponding physical quantities M, J (see below). Thus, µ can be set as a fundamental scale of the system and all quantities can be simply rescaled. Alternatively, we can trivially rescale the simulations to any fixed total mass, which in turn determines the mass of the boson. We also remark that, in contrast with black holes, the angular momentum of PSs is quantized through the relation J = m̄Q, where Q is the Noether charge of the star, which counts the number of bosonic particles. This means that an infinitesimal loss/gain of angular momentum must be accompanied by a corresponding loss/gain of particles. We restrict ourselves to the case of mergers of dynamically stable m̄ = 1 spinning PSs. For our range of frequencies, the PS models have masses and angular momenta that vary from (ω/µ, Mµ, Jµ²) = (0.9300, 0.622, 0.637) to (0.8000, 0.946, 1.008). All of these mergers lead to a post-merger remnant that is compact enough to collapse into a Kerr black hole. Therefore, our waveform catalogue is well suited for the analysis of LVK GW events under the PS merger scenario. A. Single star The dynamical robustness and formation of spinning PSs were addressed in [21,22]. Here we illustrate the stability properties of these objects by considering the case of a single isolated spinning PS. We fix the oscillation frequency to ω/µ = 0.90, corresponding to a mass Mµ = 0.726 and the associated angular momentum. In addition, the bottom panel of the corresponding figure shows the minimum value of the lapse function α. At the highest resolution, the deviations of the final mass, angular momentum, and lapse function with respect to the initial values are less than 0.4% at tµ = 8000. The resolution is comparable to that of the merger runs, for which we added more refinement levels at the centre of the grid to take into account black-hole formation. The initial deviations come from interpolation errors when mapping the initial data, computed on a compactified grid, to the Cartesian grid used for the numerical simulations. The convergence order of our code under grid resolution is found to be around 2.5. B. Head-on mergers of Proca stars We now move on to study head-on collisions of PSs and the corresponding GW emission. Figs. 5 and 6 show the energy density of the Proca field on the equatorial plane (z = 0) for two families of collisions, respectively characterised by the primary-star frequencies ω₁/µ = 0.8300 and ω₁/µ = 0.9100, and four illustrative secondary-star frequencies ω₂/µ. These figures exemplify the dynamics of all PS binaries in our dataset. In particular, we note that the collisions are not strictly head-on, since the objects do not follow a straight line. Instead, the trajectories of both stars are curved due to the frame-dragging induced by the stars' spins. All mergers lead to the formation of a Kerr black hole with a faint Proca-field remnant around the horizon, which therefore stores a small fraction of the initial Proca mass and angular momentum [35,48]. The final black holes do not always form promptly, as for some values of the PS parameters the collisions exhibit the formation of a transient hypermassive PS. The collisions produce a burst of GWs, similar to the signals from head-on collisions of black holes [24,50]. We note that the gravitational waveform sourced by head-on collisions is fundamentally different from that produced in orbital binary mergers. First, it is obviously much shorter, as there is no inspiral phase preceding the merger. Second, the radiated energy is significantly lower (only around 0.2% of the initial energy of the system, whereas in orbital mergers it reaches a few percent) due to the low velocities of the two objects at merger, caused by the fact that we release the stars from rest at rather short distances. Third, while the GW emission from orbital mergers is vastly dominated by the quadrupole ℓ = 2, m = ±2 modes, that from head-on mergers exhibits an equally dominating (ℓ, m) = (2, 0) mode [24,31,33]. Fig. 7 shows the dominant ℓ = m = 2 mode of the Newman-Penrose scalar Ψ₄ in the equal-mass case, for six different PS models. The frequency of the GWs increases with increasing ω/µ, i.e., with decreasing mass and compactness of the PSs. The morphology of the waveforms changes as well: the less compact the stars, the longer the pre-collapse signal before black-hole formation, which corresponds to the peak emission and is followed by the ringdown phase. For high-ω/µ collisions, the transient hypermassive PS that results from the merger has a total mass that is closer (as ω/µ grows) to the maximum mass that defines the linear stability limit of such objects, therefore surviving for a longer time, emitting GWs, before collapsing to a black hole. Fig. 8 shows the ℓ = m = 2 and ℓ = m = 3 modes of Ψ₄ for one equal-mass and five unequal-mass PS binary mergers, with fixed ω₁/µ = 0.8300 and varying ω₂/µ (for animations of the full set of GW signals from unequal-mass collisions, see [49]). The waveforms look similar to those of the equal-mass cases in terms of shape, duration, and frequency. However, they also exhibit important differences.
First, while in equal-mass collisions odd-m modes (e.g. the ℓ = m = 3 mode) are almost completely suppressed (modulo numerical noise) with respect to the dominant ℓ = m = 2 mode, due to the symmetries of the problem (see the top middle panel of Fig. 8), these modes are triggered in unequal-mass systems and can have a significant contribution (see also [35]). In addition, and most importantly, the morphology of the ℓ = m = 2 mode manifests a clear non-monotonic dependence on the frequency of the secondary star ω₂/µ for fixed ω₁/µ. In particular, the waveform amplitude varies periodically as we increase ω₂/µ from 0.8000 to 0.9300. For example, for ω₁/µ = 0.8300, we find that the amplitude maxima correspond to ω₂/µ equal to 0.8000, 0.8300, 0.8600, 0.8900, and 0.9225, while the minima are found when ω₂/µ is equal to 0.8150, 0.8450, 0.8750, and 0.9100. This effect is not present in mergers of other types of compact objects, such as binary black holes or binary neutron stars. The non-trivial dependence of the gravitational radiation on ω₂/µ for fixed ω₁/µ becomes more evident when studying the total emitted energy, obtained from the GW luminosity as
$$E_{\rm GW} = \lim_{r\to\infty} \frac{r^2}{16\pi} \int dt \oint d\Omega \left| \int_{-\infty}^{t} \Psi_4 \, dt' \right|^2 .$$
Fig. 9 shows the total GW energy as a function of ω/µ or ω₂/µ for the equal-mass case (top left panel) and three illustrative unequal-mass cases, corresponding to fixed values of ω₁/µ = {0.8300, 0.8950, 0.9100} (top right, bottom left, and bottom right panels of Fig. 9, respectively). In the equal-mass case, the emitted energy decreases for decreasing ω/µ, reaching a minimum at ω/µ ∼ 0.8625 and increasing onwards. While naively one would expect the emitted energy to depend primarily on the total mass and compactness of the stars, the described trend depends in a non-trivial way on the dynamics of the binary system, the trajectories followed by the stars due to frame-dragging, and the masses and angular momenta of the PSs. On the other hand, the unequal-mass cases yield the interesting results already hinted at above. We find that the GW energy displays a distinctive oscillatory pattern as a function of ω₂/µ for fixed ω₁/µ. As Fig. 9 shows, the energy maxima are located at intervals Δω_max/µ = (ω₁ − ω₂)/µ ∼ 0.03 k and the minima at intervals Δω_min/µ ∼ 0.015 (2k + 1), with k ∈ ℤ. The values of these two intervals between maxima or minima, Δω_min/µ and Δω_max/µ, are completely independent of ω₁/µ. This result can be explained by the wave-like nature of PSs and their fundamental oscillation frequencies, which lead to interference between the different frequencies in the unequal-mass case. Such interference behaviour was already found in equal-mass head-on collisions of scalar boson stars with a non-zero initial phase difference [31,32,51,52], but its impact on the GW emission was not systematically explored.
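For reference, the energy integral above can be evaluated directly from the extracted Ψ₄ modes. Below is a minimal sketch (our own illustration of the standard quadrature, assuming uniformly sampled mode time series at a fixed extraction radius; the cumulative-sum integrators are deliberately crude):

```python
import numpy as np

def gw_energy_from_psi4_modes(t, modes, r_ext):
    """Radiated GW energy accumulated from Psi_4 multipole time series:
    E(t) = (r^2 / 16 pi) * sum_lm  int_0^t | int_0^t' psi_lm dt'' |^2 dt'.

    t     : 1-D array of uniformly spaced times
    modes : dict {(l, m): complex array psi4_lm(t)} extracted at radius r_ext
    """
    dt = t[1] - t[0]
    total = np.zeros_like(t, dtype=float)
    for psi in modes.values():
        news = np.cumsum(psi) * dt          # first time integral of psi4
        total += np.abs(news)**2
    # no extrapolation to infinite radius: fixed-extraction-radius estimate
    return np.cumsum(total) * dt * r_ext**2 / (16.0 * np.pi)
```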
C. The role of the relative phase at merger To explain the GW emission pattern, we assume that at the time of the collision we have a linear superposition of both stars (same Proca field) oscillating at different frequencies. Then, removing the m̄ϕ-dependence, which will not affect the interference, and the initial phase ε, it can be shown that
$$\mathrm{Re}(A) \sim \cos(\omega_1 t) + \cos(\omega_2 t) = 2 \cos\!\left(\frac{(\omega_1 - \omega_2)\,t}{2}\right) \cos\!\left(\frac{(\omega_1 + \omega_2)\,t}{2}\right). \qquad (26)$$
Therefore, the complex amplitude of the Proca field will be modulated by the envelope
$$|A(t)| \sim \left| \cos\!\left(\frac{(\omega_1 - \omega_2)\,t}{2}\right) \right| . \qquad (27)$$
Since the initial separation between the stars is the same for all cases, Dµ = 40, the time of the collision is also approximately the same, t_col µ ∼ 210. This is precisely the time at which the maximum (constructive interference) of the envelope in Eqs. (26) and (27) is reached if (ω₁ − ω₂) t_col = 2kπ, i.e., if Δω_max/µ = (ω₁ − ω₂)/µ ∼ 0.03 k. On the other hand, the minimum (destructive interference) at the same time t_col is found for
$$(\omega_1 - \omega_2)\, t_{\rm col} = (2k + 1)\,\pi ,$$
which gives Δω_min/µ = (ω₁ − ω₂)/µ ∼ 0.015 (2k + 1). This simple linear analysis explains the periodicity between maxima and minima observed in Fig. 9, which therefore depends on the initial distance between the stars. This analysis, however, must be regarded as an approximation, since the emission also depends on other factors, such as the dynamics of the collision, the radii of the stars and the time of merger, which could give rise to additional features in the GW energy, as hinted at by the bottom right panel of Fig. 9. Thus, we anticipate that an increase in Dµ will increase t_col and will decrease both Δω_max and Δω_min. Accordingly, if the whole merger takes more time to reach collapse, the factor Δω/µ will be low enough that the beat period becomes longer than the lifetime of the transient hypermassive PS. Depending on the amplitude of the envelope, the GW emission could be critically affected. Gravitational radiation greatly depends on the distribution and amplitude of the energy density. The square of the amplitude of the Proca field is proportional to the energy density (see Eq. (2)) and can be related to the amplitude of the GW emission. To illustrate this, Fig. 10 shows the total energy emitted for the models with fixed ω₁/µ = 0.8700, computed from the simulations, together with the estimated Proca-field amplitude at the collision time, $|\cos((\omega_1 - \omega_2)\, t_{\rm col}/2)|$. Removing the drift of the energy due to the dynamics and variations in the total mass, we find an excellent overall agreement, in particular in the location of the maxima and minima. We note that we do not find null GW emission when Δω/µ = (2k + 1) 0.015, probably because there is no perfect cancellation of the Proca field during the whole merger process. We stress that while this linear argument is a remarkably good approximation, it is not really valid to explain a complete destructive interference of the stars.
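The linear beat argument above is easy to evaluate explicitly. The sketch below (illustrative Python) reproduces the quoted spacings for t_col µ ∼ 210 and evaluates the envelope of Eq. (27) at the collision time:

```python
import numpy as np

# Predicted spacing of energy maxima/minima from the linear beat argument,
# using the collision time t_col * mu ~ 210 quoted for D * mu = 40.
t_col = 210.0  # in units of 1/mu

d_omega_max = 2.0 * np.pi / t_col   # constructive: d_omega * t_col = 2 k pi
d_omega_min = np.pi / t_col         # destructive:  d_omega * t_col = (2k+1) pi

print(f"maxima spacing ~ {d_omega_max:.4f} * k       (paper: ~0.03 k)")
print(f"minima spacing ~ {d_omega_min:.4f} * (2k+1)  (paper: ~0.015 (2k+1))")

def envelope(omega1, omega2, t=t_col):
    """Beat envelope of the superposed field at the collision time, Eq. (27)."""
    return np.abs(np.cos(0.5 * (omega1 - omega2) * t))

omega2 = np.arange(0.80, 0.9301, 0.0025)
print(envelope(0.83, omega2).round(2))   # oscillatory pattern vs omega_2
```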
D. The role of the initial relative distance We now explore the impact of (implicitly) varying the relative phase at merger by changing the time of the collision t_col µ. To this end, we place the stars at two additional initial separations, namely Dµ = 30 and 45. We repeat the simulations with these setups for binaries with fixed primary frequency ω₁/µ = 0.8000 and secondary frequency in the interval ω₂/µ ∈ [0.8000, 0.9300], varied in steps Δω₂/µ = 0.0025. Our results are shown in Fig. 11. The top left panel corresponds to the energy radiated in GWs. This exhibits the same global decreasing trend and periodic oscillations, with local maxima and minima as a function of ω₂/µ, for all values of the initial separation distance. However, Δω_max/µ and Δω_min/µ are found to depend on Dµ (and t_col µ); the collision times change accordingly with the initial separation. In addition, the top right panel of Fig. 11 shows the GW energy emitted by an unequal-mass binary with ω₁/µ = 0.8000 and ω₂/µ = 0.8450 as a function of the initial separation. The GW energy does not depend monotonically on the distance, but instead displays an oscillatory pattern. Moreover, the bottom panels show the ℓ = m = 2 gravitational waveforms for two unequal-mass cases and three initial separations. These two plots illustrate that the initial distance is an important parameter of the system, as it can change the morphology and energy of the emitted GWs for the same binary stars. E. The role of the initial phases The fact that the initial separation plays an important role in the dynamics and interactions of the two PSs raises the question of whether the initial phase of the stars may cause a similar effect. Note that up to this point we have kept the same phase for both stars (zero initial phase difference), as we have focused on the simplest possible scenario. Recall that while the energy density of PSs is axisymmetric, their real and imaginary parts are not. Therefore, different phases lead to different orientations of the real and imaginary parts at the time of the collision, which in turn yields different results that could potentially reveal the inner complex structure of these stars (for instance, the dipolar distribution of the real and imaginary parts of the Proca field for an m̄ = 1 spinning star). To test this idea, we perform several simulations of a binary with ω₁/µ = 0.8000 and ω₂/µ = 0.8450, varying the initial phase ε in Eq. (16). To check that the key parameter at play is the relative phase of the stars and not their global ones, we first vary the phase of both stars, always keeping the phase difference equal to zero, Δε = 0 with ε₁ = ε₂. Fig. 12 shows the time evolution of the energy density (leftmost column) and of the real part of the scalar potential X_φ for different values of the phase ε = {0, π/4, π/2} (remaining columns). The first column shows that even when the orientation of the components of the Proca field (in this case the scalar potential) is different, there is no change at the level of the energy density. No differences are found in the dynamics of the binary, the final object, or the gravitational waveform. These are all completely independent of the initial phase. Therefore, the inner structure and dipolar distribution (m̄ = 1) of the real and imaginary parts of the star do not play a role in the collisions. We note that the real part of the scalar potential shows an m̄ = 5 distribution after the collapse and black-hole formation (as discussed in [48]; see also [35]) that could trigger the development of the superradiant instability, depending on the final spin of the black hole. However, this would happen on a timescale beyond current computational capabilities. Next, we study the effect of varying the initial separation: Fig. 13 exhibits the GW energy as a function of distance for an equal-mass binary with ω/µ = 0.8000. In both cases we observe the same trend discussed in the top left panel of Fig. 9, together with the corresponding oscillatory pattern for fixed ω/µ. We note that, unlike the unequal-mass case, the top panel of Fig. 13 lacks the maxima and minima arising from the constructive and destructive interferences, as in the equal-mass case we always have ω₁ = ω₂. Finally, we explore how the relative phase Δε = |ε₁ − ε₂| impacts the GW energy. Again, even if we change the initial phase, the initial energy density of the stars is independent of the phase. However, the relative phase will change the interference pattern and the dynamics of the Proca field at the time of the collision.
From the amplitude of the Proca field we see that varying Δε produces a similar effect to changing the initial separation (and t_col). The stars merge with a different internal configuration, producing a different GW emission. This is indeed what we find, as shown in Fig. 14, where we plot the GW energy for one equal-mass case (ω/µ = 0.8000) and one unequal-mass case (ω₁/µ = 0.8000, ω₂/µ = 0.8450), together with the analytical fit from Eq. (30), taking into account that there is no perfect destructive interference that would lead to zero emission. Compared to the Δε = 0 situation, the most luminous collision now emits about 25% more energy in the form of GWs in the equal-mass case and about 35% more in the unequal-mass case. The relative phase Δε also alters the mode-emission structure of the source and the frequency content of the modes (or, equivalently, their morphology). In particular, the left panel of Fig. 15 shows the frequency content, by means of the amplitude of the Fourier transform, of the quadrupole ℓ = m = 2 mode of an unequal-mass PS merger as a function of Δε. It can be noted how variations of this parameter influence not only the amplitude of the mode, therefore impacting the observability of the source, but also greatly modify its frequency content. This suggests that this effect (or rather the parameter Δε) could actually be measurable in a Bayesian parameter-inference framework. The effect of Δε in equal-mass mergers is particularly useful to understand the potential impact of this parameter in GW data analysis as a possible smoking gun to distinguish PS mergers from vanilla black-hole mergers (equal masses and aligned, or zero, spins). In this situation, for the case of black-hole mergers, odd-m emission modes are exactly suppressed due to the symmetry of the source. The same is true, as expected, for the case of PSs when we set Δε = 0. The right panel of Fig. 15, however, shows that the introduction of Δε ≠ 0 activates the ℓ = m = 3 mode during merger and ringdown. This reflects the fact that the phase difference between the stars breaks the symmetry of the source. While at the moment we cannot perform simulations for the case of quasi-circular PS mergers (for lack of constraint-satisfying initial data), we anticipate that this effect would lead to an inconsistency between the binary parameters inferred from the inspiral stage and the corresponding ringdown emission modes of the final black hole, if the source were assumed at face value to be a black-hole merger. Moreover, such a signature would represent a smoking gun of the non-black-hole nature of the merging objects. We leave the quantitative exploration of this possibility for future work.
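A simple way to picture the combined role of Δω and Δε is to add the relative phase to the beat phase at merger. We stress that the expression below is our own hypothesized envelope, consistent with the qualitative discussion above but not necessarily the exact form of the paper's Eq. (30):

```python
import numpy as np

def interference_envelope(delta_omega, delta_eps, t_col=210.0):
    """Hypothesized amplitude envelope of the superposed Proca field at
    merger when both a frequency difference and a relative phase are
    present: |cos((delta_omega * t_col + delta_eps) / 2)|.
    This is an illustrative reading of the linear argument in the text,
    not the paper's fitted formula."""
    return np.abs(np.cos(0.5 * (delta_omega * t_col + delta_eps)))

# Varying the relative phase shifts the interference pattern, mimicking
# a change of collision time / initial separation:
for eps in np.linspace(0.0, 2.0 * np.pi, 7):
    print(f"delta_eps = {eps:4.2f}: envelope = {interference_envelope(0.045, eps):.3f}")
```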
V. CONCLUSIONS Black holes and neutron stars are widely considered the most plausible compact objects populating the Universe. However, theoretical proposals for other types of compact objects, dubbed dark or "exotic" compact objects, have also been put forward (see e.g. [1] and references therein). The brand-new field of gravitational-wave astronomy offers, potentially, the intriguing opportunity to probe those theoretical proposals. In particular, the study and characterization of the GWs from collisions of ECOs, i.e. the building of waveform template banks, seems a key requisite towards that goal, as those datasets could allow for direct comparisons with the signals produced in mergers of black holes and neutron stars. The expectation is that the distinct nature of the different families of compact objects is somewhat encoded in the GW signals each member of the class emits, hence offering a way to single them out. In order to identify the specific and subtle signatures of each type of object in their GW emission, it is crucial to produce accurate signal models that can be compared to the data collected by detectors and that can also reveal new specific phenomenology. Presently, numerical relativity offers the most accurate way to do so, particularly in the highly non-linear, strong-gravity situations produced when two compact objects merge. In this paper we have presented a catalogue of nearly 800 simulations of head-on mergers of PSs. We recently used this dataset to search for signatures of these objects in existing LIGO-Virgo data [24,26]. Here, we have performed a systematic study of the properties and gravitational-wave emission of these physical systems. Our study has revealed that the relative phase of the two PSs, an intrinsic parameter of bosonic stars that is absent for the case of black-hole mergers, has a strong impact on the GW emission. This parameter, which reflects the wave-like nature of the PSs by controlling the way the Proca field interacts with itself, impacts not only the amplitude of the emission modes (and therefore the total emitted energy) but also the frequency content of the signal and its mode structure. Interestingly, these findings suggest that such an intrinsic parameter of PS binaries could be measurable. As a particular illustration, we have shown here that the asymmetry induced by phase differences in an equal-mass PS head-on collision can trigger odd-parity (odd-m) modes during the merger-ringdown stage, which are completely suppressed for the case of equal-mass (and equal-spin) binary black-hole mergers. We argue that this may evidence the non-black-hole nature of the merging objects. The LVK event GW190521 has represented the first example of a GW signal that can be explained both in the classic framework of binary black-hole mergers and in the less common framework of PS mergers [24]. However, conclusively probing the existence of the latter class of ECOs will require either the accumulation of evidence in favour of this scenario through the systematic comparison of signals to waveform catalogues, and/or the observation, by current or future LIGO-Virgo-KAGRA detectors or by third-generation detectors such as the Einstein Telescope [53], of a signal with distinct signatures that cannot be reproduced by black-hole mergers. On the one hand, the GW catalogue we have discussed in this paper represents a first step towards such systematic comparisons. On the other hand, our results suggest that the wave-like nature of PSs, via the impact of the relative phase parameter Δε on the GW emission, might serve as a distinct smoking gun for the existence of these objects. In this work we have focused on the particular case of head-on collisions due to their technical and computational simplicity. In the future we plan to extend the catalogue to eccentric and orbital quasi-circular mergers of bosonic stars. This will help us to firmly establish whether the GW interference patterns found here are specific to, or can be amplified by, the geometry of the collisions considered in this paper, and thus gauge the potential imprint they may actually have in the GW emission.
(Figure caption fragment: quantities shown for different simulation resolution levels; the second and fourth panels show differences between resolutions, scaled for fourth-order convergence.)
Understanding NMR relaxometry of partially water-saturated rocks Nuclear magnetic resonance (NMR) relaxometry measurements are commonly used to characterize the storage and transport properties of water-saturated rocks. Estimations of these properties are based on the direct link of the initial NMR signal amplitude to porosity (water content) and of the NMR relaxation time to pore size. Herein, pore shapes are usually assumed to be spherical or cylindrical. However, the NMR response at partial water saturation for natural sediments and rocks may differ strongly from the responses calculated for spherical or cylindrical pores, because these pore shapes do not account for water menisci remaining in the corners of desaturated angular pores. Therefore, we consider a bundle of pores with triangular cross sections. We introduce analytical solutions of the NMR equations at partial saturation of these pores, which account for the water menisci of desaturated pores. After developing equations that describe the water distribution inside the pores, we calculate the NMR response at partial saturation for imbibition and drainage based on the deduced water distributions. For this pore model, the NMR amplitudes and NMR relaxation times at partial water saturation strongly depend on pore shape, i.e., they arise from the capillary-pressure- and pore-shape-dependent water distribution in desaturated pores with triangular cross sections. Even so, the NMR relaxation time at full saturation depends only on the surface-to-volume ratio of the pore. Moreover, we show the qualitative agreement of the saturation-dependent relaxation-time distributions of our model with those observed for rocks and soils. Introduction Understanding multi-phase flow processes in porous rocks and soils is vital for addressing a number of problems in geosciences, such as oil and gas recovery or vadose-zone processes, which influence groundwater recharge and evaporation. Effective permeability, which is defined as the permeability of a fluid in the presence of another fluid, is the decisive parameter for fluid transport and depends on fluid saturation, wetting condition, and pore structure. In addition, the saturation history influences the fluid content and the effective permeability (for a specific pressure), which are different for imbibition and drainage. A method considered suitable for determining the water content of rocks non-invasively is nuclear magnetic resonance (NMR), because the NMR initial signal amplitudes are directly proportional to the hydrogen content in the pore space, and the NMR relaxation times are linked to the size of the water-containing pores in the rock. In a two-phase system of water and air, only the water contributes to the NMR signal response. Therefore, NMR is widely used for estimating transport and storage properties of rocks and sediments (Kenyon, 1997; Seevers, 1966; Fleury et al., 2001; Arnold et al., 2006).
In recent years, several researchers have studied the relationship between NMR and multiphase flow behavior on the pore scale to better understand and infer the storage and transport properties of partially saturated rocks or sediments (e.g., Chen et al., 1994; Liaw et al., 1996; Ioannidis et al., 2006; Jia et al., 2007; Al-Mahrooqi et al., 2006; Costabel and Yaramanci, 2011, 2013; Talabi et al., 2009). As an extension of this research, we study the relationship between the water distribution inside the pores of a partially saturated rock and the system's NMR response by using bundles of pores with triangular cross sections. While Al-Mahrooqi et al. (2006) used a similar modeling approach to infer the wettability properties in oil-water systems, this study investigates the evolution of the NMR relaxation-time spectra during drainage and imbibition. For this purpose, we consider a capillary pore ensemble that is partially saturated with water and air. Traditionally, the pores within this ensemble are assumed to have a cylindrical geometry. Depending on pressure, cylindrical capillaries are either water- or air-filled, and thus they either contribute to an NMR response or not. Consequently, the NMR relaxation times of partially water-saturated capillary pore bundles always remain subsets of the fully saturated system's relaxation-time distribution; i.e., they are a function inside the envelope of the distribution curve at full saturation (see Fig. 1). However, in porous rocks, which are formed by the aggregation of grains, the pore geometry is usually more complex (Lenormand et al., 1983; Ransohoff and Radke, 1987; Dong and Chatzis, 1995) and may exhibit angular and slit-shaped pore cross sections rather than cylindrical capillaries or spheres (Fig. 2a). For example, in tight gas reservoir rocks, Desbois et al. (2011) found three types of pore shapes that are controlled by the organization of clay-sheet aggregates: (i) elongated or slit-shaped, (ii) triangular, and (iii) multi-angular cross sections. The relaxation-time distribution functions derived from NMR measurements for such partially saturated rocks are frequently found to be shifted towards shorter relaxation times outside the original envelope observed for a fully saturated sample (Fig. 2b) (e.g., Applied Reservoir Technology Ltd., 1996; Bird et al., 2005; Jaeger et al., 2009; Stingaciu, 2010; Stingaciu et al., 2010; Costabel, 2011).
In angular pores, water will remain trapped inside the pore corners even if the gas entry pressure is exceeded. Standard NMR pore models that assume cylindrical or spherical pore ensembles (e.g., Kenyon, 1997), however, do not account for such residual water (Blunt et al., 2002; Tuller et al., 1999; Or and Tuller, 2000; Tuller and Or, 2001; Thern, 2014). To overcome this limitation, we adopt an NMR modeling approach initially proposed and discussed by Costabel (2011) and present numerical simulations and analytical solutions of the NMR equations for partially saturated pores with triangular cross sections to quantify NMR signal amplitudes and relaxation times. The NMR response of a triangular capillary during drainage and imbibition depends on the water distribution inside the capillary, which is subject to pore shape and capillary pressure. Thus, in the next chapter, we present the relationship between capillary pressure and water distribution inside cylindrical and triangular pore geometries during drainage and imbibition. For this purpose, the reduced similar geometry concept introduced by Mason and Morrow (1991) is used. Subsequently, based on the spatial water distribution, an analytical solution of the NMR diffusion equation (Torrey, 1956; Brownstein and Tarr, 1979) for partially saturated triangular capillaries is derived and tested by numerical simulations (Mohnke and Klitzsch, 2010). The derived equations are used to study the influence of the pore size distribution and the pore shape of triangular capillaries on the NMR response, in particular considering the effects of trapped water. Finally, an approach for simulating NMR signals during imbibition and drainage of triangular pore capillaries is introduced and demonstrated using synthetic pore size distributions. Water distribution during drainage and imbibition in a partially saturated triangular tube In a partially saturated pore space, a curved liquid-vapor interface called the arc meniscus (AM) arises due to the pore's capillary forces. In addition, adsorptive forces between water and matrix lead to the formation of a thin water film at the rock-air interface. Because such water films typically have a thickness below 20 nm (e.g., Toledo et al., 1990; Tokunaga and Wan, 1997), their contribution to the total water content of a porous system is small (Tuller and Or, 2001); i.e., the contribution of the film volume to the NMR amplitudes is very small with respect to the NMR signal amplitudes arising from the water trapped in the menisci, V_film ≪ V_meniscus. Therefore, for the sake of simplicity, we neglect water films in this study. In the following discussion, we consider a triangular capillary, initially filled with a perfectly wetting liquid, i.e., contact angle θ = 0°, which exhibits a constant interfacial tension σ (σ_air-water = 73 × 10⁻³ N m⁻¹ at 20 °C), and we assume that gravity forces are weak and can therefore be neglected. The two-phase capillary entry pressure, as derived by the MS-P method (Mayer and Stowe, 1965; Princen, 1969a, b, 1970), can be expressed by the Young-Laplace equation
$$p_c = \frac{\sigma}{r_{\rm AM}} , \qquad (1)$$
where r_AM is the radius of the interface arc meniscus and p_c is the minimum pressure difference necessary for a non-wetting phase, i.e., air, to invade a uniformly wetted (tri-)angular tube filled with a denser phase, i.e., water (see Fig. 3a).
Upon consideration of a pressure difference p > p_c, the non-wetting phase will begin to enter the pore and occupy the central portion of the triangle, whereas the wetting fluid, separated by the three interface arc menisci of radius r_AM, remains in the pore corners (Fig. 3a). From an original triangle ABC, a new smaller triangle A′B′C′ of similar geometry, with an inscribed circle of radius r′ = r_AM < R₀, can be constructed by means of the reduced similar geometry concept introduced by Mason and Morrow (1991) (Fig. 3b). To account for the different transport mechanisms during imbibition and drainage of the denser wetting phase, Mason and Morrow (1991) introduced two different principal displacement curvatures, with radii r_I and r_D, respectively. During imbibition of a (tri-)angular pore, the radius of curvature r_AM increases until the separate arc menisci of the corners touch and the pore fills spontaneously ("snap-off"). The critical radius of curvature r_I, which is equal to the radius of the pore's inscribing circle, for the angular pore at snap-off pressure p_I is then given by
$$r_I = \frac{2A}{P} . \qquad (2)$$
According to Eq. (2), the snap-off pressure depends on the geometry of the triangle only, i.e., on its cross-sectional area A and perimeter P. In contrast, during drainage, the threshold radius of curvature r_D = r_AM, at which the center of the fully saturated angular capillary spontaneously empties as the non-wetting fluid phase invades the pore, is given by Eq. (3), a function of the shape factor G derived in Mason and Morrow (1991), with r_D < r_I and drainage threshold pressure p_D > p_I. The dimensionless and size-independent factor G = A/P² = A′/P′² reflects the shape of the triangle, depending on its cross-sectional area A and perimeter P (A′ and P′ refer to the reduced triangle), ranging from near-slit shapes (G → 0) to the equilateral shape (G = 0.048). A detailed derivation of Eqs. (2) and (3), as a consequence of the hysteresis between drainage and imbibition, can be found in Mason and Morrow (1991). The permeability of a porous system of such triangular capillaries is strongly influenced by the shape factor G. For single-phase laminar flow in a triangular tube, the hydraulic conductance g is given by the Hagen-Poiseuille formula
$$g = \frac{k\,A^2\,G}{\mu} , \qquad (4)$$
with the cross-sectional area A, the shape factor G, the fluid viscosity µ, and k a constant accounting for the geometrical shape of the cross section; e.g., k = 0.5 for circular tubes and k = 0.6 for a tube with a cross section of an equilateral triangle (Patzek and Silin, 2001). The hydraulic conductance of an irregular triangle is closely approximated by Eq. (4) using the same constant k as for an equilateral triangle (Øren et al., 1998). Thus, for a constant cross-sectional area, the hydraulic conductance g of the pore is proportional to its shape factor G. Combining Eqs. (1)-(3) with the concept of reduced similar geometry discussed above, the degree of water saturation S_w inside a single triangular tube with cross-sectional area A₀, perimeter P₀, and radius R₀ of its inscribing circle at a given capillary pressure p_c during imbibition (I) and drainage (D) can be calculated according to
$$S_w^{I,D}(p_c) = 1 \;\; {\rm for} \;\; p_c \le p_{I,D} , \qquad S_w^{I,D}(p_c) = \frac{A_\gamma(p_c)}{A_0} \;\; {\rm for} \;\; p_c > p_{I,D} . \qquad (5, 6)$$
The total area A_γ of the triangular tube's water-retaining corners, γ_{1,2,3} (i.e., the gray areas in Figs. 4 and 5), is expressed by
$$A_\gamma = \sum_{i=1}^{3} A_{\gamma_i} , \qquad A_{\gamma_i} = r_{\rm AM}^2 \left[ \cot\frac{\gamma_i}{2} - \frac{\pi - \gamma_i}{2} \right] \qquad (7a, b)$$
with $A_{\gamma_i}$ the area of the triangle's ith water-filled corner (Tuller and Or, 2001). Consequently, the total area A_γ that is still occupied by water is equal to the difference between the (reduced) triangular pore area A′ and the area π r_AM² of its respective inscribing circle (see Fig. 3).
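To make the geometric relations above concrete, the short sketch below (a non-authoritative Python illustration; the side lengths and the assumed water viscosity are example values, not taken from the paper) evaluates the shape factor G, the snap-off radius of Eq. (2), the Young-Laplace pressure of Eq. (1), and the conductance of Eq. (4):

```python
import numpy as np

SIGMA = 73e-3   # air-water interfacial tension [N/m] at 20 C
MU_W = 1.0e-3   # dynamic viscosity of water [Pa s] (assumed)

def triangle_geometry(a, b, c):
    """Area A, perimeter P and shape factor G = A/P**2 of a triangle
    with side lengths a, b, c (Heron's formula)."""
    P = a + b + c
    s = 0.5 * P
    A = np.sqrt(s * (s - a) * (s - b) * (s - c))
    return A, P, A / P**2

def snap_off_radius(A, P):
    """Critical imbibition (snap-off) radius r_I = 2A/P, Eq. (2)."""
    return 2.0 * A / P

def entry_pressure(r_am):
    """Young-Laplace entry pressure p_c = sigma / r_AM, Eq. (1)."""
    return SIGMA / r_am

def conductance(A, G, k=0.6):
    """Hydraulic conductance g = k A^2 G / mu, Eq. (4); k = 0.6 for
    (near-)equilateral triangles (Patzek and Silin, 2001)."""
    return k * A**2 * G / MU_W

# Example: equilateral triangle with 1 micrometre side length
A, P, G = triangle_geometry(1e-6, 1e-6, 1e-6)
r_i = snap_off_radius(A, P)
print(f"G   = {G:.4f}  (equilateral limit ~0.0481)")
print(f"r_I = {r_i * 1e9:.0f} nm -> snap-off pressure p_I = "
      f"{entry_pressure(r_i) / 1e3:.0f} kPa")
print(f"g   = {conductance(A, G):.3e} m^4/(Pa s)")
```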
The above Eqs. (7a) and (7b) simplify to A_γ = (3√3 − π) r_AM²(p_c) when considering equilateral triangles, i.e., γ_{1,2,3} = π/3. The radius r_AM(p_c) of the reduced triangle's arc meniscus can be directly calculated from Eq. (1). Calculated pressure-dependent water and gas distributions during imbibition and drainage for an equilateral and an arbitrary triangular capillary are shown in Figs. 4a and 5a. The corresponding water retention curves plotted in Figs. 4b and 5b illustrate the resulting hysteresis behavior of the partially saturated system and can be subdivided into three parts. At low capillary pressures, i.e., p_c < p_I, the pore always remains fully water-saturated. In the interval p_I < p_c ≤ p_D, two separate behaviors are observed: during imbibition, the water content gradually increases with decreasing capillary pressure, while during drainage the pore still remains fully saturated. For pressure levels p_c ≥ p_D, both drainage and imbibition exhibit the same gradual decrease in water saturation. In the following section, analytical solutions for the NMR responses that arise from partially saturated arbitrary triangular tubes are derived and matched against numerical simulations by means of the generalized differential NMR diffusion equations introduced by Brownstein and Tarr (1979). NMR response for triangular capillaries The measured NMR relaxation signal M(t) is constituted by the superposition of all signal-contributing pores in a rock sample (e.g., Coates et al., 1999; Dunn et al., 2002):
$$M(t) = \frac{M_0}{V_0} \sum_i v_i \exp\!\left(-\frac{t}{T_{i,1}}\right) , \qquad (8)$$
where M₀ and V₀ are the equilibrium magnetization and the total volume of the pore system, respectively. The saturated volume of the ith pore and its corresponding longitudinal relaxation constant are given by v_i and T_{i,1}, respectively. Following the derivations of Brownstein and Tarr (1979), the inverse of the longitudinal relaxation time T₁ is linearly proportional to the surface-to-volume ratio of a pore according to
$$\frac{1}{T_1} = \frac{1}{T_{1B}} + \rho_s \frac{S_a}{V} , \qquad (9)$$
where T_{1B} is the bulk relaxation time of the free fluid and ρ_s is the surface relaxivity, a measure of how quickly protons lose their magnetization due to magnetic interactions with paramagnetic impurities and reduced correlation times at the fluid-solid interface, which can be attributed to paramagnetic ions at mineral grain surfaces. V and S_a are the pore's volume and active surface boundary, respectively. In this context, an active boundary refers to an interfacial area, i.e., the pore wall, where ρ_s > 0 and, thus, enhanced NMR relaxation occurs as the molecules diffuse at the pore walls. This model, however, is based on the general assumption of a relaxation regime that is dominated by surface relaxation processes (fast diffusion); i.e., the fluid molecules move sufficiently quickly and thus explore all parts of the pore volume several times with respect to the timescale (∼ T₁) of the experiment. Upon consideration of a long (triangular) capillary, its surface-to-volume ratio equals its perimeter-to-cross-section ratio, i.e., S/V = P/A. Consequently, Eq. (9) can be written as
$$\frac{1}{T_1} = \frac{1}{T_{1B}} + \rho_s \frac{P_0}{A_0} , \qquad (10)$$
where P₀ is the saturated tube's (active) perimeter and A₀ its cross-sectional area; for a circular cross section, P₀/A₀ = 2/r₀, with r₀ being the capillary radius. Hence, the relaxation rate of a fully saturated arbitrary triangular pore ABC can be expressed in terms of its shape factor G and perimeter P₀:
$$\frac{1}{T_1} = \frac{1}{T_{1B}} + \frac{\rho_s}{G\,P_0} , \qquad P_0 = L_{AB} + L_{BC} + L_{CA} , \quad A_0 = \tfrac{1}{2} L_{AB} L_{CA} \sin\gamma_A , \qquad (11)$$
where L_AB, L_BC, and L_CA are the lengths of the triangle's sides and γ_A is the angle at corner A (see Fig. 3).
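As a quick check of Eq. (11) as reconstructed above, the following sketch (illustrative Python; ρ_s and T_{1B} are the example values used later in the paper's simulations) computes the relaxation time of fully saturated triangular tubes of different shape factors:

```python
import numpy as np

T1B = 3.0      # s, bulk relaxation time (value used later in the paper)
RHO_S = 10e-6  # m/s, surface relaxivity (10 um/s, value used later in the paper)

def t1_full(G, P0):
    """Longitudinal relaxation time of a fully saturated triangular tube,
    1/T1 = 1/T1B + rho_s / (G * P0)  (Eq. (11) as reconstructed above)."""
    return 1.0 / (1.0 / T1B + RHO_S / (G * P0))

def shape(a, b, c):
    """Return (A, P, G) of a triangle with side lengths a, b, c."""
    P = a + b + c
    s = 0.5 * P
    A = np.sqrt(s * (s - a) * (s - b) * (s - c))
    return A, P, A / P**2

# Equilateral triangle vs. a flatter triangle of equal perimeter:
for sides in [(1e-6, 1e-6, 1e-6), (1.3e-6, 1.3e-6, 0.4e-6)]:
    A, P, G = shape(*sides)
    print(f"sides {sides}: G = {G:.4f}, T1 = {t1_full(G, P) * 1e3:.2f} ms")
```

The smaller shape factor of the flatter triangle yields a shorter relaxation time, matching the trend described for Fig. 6 below.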
As illustrated in Fig. 6, the relaxation times of a fully saturated pore decrease with decreasing pore shape factor G (and thus decreasing hydraulic conductance) and with increasing pore perimeter P. By reducing one angle from 60 to 0° while fixing another at 60°, we increase P/A for a constant cross-sectional area A. In the special case of an equilateral triangular capillary, i.e., γ_A = γ_B = γ_C = π/3, Eq. (11) simplifies to
$$\frac{1}{T_1} = \frac{1}{T_{1B}} + \frac{12\sqrt{3}\,\rho_s}{P_0} . \qquad (12)$$
Now we consider the previously discussed water-air system of a partially saturated equilateral triangular capillary. Here, the NMR signal will originate from the water retained in the corners, replacing A₀ in Eq. (10) with the effective area A_γ as derived in Eqs. (7a) and (7b). A_γ reflects the actual pore fraction that contributes to the NMR signal, i.e., the portion of the pore area A₀ that still remains occupied by water. Supposing the air-water interface to be a passive boundary with respect to NMR surface relaxivity, i.e., ρ_s = 0, the effective active boundary is exclusively controlled by the pore-wall segments (ρ_s > 0) in contact with water (the wetting phase) (Fig. 7). Thus, the active perimeter of such a partially saturated triangular capillary is equal to the pressure-dependent perimeter of its reduced triangle, P(r_{I,D}(p_c)), according to
$$P_\gamma = \sum_{i=1}^{3} P_{\gamma_i} , \qquad (13)$$
with
$$P_{\gamma_i} = 2\, r_{\rm AM} \cot\frac{\gamma_i}{2} \qquad (14)$$
being the perimeter of the ith water-filled corner. Consequently, the NMR relaxation rates and the NMR signal (amplitude) evolution during drainage and imbibition of a single equilateral triangular capillary can be expressed by
$$\frac{1}{T_1^{I,D}(p_c)} = \frac{1}{T_{1B}} + \rho_s \frac{P_\gamma(p_c)}{A_\gamma(p_c)} , \quad S_w^{I,D} < 1 , \qquad (15)$$
and
$$M(t, p_c) = M_0\, S_w^{I,D}(p_c)\, \exp\!\left(-\frac{t}{T_1^{I,D}(p_c)}\right) , \qquad (16)$$
respectively. Figure 8 illustrates the pressure-dependent water distribution inside a single equilateral triangular capillary (with a side length of 1 µm) during drainage (a) and the evolution of the longitudinal magnetization (b). As the water saturation is reduced with increasing pressure, both the NMR amplitudes and relaxation times (c) decrease. Note that only a single characteristic relaxation time at each saturation degree is observed, since each corner has the same P_γ/A_γ and consequently the same T₁ value. In contrast, each water-filled corner of a partially saturated non-equilateral triangle, i.e., γ₁ ≠ γ₂ ≠ γ₃, can have a different P_γ/A_γ ratio, and thus will show a different relaxation time and amplitude. As a result, depending on its individual shape, even a single partially saturated pore exhibits a multi-exponential NMR relaxation behavior based on Eq. (8), according to
$$M(t) = M_0 \sum_{i=1}^{3} \frac{A_{\gamma_i}}{A_0} \exp\!\left(-\frac{t}{T_{1,\gamma_i}}\right) , \qquad (17)$$
with T_{1,γ_i} and A_{γ_i}/A₀ being the characteristic relaxation time and amplitude contribution of the ith corner of the triangle, respectively. Figure 9 exemplifies such multi-exponential relaxation behavior for a pore with a right-triangle geometry with angles (γ₁ = 30°, γ₂ = 60°, γ₃ = 90°) and the same cross-sectional area as the equilateral pores in Fig. 8 (i.e., the same NMR porosity). To test the analytical (fast diffusion) models for partially saturated triangular capillaries derived above, the calculated longitudinal NMR relaxation times and amplitudes are compared to solutions obtained from 2-D numerical simulations of the general NMR diffusion equation (Mohnke and Klitzsch, 2010):
$$\frac{\partial m}{\partial t} = D\,\nabla^2 m - \frac{m}{T_B} \quad {\rm in}\; A , \qquad (18)$$
with normalized initial values
$$m(\mathbf{r}, t = 0) = M_0 = 1 \quad {\rm in}\; A , \qquad (19)$$
and boundary conditions
$$D\,\mathbf{n}\cdot\nabla m + \rho_s\, m = 0 \quad {\rm on}\; P , \qquad (20)$$
where m is the magnetization density, D the diffusion coefficient of water, T_B the bulk relaxation time, ρ_s the interface's surface relaxivity, n the outward normal, and A and P the pore's cross-sectional area and perimeter, respectively.
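The corner quantities of Eqs. (7), (13) and (14), as reconstructed above, are simple to evaluate. The sketch below (illustrative Python; the meniscus radius and reference area are example values) computes the corner relaxation times and the three-exponential signal of Eq. (17) for the 30-60-90° pore discussed above:

```python
import numpy as np

T1B = 3.0      # s, bulk relaxation time
RHO_S = 10e-6  # m/s, surface relaxivity

def corner_t1(gamma, r_am):
    """T1 of the water retained in a corner of aperture gamma, using the
    corner perimeter P = 2 r cot(g/2) and area A = r^2 (cot(g/2) - (pi-g)/2)."""
    cot = 1.0 / np.tan(gamma / 2.0)
    p_over_a = 2.0 * cot / (r_am * (cot - (np.pi - gamma) / 2.0))
    return 1.0 / (1.0 / T1B + RHO_S * p_over_a)

def corner_signal(t, angles, r_am, A0):
    """Three-exponential signal of a desaturated triangular pore, Eq. (17)."""
    m = np.zeros_like(t, dtype=float)
    for g in angles:
        cot = 1.0 / np.tan(g / 2.0)
        a_i = r_am**2 * (cot - (np.pi - g) / 2.0)
        m += (a_i / A0) * np.exp(-t / corner_t1(g, r_am))
    return m

# Right triangle (30, 60, 90 degrees), meniscus radius 100 nm:
angles = np.deg2rad([30.0, 60.0, 90.0])
for g in angles:
    print(f"corner {np.rad2deg(g):5.1f} deg: T1 = {corner_t1(g, 1e-7) * 1e3:.2f} ms")

# Example decay, with A0 ~ area of an equilateral 1-um triangle (assumed):
t = np.array([0.0, 0.002, 0.01])
print(corner_signal(t, angles, 1e-7, 4.33e-13))
```

Note that the sharpest corner retains the most water per unit of wetted wall and therefore shows the longest of the three relaxation times.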
To demonstrate the consistency of the introduced model with the numerical results obtained by Mohnke and Klitzsch (2010), the above equations were solved numerically using finite elements to simulate the respective NMR relaxation data of the studied triangular geometries. As shown in Fig. 10, the analytically (+) calculated NMR relaxation data for drainage and imbibition of an equilateral triangular pore are in very good agreement (R² > 0.99) with the data obtained from the numerical simulations (o). The model was also matched against numerical simulations for pores with arbitrary angles. Figure 11 illustrates 2-D finite-element simulations using saturated pore corners with angles γ_i ranging from 5 to 175°, with equal active surface-to-volume ratios P_{γ_i}/A_{γ_i} = const and thus T_{1,i} = const. The simulations were compiled and compared to their respective analytical solutions. The ratios of the numerical to the analytical model results for the NMR signal amplitudes, A_γ, and relaxation times, T_{1,γ}, as a function of the corner aperture γ are shown and confirm a near-perfect correlation of R² > 0.99, with deviations generally less than 0.05 %. In this regard, the slight increase in the divergence of the relaxation-time ratios at acute and obtuse angles can be attributed to numerical errors resulting from a decrease in the finite-element grid quality due to extremely high or low x-to-y ratios at these apertures. The above model is applicable to any angular capillary geometry, such as square or octagonal cross sections. Simulated water retention curves and NMR relaxation data of partially saturated pore distributions The goal of this section is to evaluate how pore shape affects the forward-modeled NMR response of a partially saturated system of pores (a pore size distribution). As discussed earlier, the NMR relaxation time of a single water-filled capillary pore is inversely proportional to its surface-to-volume ratio. Thus, at full water saturation, the relaxation-time distribution obtained from a multi-exponential NMR relaxation signal represents the pore size distribution of the rock. At partial water saturation, it is often assumed that the NMR relaxation signal still represents the pore size distribution of the water-saturated pores (e.g., Stingaciu et al., 2010). We are going to demonstrate that this is valid for cylindrical but not for (tri-)angular pores. In contrast to cylindrical pores, capillaries with (tri-)angular cross sections may be partially water-saturated during drainage or imbibition (cf. Figs. 8 and 9) because of the water remaining in the corners. Thus, they show a different water retention behavior, and the "desaturated" pores, i.e., their arc menisci, contribute to the NMR signal. Consequently, with increasing pressure (i.e., decreasing water saturation), the NMR relaxation behavior of a partially water-saturated triangular capillary pore bundle successively shifts to signal contributions with shorter relaxation times, exceeding the original distribution at full saturation. This shift reflects the fast relaxation of residual water trapped in the pore corners (Fig. 12). This behavior in angular pore geometries is demonstrated in Fig. 13.
Here, the NMR relaxation components for a fully saturated (blue line) and partially saturated (red and green) distribution of triangular capillaries are plotted. The green and red peaks show the signals of the residual water in the pore corners. As a consequence of the reduced geometry concept, the remaining water in the corners can be considered similar in size and shape, with the same NMR relaxation time, which thus only depends on pressure and not on pore size. Therefore, with decreasing saturation, i.e., increasing pressure, the NMR signal of the arc menisci increases and shifts towards smaller relaxation times. If the non-wetting phase (air) has entered all capillaries, only one single relaxation time remains for a pore bundle of equilateral triangles. For arbitrarily shaped triangular pores, three relaxation times would remain for the desaturated pore system. Hence, the concept of a relaxation-time distribution assumed in conventional NMR inversion and interpretation approaches would no longer be valid. We applied the concept of fitting multi-exponential relaxation-time distributions to NMR transients calculated for pore bundles with circular and equilateral triangular cross sections in order to study how pore shape affects the typically shown relaxation-time distributions. Water drainage and imbibition, with water as the wetting and air as the non-wetting fluid, were investigated by simulating water retention curves and corresponding NMR relaxation signals for a log-normally distributed pore size ensemble, as shown in Fig. 14. Herein, to clarify the subsequent discussion, we focused only on the equilateral triangular capillary model; other angular pore shapes (e.g., right-angled triangles or squares) will exhibit a similar behavior. The capillary pressure curves presented in Fig. 15a were calculated from Eqs. (1), (5), and (6) for pore bundles with circular and equilateral triangular cross sections. In contrast to the water retention curves calculated for the cylindrical capillary model, a significant hysteresis between drainage and imbibition can be observed for the triangular capillary model, i.e., in terms of initial amplitudes (= saturation) and respective mean relaxation times (Fig. 15b). The corresponding NMR T₁ relaxation (saturation recovery) signals shown in Fig. 15c-e were calculated using a uniform surface relaxivity of ρ_s = 10 µm s⁻¹ and a water bulk relaxation time of T_{1,bulk} = 3 s. The NMR T₁ relaxation signals were simulated for 20 saturation levels of the drainage and imbibition curves, ranging from S = 100 % to S < 1 % water saturation. The corresponding relaxation-time distributions (Fig. 15f-h) of the NMR T₁ transients were determined by means of a regularized multi-exponential fit using a non-linear least-squares formulation solved by the Levenberg-Marquardt approach (e.g., Marquardt, 1963; Mohnke, 2010). The inverse modeling results for NMR data calculated for the drainage branches using the cylindrical capillary bundle (Fig. 15f) exhibit a shift of the distribution's maximum towards shorter relaxation times with decreasing saturation (i.e., increasing pressure). As anticipated, the derived distribution functions remain inside the envelope of the relaxation-time distribution curve at full saturation (see also Fig. 1a).
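For illustration, a regularized multi-exponential fit of the kind described above can be sketched with a non-negative least-squares solver plus zeroth-order Tikhonov damping. This is a simple stand-in for the Levenberg-Marquardt scheme cited in the text, not the authors' implementation, and the synthetic two-component signal below is our own example:

```python
import numpy as np
from scipy.optimize import nnls

def fit_t1_distribution(t, signal, t1_grid, alpha=0.05):
    """Regularized multi-exponential fit of a saturation-recovery T1 signal,
    M(t) = sum_j f_j (1 - exp(-t / T1_j)), with f_j >= 0, via an augmented
    non-negative least-squares system (Tikhonov damping of strength alpha)."""
    K = 1.0 - np.exp(-np.outer(t, 1.0 / t1_grid))      # kernel matrix
    A = np.vstack([K, alpha * np.eye(len(t1_grid))])   # damping rows
    b = np.concatenate([signal, np.zeros(len(t1_grid))])
    f, _ = nnls(A, b)
    return f

# Synthetic test: slow pore-body component plus fast corner-water component
t = np.linspace(1e-4, 10.0, 500)
true = 0.7 * (1 - np.exp(-t / 0.5)) + 0.3 * (1 - np.exp(-t / 0.01))
rng = np.random.default_rng(0)
data = true + 0.005 * rng.standard_normal(t.size)

t1_grid = np.logspace(-3, 1, 60)
f = fit_t1_distribution(t, data, t1_grid)
print("support of recovered distribution:", t1_grid[f > 0.5 * f.max()])
```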
In contrast, inversion results for equilateral triangular capillary ensembles (Fig. 15g, h), for both imbibition and drainage, show a similar shift to shorter relaxation times with decreasing saturation, but they also shift outside the initial distribution at full saturation because of NMR signals originating from trapped water in the pore corners of the desaturated triangular capillaries. The effect of the pore corners on relaxation times at low saturations is also recognizable when comparing the (geometric) mean relaxation times, normalized to the values observed at full saturation (Fig. 15b): both the drainage and the imbibition hysteresis branches of the triangular pore bundle show smaller mean relaxation times than the cylindrical pore bundle. In conclusion, the calculated inverse models for the triangular capillary bundle qualitatively agree with the behavior of the inverted NMR relaxation-time distributions at partial saturation that is frequently observed in experimental data, e.g., of the Rotliegend sandstone shown in Fig. 2.

Summary and conclusions

Experimental NMR relaxometry data and corresponding relaxation-time distributions obtained at partial water/air saturation were explained by a modification of conventional NMR pore models using triangular cross sections. The derived analytical solutions for calculating surface-dominated (fast-diffusion) NMR relaxation signals in fully and partially saturated arbitrary angular capillaries were consistent with the respective results obtained from numerical simulations of the general NMR diffusion equations. The shape and size of triangular pores can strongly influence both the NMR amplitudes and decay-time distributions and the rock's flow properties, i.e., saturation and (relative) permeability. At full saturation the NMR relaxation time depends on the surface-to-volume ratio, which in turn depends on shape when considering angular pore capillaries. At partial saturation, however, the pore shape influences the water distribution inside the pore system, and thus the NMR signal, even more strongly. In contrast to cylindrical capillaries, angular capillaries contribute to the NMR signal even after desaturation of the pore because of the water remaining in the pore corners.

In this regard, non-equilateral triangular capillaries at partial saturation exhibit a three-exponential relaxation behavior due to the different perimeter-to-surface (= surface-to-volume) ratios of the water in the pore corners, whereas the relaxation time of the trapped water in the corners depends on pressure and not on pore size. Therefore, the NMR signal at partial saturation is affected both by the surface-to-volume ratio of the water-saturated pores and by the pore shape of the desaturated pores.

Moreover, we studied the NMR response of a triangular pore bundle model by jointly simulating the water retention curves for drainage and imbibition and the corresponding NMR T1 relaxometry data. With decreasing water saturation, the simulated NMR relaxation distributions shift towards shorter relaxation times below the envelope of the initial distribution at full saturation, which is principally in agreement with the relaxation behavior observed in experimental NMR data from rocks (e.g., Fig. 2b).
Ongoing research will include further experimental validation and the implementation of the introduced approach in an inverse modeling algorithm for NMR data obtained from partially saturated rocks to predict absolute and relative permeability on laboratory and borehole scales. Without considering angular pores, the NMR signal of trapped water cannot be explained; i.e., using the classical approach of circular capillaries, one cannot find a pore size distribution that explains the relaxation-time distributions at all saturations sufficiently (e.g., Mohnke, 2014). Angular pore models, on the other hand, can account for the trapped water and thus overcome this limitation of the classical approach. Moreover, following the approach of Mohnke (2014) but considering angular pores, we strive to estimate surface relaxivity, pore size distribution, and pore shape by jointly inverting NMR data at different saturations. Based on the obtained pore size distribution and triangle shape, we expect to improve the prediction of the absolute and relative permeabilities considerably.

Figure 1. (a) NMR decay-time distributions at different water saturation levels for a classical cylindrical capillary pore distribution. (b) Concept sketch of saturated (gray) and desaturated capillaries, e.g., during drainage.

Figure 2. (a) Complex pore structure of a Rotliegend tight gas sandstone. Pore spaces are filled with tangential and hairy illite and exhibit different pore types with elongated or slit-shaped, triangular, and multi-angular cross sections. (b) T1 decay-time distributions calculated from an inverse Laplace transform performed on Rotliegend sandstone (porosity 13%, permeability 0.1 mD) at different water saturations (Sw = 21%-100%).

Figure 3. Cross sections of a partially saturated triangular tube. The arc meniscus of radius r_AM separates the invading non-wetting phase (white) from the adsorbed wetting phase (gray). (a) Original triangle ABC with side lengths L_AB, L_BC, L_CA, and radius R0 of its inscribing circle. (b) Reduced triangle A'B'C' of similar geometry. The wetting phase resides in the three corners (gray), with r = r_AM being the radius both of the three interface arc menisci of ABC and of the inscribing circle of A'B'C'.

Figure 4. (a) Modeled distribution of water (gray) and gas (white) phases in an equilateral triangular tube with a side length of 1 µm during imbibition (top) and drainage (bottom). (b) Water saturation versus capillary pressure during imbibition and drainage.

Figure 10. NMR response of an equilateral triangular capillary pore model (with a side length of 1 µm). (a) Magnetization versus T1 decay-time data of numerical and analytical (+) solutions for all applied pressure levels. (b) Cross-plot of numerically simulated and analytically calculated longitudinal T1 decay times at partial (•) and full water saturation. A corresponding water saturation versus capillary pressure diagram is shown in Fig. 4.

Figure 13. Relaxation components of the pore size distribution. At a given saturation level, all pore corners with residual saturation exhibit the same NMR magnetization and relaxation behavior, thus superposing to a single fast relaxation component (e.g., red and green bars).

Figure 15. (a) Modeled drainage and imbibition curves for circular and equilateral triangular capillary ensembles (cf. Fig. 14)
and (b) corresponding normalized mean NMR T1 relaxation times versus pressure curves. Modeled and fitted (red lines) NMR transient signals (longitudinal magnetization evolution) and corresponding inverted NMR T1 relaxation-time distributions for 20 fully and partially saturated pore size distributions ranging from < 1% to 100% saturation, using circular (c, f) and equilateral triangular capillaries during imbibition (d, g) and drainage (e, h).
The natural history of nodding syndrome

Aims. Nodding syndrome is a poorly understood acquired disorder affecting children in sub-Saharan Africa. The aetiology and pathogenesis are unknown, and no specific treatment is available. Affected children have a distinctive feature (repeated clusters of head nodding) and progressively develop many other features. In an earlier pilot study, we proposed a five-level clinical staging system. The present study aimed to describe the early features and natural history of nodding syndrome and to refine the proposed clinical stages. Methods. This was a retrospective study of the progressive development of symptoms and complications of nodding syndrome. Participants were a cohort of patients who had been identified by community health workers and were referred for treatment. A detailed history was obtained to document the chronological development of symptoms before and after the onset of head nodding, and a physical examination and disability assessment were performed by a team of clinicians and therapists. Results. A total of 210 children were recruited. The mean age at the onset of head nodding was 7.5 (SD: 3.0) years. Five overlapping clinical stages were recognised: prodromal, head nodding, convulsive seizures, multiple impairments, and severe disability stages. Clinical features before the onset of head nodding (prodromal features) included periods of staring blankly or being inattentive, complaints of dizziness, excessive sleepiness, lethargy, and general body weakness, all occurring two weeks to 24 months before nodding developed. After the onset of head nodding, patients progressively developed convulsive seizures, cognitive and psychiatric dysfunction, physical deformities, growth arrest, and eventually, in some patients, severe disability. Conclusion. The description of the natural history of nodding syndrome, and especially of the prodromal features, has the potential to provide a means for the early identification of at-risk patients and the prompt initiation of interventions before extensive brain injury develops. The wide spectrum of symptoms and complications emphasises the need for multidisciplinary investigations and care.
Nodding syndrome (NS) is a poorly understood acquired neurological disorder affecting previously normally developing children in geographically bound regions of sub-Saharan Africa. Northern Uganda, southern Tanzania and South Sudan have the highest burden of disease. There are an estimated 10,000 cases in these three countries (CDC, 2012; Idro et al., 2014; Landis et al., 2014). Recently, there have also been reports of similar patients in the Democratic Republic of Congo and the Central African Republic (Robert Colebunders, personal communication). In the affected districts of northern Uganda, the prevalence of nodding syndrome among the affected age group is 6.8 (95% CI: 5.9-7.7) per 1,000 (Iyengar et al., 2014). The aetiology is unclear (Spencer et al., 2013), but there has been a consistent epidemiological association with infection with the filarial worm Onchocerca volvulus (Kaiser et al., 1996; Ngugi et al., 2013; Colebunders et al., 2015). Recent pilot studies suggest that nodding syndrome may be a neuroinflammatory disorder, with antibodies to Onchocerca volvulus-specific proteins cross-reacting with host proteins (Idro et al., 2016; Johnson et al., 2017). Definitive studies to confirm these findings are ongoing. Affected individuals develop symptoms between the ages of three and 18 years and present with a distinct feature, head nodding. Head nodding is characterised by repeated vertical drops of the head onto the chest, 5-20 times/minute, at the sight of food, spontaneously, or in association with cold weather (Edwards, 2012; Wamala et al., 2015). The head nods have been characterised as atonic seizures (Sejvar et al., 2013), but in many children they are associated with myoclonic jerks and/or atypical absence seizures. Progressively, patients develop convulsive seizures, behaviour difficulties and psychiatric disorders, declining cognitive function, wasting and growth failure, delayed development of secondary sexual characteristics, and physical and motor disability (Idro et al., 2013a). Diagnostic EEGs show generalized slow-wave activity with or without interictal epileptiform discharges. Brain MRI shows varying degrees of cortical and cerebellar atrophy, and hippocampal changes in a minority (Idro et al., 2013a; Van Bemmel et al., 2014). The natural history of nodding syndrome is inadequately described (Idro et al., 2013b; Winkler et al., 2014). To date, there are no specifically designed prospective studies. There is no biological diagnostic test, and the early symptoms that may guide prompt recognition are unknown. Potentially, a description of these early features will facilitate the prompt initiation and implementation of interventions to arrest progression. In an earlier case series of 22 untreated patients, we proposed that the symptoms and complications of nodding syndrome probably develop over five clinical stages, but this observation is yet to be confirmed (Idro et al., 2013a). This study aimed to describe the natural history of nodding syndrome, the early features, and the progressive development of symptoms and complications of the disease. We also sought to examine whether the symptoms and complications progressively clustered in the five proposed stages and, if so, whether these stages were distinct enough to allow recognition.
Study design

This study was part of a larger epidemiologic study of nodding syndrome in Uganda. For this study, between October and November 2013, we conducted a cross-sectional survey of a retrospective cohort of patients in Pader district, obtained a detailed history of the progressive development of symptoms, and performed clinical testing to describe the functional state, co-morbidities, and complications of the disease.

Setting

Northern Uganda has only recently recovered from the devastating effects of the Lord's Resistance Army insurgency against the Government of Uganda. The region has high levels of poverty and high malaria transmission (Okello et al., 2006) and is endemic for Onchocerca volvulus (Oguttu et al., 2014; World Health Organization, 2017). The study was conducted in Pader, the district with the highest burden of nodding syndrome. Of the over 3,000 registered patients in the country in the year 2013, about 1,200 lived in Pader. Other affected districts included Kitgum, Lamwo, Gulu, Amuru, Oyam and Lira.

Participants

A case of nodding syndrome was defined according to the World Health Organization (2012) criteria as:
- a child or adolescent with head nodding on two or more occasions;
- symptom onset between the ages of three and 18 years;
- head nodding occurring at a frequency of 5-20/minute, and in whom head nodding has been observed by a trained health worker or documented on EEG/EMG;
- plus any one of:
• triggered by food or cold weather;

All suspected cases had been registered by the local Village Health Worker and at the local health centre and had only recently been initiated on symptomatic treatments at the nodding syndrome treatment centres in the district (Idro et al., 2016). These symptomatic treatments included the provision of sodium valproate plus nutritional, physical, and psychological therapy (Idro et al., 2013b).

Ethical approval and informed consent procedures

Ethical approval for the study was obtained from the Makerere University School of Medicine Research and Ethics Committee. Written informed consent was obtained from parents, or from the primary caregiver if the parent was unavailable. Because of the challenges of severe cognitive impairment in some children, no formal assent was obtained in most age-appropriate children; this requirement had been waived by the ethics committee.

Recruitment

Participants were recruited from two highly affected sub-counties in Pader district: Atanga and Awere. These two were purposively chosen because of the large number of patients and their close geographical location. In each sub-county, only the most affected villages, with at least five probable patients, were selected. A total of 11 villages were included. All probable nodding syndrome patients in each selected village were invited to participate.

Data collection

The study was conducted by a large multidisciplinary team composed of clinicians, nurses, therapists, psychologists, and a social worker. Each village was allocated a day and, in villages with many patients, up to two days. Participants were informed about the study at least two weeks earlier and were assessed at any of the village meeting places, the home of the village chair, or the local health centre. Two days before the survey, the study nurses went back to the villages and reminded the village health workers and parents about the study.
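For illustration, the sketch below encodes the core of the WHO case definition quoted above as a small Python check. The field names are hypothetical, and the supportive criteria other than the food/cold trigger, which are elided in the text above, are not reproduced.

```python
def is_probable_case(age_at_onset_years: float,
                     nodding_episodes_observed: int,
                     nods_per_minute: float,
                     observed_by_health_worker_or_eeg: bool,
                     triggered_by_food_or_cold: bool) -> bool:
    """Probable nodding syndrome per the WHO (2012) criteria quoted above."""
    core = (nodding_episodes_observed >= 2            # nodding on two or more occasions
            and 3 <= age_at_onset_years <= 18         # onset between 3 and 18 years
            and 5 <= nods_per_minute <= 20            # characteristic nod frequency
            and observed_by_health_worker_or_eeg)     # observed or EEG/EMG-documented
    # Only one supportive criterion is listed in the text; the others are elided there.
    return core and triggered_by_food_or_cold
```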
On the morning of the survey, the village health worker helped gather the parents and the patients at the agreed place. Patients who could not move were assessed in their homes. A joint general discussion on the study procedures was held with all parents and prospective participants, followed by individual written consent. Specifically developed case record forms were administered to consenting parents/patients and used to carefully document the history of each patient. This included the history of the pregnancy, birth and early development, the past medical history, the timing of the onset of head nodding, and the clinical features prior to this. Symptoms after the onset of head nodding were then documented on a timeline, together with the intervals from the onset. The type, frequency, and severity of seizures, treatments received, schooling, visual, hearing and cognitive difficulties, the ability to perform age-appropriate activities of daily living, and independence in self-care were all documented. A full physical examination, standard neurological testing, and a functional assessment of the patients were then performed. Motor function was assessed using the Gross Motor Function Classification System (McDowell, 2008), while the Strengths and Difficulties Questionnaire (SDQ) (Goodman, 1997) was used to screen for behavioural difficulties. Children screening positive for behavioural difficulties on the SDQ had psychiatric assessments using specific domains of the Mini-International Neuropsychiatric Interview for Children and Adolescents (MINI KID) (Sheehan et al., 2010).

Data and statistical analysis

Completed case record forms were double-entered into a Microsoft Access database and exported to STATA (Version 13.0, College Station, TX). The time at onset of head nodding was defined as the time the parent or primary caregiver first noticed head nodding in the participant. To delineate the stages, the timing of the symptoms and the development of the complications were all related to the onset of head nodding. To establish the clustering of symptoms within a time range, the lower and upper quartiles were used. For each symptom, the median, lower (25%), and upper (75%) quartile times at onset, either prior to or after the onset of head nodding, were determined. Symptoms with similar values of the 25% and 75% quartiles were taken to cluster together.

General description

Altogether, there were 243 patients with suspected nodding syndrome registered in the 11 villages surveyed. Four were unavailable at the time of the study (three had travelled to different relations and one was attending school in another village) and so were not assessed. On screening, 16 children were found to have been misdiagnosed with nodding syndrome; they had other forms of epilepsy and were excluded. Thirteen other children, who had data on demographic characteristics but were missing information for several other variables, were also excluded. The remaining 210 children were available for the study. There were no sex differences; just over half were male. The mean age at the onset of head nodding was 7.5 (SD: 3.0) years, and the mean duration with head nodding at the time of the survey was 6.2 (SD: 2.7) years. The majority of patients, 163/210 (77.6%), developed head nodding after the age of five years.
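A minimal sketch (with made-up onset times) of the quartile summary described under "Data and statistical analysis": for each symptom, the median and 25%/75% quartiles of onset time relative to head nodding are computed, and symptoms with similar quartile ranges are taken to cluster.

```python
import numpy as np

# Months before (-) or after (+) the onset of head nodding; values illustrative.
onset_months = {
    "lethargy": [-14, -12, -10, -8],
    "dizziness": [-2, -1, -1, -0.5],
    "convulsive seizures": [8, 14, 24, 30],
}
for symptom, times in onset_months.items():
    q25, median, q75 = np.percentile(times, [25, 50, 75])
    print(f"{symptom}: median {median:+.1f} months (IQR {q25:+.1f} to {q75:+.1f})")
```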
The prodromal or early features of nodding syndrome

The early or prodromal features of nodding syndrome were defined as symptoms that developed before the onset of head nodding. These included what parents and primary caregivers described as periods of staring blankly or being inattentive, complaints of dizziness, excessive sleepiness, lethargy, and general body weakness (table 1). One hundred and thirty participants (61.9%) reported at least one of these symptoms. Overall, the features developed between two weeks and 24 months before the onset of head nodding. Of the group, excessive sleepiness and general body weakness were the most common (17.1%), followed by dizziness (16.2%). Using the 25% and 75% quartiles of the time at onset, lethargy (median time: 10 months before the onset of head nodding), general body weakness, and decline in comprehension (median time: 2.5 months before the onset of head nodding) were the earliest symptoms to develop. Dizziness, together with periods of staring blankly and being inattentive, with a median period of a month before the onset of head nodding, were the last prodromal features to develop (figure 1). No EEGs or brain imaging of patients at this stage were obtained.

The head nodding stage

Head nodding is the pathognomonic feature of the syndrome. In the initial stages, the head nods were reported mostly in the early hours of the morning, but also while eating food and with a cold bath or breeze. The head would drop forward repeatedly, at a characteristic frequency of 5-20 nods per minute, initially for brief moments, but with time the episodes lasted several minutes. Earlier EEG studies showed that these head nods are atonic seizures, but some patients also have concurrent myoclonic jerks and atypical absences. Initially, the patients maintained awareness but, again with time, they would stare blankly, drool, and then progress to develop other seizure types, including tonic-clonic and myoclonic seizures. With continued disease progression and the development of other types of seizures, the head nodding ceased in some patients.

Features developing after the onset of head nodding

Discussion

This study aimed to describe the natural history of nodding syndrome and, in particular, the early features. Beyond the prodromal and head nodding stages, patients develop increasing frequency and intensity of convulsive seizures, as well as cognitive and emotional dysfunction; the convulsive seizure stage. Progressively, there are multiple functional impairments in behaviour and motor abilities. Some also develop deficits in speech, vision or hearing, while others develop overt psychiatric disorders. Beyond this period, deformities of the limbs, chest, and spine and faltering or failure of growth are observed. Some become bedridden; the severe disability stage.

However, not all patients progress to these advanced stages and, in many cases, marked improvement and reversal of symptoms have been observed with antiepileptic drug treatment and rehabilitation (Idro et al., 2014). The wide spectrum of clinical signs involving multiple systems, including the central nervous, motor, and endocrine systems, would in addition suggest that nodding syndrome is a multisystem disease rather than a purely neurological disorder, and it emphasises the need for multidisciplinary investigation of the aetiology and multidisciplinary care provision.

Stage Five: severe disability with limited independent mobility (the general picture is that of a severely wasted child with apathy and depressive features, including a flat affect, poor appetite, and limited speech).
The continued decline in the burden of seizures in individual patients on antiepileptic therapy, compared to reports at the height of the epidemic, points to the success and importance of antiepileptic treatment in these patients. In addition, the lower number of patients in advanced stages of the disease in Uganda today may imply a halting of disease progression in some patients and possibly a reversal of poor health with the application of the symptomatic treatments.

Overall, the timing of the progression of symptoms in this study correlates well with that described in the earlier proposed clinical stages (Idro et al., 2013a). However, the stages are not so distinct and overlap. Beyond the prodromal and the head nodding stages, patients developed convulsive seizures within one to three years and cognitive dysfunction within one to four years, while psychiatric disorders and motor difficulties overlapped within two to five years, before the onset of severe disability at between three and six years (figure 1, tables 2 and 4). Prospective studies of incident cases may more conclusively demonstrate the respective stages.

With emerging reports of nodding syndrome now in the Democratic Republic of Congo and the Central African Republic, together with the growing problem in South Sudan, the need to clearly understand the pathogenesis of this disorder and to develop specific treatment and preventive interventions could not be stronger. Furthermore, the multisystem nature of the disease demonstrated here should also be explored.

The variable progression of the disease may provide some clues. In the meantime, in the absence of a specific treatment, the symptomatic treatment protocol that has been implemented in Uganda with some success could be adopted in the neighbouring countries.
The primary limitation of this study was the use of a retrospective cohort and the challenges of recall in using parental history to describe the natural history of the disease. Again, a prospective study would be ideal, but since 2014 there have been no incident cases of nodding syndrome in Uganda. Prospective studies may now only be possible in the new areas reporting the disease. Despite this limitation, highly skilled clinicians screened the participants and used standardised forms, adapted from the previous case series, to document the progressive development of symptoms. Secondly, there is no specific biological marker or diagnostic test for nodding syndrome. It was therefore not possible to correlate the clinical stages with plasma or cerebrospinal fluid levels of a specific marker. We also did not obtain any functional (e.g. EEG) or structural imaging of the brain in the prodromal stage or in the progressive stages to correlate with the specific clinical observations. However, the recent documentation of cross-reacting antibodies in some patients with nodding syndrome, which pathologically link Onchocerca volvulus infection to nodding syndrome, is starting to provide an insight into the possible cause of the disease, may explain disease progression, and may, in the not-so-far future, provide biological markers that correlate with clinical observations. Stage-specific EEG and brain MRI features will be valuable additions in future studies. If indeed nodding syndrome is a neuroinflammatory disorder, as proposed, with antibodies to Onchocerca volvulus antigens that cross-react with host proteins, the slow progression of the disease described here, over several years, would suggest that this is an insidious process.

In conclusion, nodding syndrome is probably a multisystem disorder in which symptoms develop over several overlapping and progressively severe stages, starting with a non-specific prodromal period of variable length. A high index of suspicion and prompt recognition of especially the early features may guide the early identification of at-risk patients and promote the prompt initiation of interventions before extensive brain injury develops. The wide spectrum of symptoms and complications emphasises the need for multidisciplinary care. In addition to advancing our understanding of nodding syndrome, the clinical stages identified here will also be helpful in the development and ascertainment of end points in intervention studies.

Table 1. The early or prodromal features of nodding syndrome. * Participants who reported the presence of the symptom and the duration before head nodding.

Figure 1. The natural history of nodding syndrome.

Table 2. The symptoms and complications of nodding syndrome, and their duration after head nodding was first observed.
* Number of participants who reported the symptom and provided the duration to its development after head nodding began.

Table 3. Examples of disorders that may potentially be confused with the early stages of nodding syndrome.

Table 4. The natural history of nodding syndrome: the revised clinical stages.
HYDROCARBON-DEGRADING BACTERIA ASSOCIATED WITH INTESTINAL TRACT OF FISH FROM THE BALTIC SEA

The hydrocarbon-degrading bacterial diversity of the intestinal tract content of fish from the Baltic Sea, the Baltic cod (Gadus morhua), plaice (Platichthys flesus) and the Baltic herring (Clupea harengus), has been investigated by molecular methods: DNA extraction, amplification of the polymerase chain reaction product and sequencing of partial 16S rRNA genes. The results of this study show that dense total heterotrophic bacterial populations occur in the intestinal tract of the investigated fish. The data obtained showed that the abundance of hydrocarbon-degrading bacteria in the intestinal tract of fish varied from 2.40×10⁴ to 1.08×10⁵ cfu g⁻¹ between fish species and was still high. Phenotypic examination of the recorded hydrocarbon-degrading bacteria from the intestinal tract of the Baltic cod, plaice and the Baltic herring revealed that they belong to Aeromonas and Pseudomonas/Shewanella. The molecular species of hydrocarbon-degrading bacteria found in the digestive tract of fish from the Baltic Sea were Aeromonas veronii, Aeromonas sobria, Shewanella spp. and Acinetobacter spp. We argue that hydrocarbon-degrading bacteria in the intestinal tract of fish take part in purification processes, as do bacteria in water, and play a role in the adaptation and survival of fish chronically exposed to pollution with hydrocarbons.

Introduction

Studies on the microflora of various ecological groups of fish are necessary for the analysis of digestion mechanisms and feeding efficiency of fish from natural ichthyocenoses, the control and correction of feeding efficiency of fish of pond populations, the prevention and treatment of diseases, and the scientifically grounded control of the quality and safety of the fish stock and products (Abramova 2004). Extensive papers have been published on various aspects of the microbial flora associated with fish eggs, skin, gills and intestine, and on the relationship of the intestinal microbiota to that of the aquatic habitat. The microbial populations within the digestive tract of fish are rather dense, with numbers of microorganisms much higher than those in the surrounding water, indicating that the digestive tract provides favourable ecological niches for these organisms (Cahill 1990; Austin 2002; Verner-Jeffreys et al. 2003; Hagi et al. 2004; Sugita et al. 2005; Skrodenytė-Arbačiauskienė et al. 2006; McIntosh et al. 2008). The total number of bacteria isolated from the intestines of nine fish species by Cahill (1990) ranged from 10⁵ to 10⁸ cells per gram. The same author has shown that the density of the microbial population in the fish intestine depends on the density of microorganisms in the ambient water, distinguishing between the microflora of the intestinal contents and the microflora closely connected with the intestinal wall. Although the relative abundance and diversity of bacteria inhabiting healthy fish are of undoubted interest, the role of these bacteria seems to be more important. Fish also harbour communities of bacteria that fulfil necessary functions (Sugita et al. 1991, 1997, 2002; Romirez, Dixon 2003). According to the published data, the bacteria of the fish intestinal tract are related to numerous functions, including the degradation of complex molecules such as starch (production of amylase by intestinal bacteria), cellulose, phospholipids, chitin and collagen; the production of vitamins; etc.
Petroleum hydrocarbon pollution in marine and estuarine environments is a global problem (Atlas, Bartha 1998; Baltrėnas, Vaišis 2007). Biodegradation by natural populations of microorganisms is the basic and the most reliable mechanism by which thousands of xenobiotic pollutants, including crude oil, are eliminated from the environment (Atlas, Bartha 1998). Oil-degrading marine bacteria are of great significance in marine environments, because it is well evidenced that a number of bacteria utilize a variety of hydrocarbons in nature and that the bacterial oxidation rate may be as much as ten times the autoxidation rate. More than 100 species representing 30 microbial genera have been shown to be capable of utilizing hydrocarbons. In general, the population level of hydrocarbon utilizers and their proportion within the microbial community appear to be a sensitive index of environmental exposure to hydrocarbons. In unpolluted ecosystems, the hydrocarbon utilizers generally constitute 0.1% of the microbial population; in oil-polluted systems they can rise to much higher levels (Leahy, Colwell 1990). The effects of environmental conditions on the microbial degradation of hydrocarbons, and the effects of hydrocarbon contamination on microbial communities, are areas of great interest (Delille, Delille 2000; Pucci et al. 2000; El-Tarabily 2002). In general, microbial communities from contaminated ecosystems can adapt to the presence of pollutants, producing shifts in the metabolic and generic diversity of the community (Macnaughton et al. 1999). In this context, knowledge of the taxonomic and physiological characteristics of the autochthonous biocenosis belonging to a certain natural ecosystem can provide insights into the ecological function of these communities. The information regarding the intestinal microbial flora in fish is abundant; however, there is little information in the field of crude oil impact on the intestinal microflora and on hydrocarbon-degrading bacteria in the intestinal tract of aquatic animals (Šyvokienė, Mickėnienė 2000, 2004; King et al. 2005). Some data concerning the impact of crude oil on the intestinal bacterioflora in animals are available (George et al. 2001). Almost all natural aquatic ecosystems contain populations of bacteria that can metabolize some oil components and related compounds, even if those systems have not ever been exposed to oil or oil products (Leahy, Colwell 1990). Fish are continuously exposed to a wide range of microorganisms present in their environment. Microorganisms that inhabit the digestive tract of fish are specialized to survive and multiply there (Cahill 1990). The current study was initiated to investigate the abundance of hydrocarbon-degrading bacteria in the intestinal tract of fish from the Baltic Sea and to determine them to the species level using the 16S rRNA gene sequencing technique.

Material and methods

The fish for the microbiological investigations, i.e. the Baltic cod (Gadus morhua), plaice (Platichthys flesus) and the Baltic herring (Clupea harengus), are widespread in the Baltic Sea and were sampled near Būtingė once, in July 2006. All fish were caught before midday according to the guidelines given by Thoresson (1996). Five specimens of cod, three of plaice and three of the Baltic herring were used for the microbiological investigation of the intestinal tract. All fish were kept on ice and examined within the shortest time possible.
Populations of aerobic and facultative anaerobic heterotrophic bacteria occurring in the intestinal tract of the investigated fish were estimated using a dilution plate technique. The fish were killed by physical destruction of the brain, and the skin was then washed with 70% ethanol before the ventral surface was opened with sterile scissors. From each fish intestinal tract, 1 g of intestinal contents was removed and suspended in 10 ml of sterile saline (0.85% (w/v) NaCl). The suspension was serially diluted to 10⁻⁷ and 0.1 ml of the solution was spread in triplicate onto agar media. The media chosen were tryptone soya agar (TSA, Oxoid, Hampshire, UK) with 5% glucose and 1% NaCl added (for the isolation of total heterotrophic bacteria) and oil agar (for hydrocarbon-degrading bacteria): 1 l distilled water, 4.0 g NH₄Cl, 1.8 g K₂HPO₄, 1.2 g KH₂PO₄, 0.2 g MgSO₄·7H₂O, 0.01 g FeSO₄·7H₂O, 20.0 g agar, 2 ml crude oil as the hydrocarbon source, pH 7.4 (Ijah, Antai 2003). The same medium without the hydrocarbon source was used as a control. The inoculated plates were cultured for 5-10 days at 20 °C and the number of colonies was counted. Bacterial numbers are reported as cfu (colony-forming units) g⁻¹ of intestinal content. Colonies larger than and different from those on the substrate-free control plates were selected for further investigation. Bacterial colonies on oil agar were divided into different types according to colonial characteristics, i.e. shape, size, elevation, surface, colour, edge and opacity. Three to five representatives of each colony type were streaked and re-streaked on fresh media to obtain pure cultures. A total of 100 hydrocarbon-degrading bacterial isolates from the contents of the intestinal tract of the investigated fish were identified to genus by phenotypic properties. Each isolate was classified to the genus level using a modified version of the scheme of Sugita et al. (1981, 2002), utilizing Gram staining, morphological observation, pigmentation, motility, the OF test, the KOH test, the oxidase test, the catalase test, spore observation and the O/129 sensitivity test. Five pure hydrocarbon-degrading bacterial isolates grown on nutrient agar were used for PCR amplification. A suspension of 0.1-0.3 g of each bacterial isolate in 0.5 ml of TE buffer was distributed into three Eppendorf microtubes. Bacterial chromosomal DNA was isolated using the Genomic DNA Purification Kit #K0512 (Fermentas, Lithuania) (www.fermentas.com). The PCR temperature profile was 95 °C for 4 min, followed by 30 cycles of 95 °C for 1 min, 50 °C for 1 min, and 72 °C for 2 min. All PCR amplifications were performed with a GeneAmp PCR system in a thermocycler (Mastercycler, Eppendorf). The PCR products, which had expected sizes of about 1500, 1100 and 700 base pairs (bp), were examined for purity and size in 0.8% agarose gels, visualised by staining with ethidium bromide and photographed under UV light. Reaction products were purified and ligated into the pUC57/T vector according to the supplier's instructions and transformed into high-efficiency XL1-Blue Escherichia coli cells using a commercial InsT/Aclone™ PCR Product Cloning Kit #K1213 (Fermentas, Lithuania). Transformants were selected using blue-white screening and multiplied by culture in Luria-Bertani medium containing ampicillin. Nucleotide sequencing reactions were conducted using the plasmid templates with inserts.
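A sketch of the plate-count arithmetic implied by this protocol: 1 g of intestinal content in 10 ml of saline gives 0.1 g of sample per ml of initial suspension, 0.1 ml of a tenfold serial dilution is spread in triplicate, and results are reported as mean and SEM, as elsewhere in the paper. The colony counts and the chosen dilution below are illustrative.

```python
import math

def cfu_per_gram(colony_counts, dilution_exponent, plated_ml=0.1, g_per_ml=0.1):
    """CFU per gram from triplicate counts at dilution 10**-dilution_exponent.

    g_per_ml: grams of sample per ml of the initial suspension (1 g / 10 ml).
    """
    n = len(colony_counts)
    mean = sum(colony_counts) / n
    # Standard error of the mean of the triplicate determinations.
    sem = (sum((c - mean) ** 2 for c in colony_counts) / (n - 1)) ** 0.5 / math.sqrt(n)
    factor = 10 ** dilution_exponent / (plated_ml * g_per_ml)
    return mean * factor, sem * factor

cfu, sem = cfu_per_gram([52, 47, 58], dilution_exponent=3)
print(f"{cfu:.2e} +/- {sem:.2e} cfu per gram")
```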
A selected number of plasmids with 16S rDNA fragments were sequenced on an ABI Prism 377 DNA sequencer using the BigDye® Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems) and the primers M13/pUC sequencing primer (-46), 22-mer, and M13/pUC reverse sequencing primer (-46), 24-mer, according to the manufacturer's guidelines. The search for nucleotide sequence homology of the 16S rDNA gene was done using the BLAST algorithm, and the sequences were aligned using the CLUSTALW software program (Thompson et al. 1994; Altschul et al. 1997). Data are presented as the mean and standard error (SEM) of three determinations.

Results and discussion

The physiology of the fish gut differs in many respects from that of homeothermal animals (Buddington et al. 1997), and this can be expected to affect the bacterial numbers and the species composition. The relevant findings of the published studies on the fish intestinal microbiota can be summarized as follows: the total cultivable bacterial numbers are seldom higher than 10⁶ cfu g⁻¹ (Ringø 1993; Ringø, Olsen 1999), and many typical genera of homeothermal animals, such as Bifidobacteria, Bacteroides and Eubacterium (Isolauri et al. 2004), are either absent or only occasionally present. Lactic acid bacteria (LAB) are relatively common, but their numbers are low (Ringø et al. 2000). Typical fish-specific species and genera include, among others, Pseudomonas, Aeromonas and Vibrio (Sugita et al. 1996; Ringø, Olsen 1999; Hagi et al. 2004). The significance of the intestinal microbial community to the health and well-being of fish is poorly known, and the knowledge of dietary effects on the composition of the microbiota is also limited. The results of this study show that dense bacterial populations occur in the intestinal tract of the investigated fish (Fig. 1). These results are in accordance with those found for other fish species (Ringø, Birkbeck 1999; Šyvokienė, Mickėnienė 2000; Al-Harbi, Uddin 2004; Sugita et al. 2005). Generally, bacteria are abundant in the environment in which fish live, and it is impossible to avoid them being a component of their diet. The bacteria ingested by the fish along with their diet may adapt themselves to the environment of the gastrointestinal tract and form a symbiotic association (Ringø, Birkbeck 1999). The abundance of total heterotrophic bacteria in the intestinal tract of fish varied from 1.56×10⁵ to 6.00×10⁵ cfu g⁻¹ depending on the fish species. The differences in results may be due to differences in the feeding of the fish. Heterotrophic counts are representative of a small group of active bacteria that react immediately to changes in nutrient supply (Delille, Delille 2000). Earlier reviews (Cahill 1990; Ringø et al. 1998; Ringø, Birkbeck 1999) suggested that the gastrointestinal tract microbiota of fish are simpler than those of endothermic animals. However, recent studies on Arctic charr, Atlantic salmon (Bakke-McKellep et al. 2007) and Atlantic cod (Ringø et al. 2006) demonstrate that this statement may need revision, as several newly isolated bacterial species have not been previously reported as part of the intestinal microbiota in fish. In the present investigation, a considerable population of hydrocarbon-degrading bacteria was obtained in the intestinal tract of the investigated fish (Fig. 2). The ubiquitous distribution of oil-degrading bacteria has already been reported in a wide variety of niches (Leahy, Colwell 1990; Delille, Delille 2000).
Hydrocarbon-degrading bacteria are present in low numbers in unpolluted environments. These populations increase in number when petroleum hydrocarbons enter natural habitats (Pucci et al. 2000; El-Tarabily 2002). There are several possible sources for the establishment of the intestinal gut flora, and it is generally believed that the processes of bacterial colonization in fish are complex and depend upon the bacterial flora of the live feed and water (Ringø, Birkbeck 1999). The data obtained showed that the abundance of hydrocarbon-degrading bacteria in the intestinal tract of fish varied from 2.40×10⁴ to 1.08×10⁵ cfu g⁻¹ between fish species and was still high. Our previous investigations have shown that the addition of crude oil to the environment of molluscs resulted in an increase of two orders of magnitude in the number of hydrocarbon-degrading bacteria in the intestinal tract (Šyvokienė, Mickėnienė 2004). Hydrocarbon-degrading bacteria were also found in the liver and bile of fishes: the gold-spotted trevally (Carangoides fulvoguttatus) and the bar-cheeked coral trout (Plectropomus maculatus) (King et al. 2005). The authors argue that these fish species have potential as indicator species for assessing the effect of exposure to petroleum hydrocarbons. Phenotypic examination of the recorded hydrocarbon-degrading bacteria from the intestinal tract of the Baltic cod, plaice and the Baltic herring revealed that they belong to Aeromonas and Pseudomonas/Shewanella (Table 1):
Aeromonas: Gram-negative rods, motile, polar flagella, facultative anaerobic, oxidase-positive, catalase-positive;
Acinetobacter: Gram-negative rods, aerobic, oxidase-negative, catalase-positive;
Shewanella: Gram-negative rods, motile by polar flagella, facultative anaerobic.
The 16S rRNA gene sequences obtained from the hydrocarbon-degrading bacteria from fish were deposited in the EMBL data library under the accession numbers EU916707, EU916708, EU916709, EU916710 and EU916711. Phylogenetic analysis based on 16S ribosomal DNA (rDNA) sequences showed that the isolates of hydrocarbon-degrading bacteria from the intestinal tract of fish were closely related to Aeromonas veronii, Aeromonas sobria, Shewanella sp. and Acinetobacter sp. (Table 2). From the intestinal tract contents of the Baltic cod, the hydrocarbon-degrading bacteria belonged to Aeromonas veronii; from plaice, to Aeromonas sobria and Acinetobacter sp.; and from the Baltic herring, to Shewanella sp. and Acinetobacter sp. Ringø et al. (2006) isolated and identified the following Gram-negative bacteria from Atlantic cod, which are not normally isolated from the gastrointestinal tract of fish: Acinetobacter johnsoni, Chryseobacterium spp., Ochrobactrum spp., Psychrobacter cibarius, P. fozii, P. glacincola, P. luti, P. psychrophilus and Sejongia antarctica. In the current study, we isolated and identified Acinetobacter spp. bacteria able to degrade oil hydrocarbons from the contents of the intestinal tract of plaice and the Baltic herring. Our data showed that the hydrocarbon-degrading bacteria from the intestinal tract of the Baltic cod and plaice belong to the species Aeromonas veronii and Aeromonas sobria. A recent review by Sugita and Ito (2006) was devoted to the phylogenetic analysis, based on 16S ribosomal DNA (rDNA) sequences, of bacteria from the intestinal tract of flounder.
The obtained data showed that 82 representative isolates were closely related to three major species of marine vibrios, the Vibrio scophthalmi-Vibrio ichthyoenteri group, Vibrio fischeri and Vibrio harveyi, with similarities of 97.2-99.8%, 96.4-100% and 98.6-99.5%, respectively. These findings indicate that the intestinal bacteria of Japanese flounder were mainly composed of the Vibrio scophthalmi-Vibrio ichthyoenteri group and Vibrio fischeri (Sugita, Ito 2006). Aeromonas isolates were obtained from fish intestines, water and sediments from an urban river (Sugita et al. 1995). The results obtained by the authors strongly suggest that aeromonads are indigenous to fish intestines and have the potential to be predominant in aquatic environments. In addition, it was reported that all of the Aeromonas isolates from the intestinal tracts of six species of freshwater-cultured fishes constituted five Aeromonas species: Aeromonas caviae, Aeromonas hydrophila, Aeromonas jandaei, Aeromonas sobria and Aeromonas veronii (Sugita et al. 1995). However, according to Dügenci and Candan (2003), Aeromonas strains were isolated from the intestinal tract of Atlantic salmon from freshwater and the Black Sea. Five of the motile Aeromonas strains isolated from freshwater and one motile Aeromonas strain isolated from Black Sea salmon were identified as A. caviae; the rest were identified as A. sobria. Shewanella putrefaciens is a Gram-negative facultatively anaerobic bacterium belonging to the family Vibrionaceae. It is assumed that S. putrefaciens is derived from the coastal marine environment. Possibly, the organism is part of the normal microflora of marine fish (Austin, Austin 1999). It is therefore surprising, but very interesting, that the bacterium was also isolated from freshwater fish (Kozinska, Pekala 2004). From the fish digestive tract we isolated Shewanella spp. able to degrade oil hydrocarbons. According to Floodgate (1984), in the marine environment Shewanella spp. and Pseudomonas spp. are often involved in the degradation of hydrocarbons. Earlier, we established that in the intestinal tract of fish from the Curonian Lagoon (the Baltic Sea basin) oil hydrocarbons are degraded by Aeromonas allosaccharophila, Aeromonas eucrenophila, Aeromonas media and Pseudomonas flavescens (Voverienė et al. 2002). We argue that hydrocarbon-degrading bacteria in the intestinal tract of fish take part in purification processes, as do bacteria in water, and may play a role in the adaptation and survival of fish chronically exposed to pollution with hydrocarbons. The controversial hypothesis that the fish gut microbiota might not be as simple as believed should stimulate bacteriologists to obtain more information on the bacteria colonizing the digestive tract of fish and their potential beneficial role.

Conclusions

1. Hydrocarbon-degrading bacteria were obtained from the intestinal tract of all investigated fish; their abundance varied from 2.40×10⁴ to 1.08×10⁵ cfu g⁻¹ between fish species and was still high.
2. The molecular species of hydrocarbon-degrading bacteria found in the digestive tract of fish from the Baltic Sea were Aeromonas veronii, Aeromonas sobria, Shewanella sp. and Acinetobacter sp.
3. We argue that hydrocarbon-degrading bacteria in the fish intestinal tract take part in purification processes, as do bacteria in water, and may play a role in the adaptation and survival of fish chronically exposed to pollution with hydrocarbons.
The control system elements of the new generation optical switching cell

In this paper, a calculation of the parameters of a new optical switch that allows us to create a next-generation all-optical non-blocking switching system without external control devices is carried out for the first time. In particular, the switching-cell control device is studied in detail. It includes a Bragg filter, a frequency detector, an optical isolator, and a former of a control signal. We also present a detailed description and numerical calculations of these devices for the third transparency window (1550 nm). The reflection and transmission coefficients are obtained, the passbands of the Bragg filter are presented, and the amplitude characteristic of the frequency detector is calculated.

Introduction

Due to the increasing requirements for the throughput of telecommunications networks, the design of optical switching systems has become an important and interesting problem [1-8]. To date, various architectures of such systems have been presented in the scientific literature [1, 4-6]. As a rule, these systems are built on 2x2 switching cells [1, 4, 5]. However, such an approach gives us unwieldy multistage systems of great complexity [1, 3, 5]. Moreover, an additional disadvantage of current optical switching systems is so-called blocking [3]. As a result, they need special control algorithms and must use buffer devices; in other words, information must be transformed from the optical to the radio domain and back. Additionally, 2x2 switches are, as a rule, based on nonlinear optical effects [1], which degrades system performance. Therefore, creating all-optical switching cells is a very important problem of modern optoelectronics. Today, there are only a few scientific papers that describe all-optical switches without external control devices based on linear optical effects only [1, 4, 6]. For example, a new type of switch based on 4x4 and 8x8 cells with relatively low complexity has been developed [7, 8]. The main feature of those optical switches is the control method. Note that, in general, a connecting path between an input and an output can be set by an external control unit, or it can be determined by control switching elements inside the system. In [4, 6] the authors presented the structure of optical cross switches controlled by an external device, but existing external control devices are electronic and obviously limit the speed of such optical switching. In [7, 8] the concept of new-generation optical switches based on control elements inside the system was proposed. However, it was only an idea, and the problem of parameter calculation was not solved in those works. Here we present a detailed description and numerical calculations of these devices in the optical domain (1550 nm): the control system including a Bragg filter, a frequency detector, an optical isolator, and a former of a control signal. All these devices are based on isotropic and anisotropic inhomogeneous, in particular stratified, structures. To calculate them, analytical methods within the scope of the linear problem [10, 12] are applied in our treatment. Note that various optical structures have already been calculated using those mathematical approaches; however, here we first of all present practical applications of the developed method, and we also obtain unique properties of the structure that allow us to create a next-generation all-optical switching cell.
The optical switching cell structure

The next-generation 4x4 all-optical switching cell containing the buffer device and the switching unit (figure 2) has been presented in [7]. The functional principle of the cell is based on the frequency separation of control and information signals. Here, an input optical signal consists of two control signals with wavelengths λc1 and λc2 and an information signal with wavelength λi. The principle of frequency separation is shown in figure 1. These signals are separated by the Bragg filter (BF) of the switching unit. In the considered case, this Bragg filter is actually an isotropic stratified periodic structure. Thus, the control signals with λc1 and λc2 are reflected by the Bragg filter and transmitted to the optical isolator (OI), while the information signal λi transmits through the structure to the displacement system (DS). After the OI, a control signal transmits to the frequency detector (FD). The displacement system is a controlled photonic crystal. Actually, it is a multilayered structure including ferromagnetic, optoelectronic, thermoelectric or ferroelectric films. The properties of such a film can be changed by an external control signal (voltage, current, thermal or optical radiation), and therefore a refraction angle can be controlled by an external signal. We choose a material controlled by an external magnetic field (ferrite-garnet) in our system; it is important that ferrite-garnets are the only magnetically controlled materials existing in the optical domain today. The frequency detector converts the frequency deviation of the control signal into its amplitude deviation. It is also an inhomogeneous isotropic structure, with a linear dependence of the reflection coefficient on frequency in the operating domain. Analogous devices in the terahertz and optical domains have been described in [13, 14]. The amplitude-modulated optical signal from the frequency detector is transmitted to the optical signal former (OSF), which includes a controlled light-emitting diode and a low-frequency control scheme. The operating principle of the displacement system (DS) is described in detail in [15], so we do not reproduce it here. This system has one input and four outputs. For effective control of the cell functioning, it is necessary to use two control signals with different values; the combination of these two control signals determines the necessary output. It is obvious that the buffer-multiplexing device must be used in this scheme, as the DS has four outputs and only one input (figure 2). The buffer device includes the optical integrated device (OIU) and four controlled delay lines (DL). The optical integrated device contains the optical multiplexer and the intermittent device. Let us consider the functional principle of the buffer device. It is assumed that an input signal arrives at one of the delay lines (DL1, DL2, DL3, DL4). This signal transmits to the integrated optical device of the buffer if there is no signal at the other inputs at the same time. The intermittent device generates a prohibition signal for the other inputs. Note additionally that the optical integrated device performs spatial multiplexing of the signals from the four inputs.

The optical isolator

The optical isolator is designed to transmit control signals in one direction only. The operation principle of the optical isolator is shown in figure 3. The isolator contains a radar, a receiver, and a stratified slab.
The main element of this isolator is a stratified anisotropic slab possessing nonreciprocal properties. The nonreciprocal behavior of the presented isolator is based on the dependence of the medium properties, in particular the reflection and transmission coefficients, on the incidence angle and on the orientation of an external magnetic field. It is important that such properties do not appear for normal or tangential orientations of the magnetic field. Let us consider the first operating principle (figure 3a). If an incident control signal passes in direction 1, the reflection coefficient is close to unity and maximum power propagates to the receiver (direction 1); the transmission coefficient is close to zero in this case. If an incident control signal passes in direction 2, the reflection coefficient is equal to zero and the signal propagates through the slab but does not reach the radiator. Now let us consider the second operating principle (figure 3b). If an incident control signal travels along direction 1, the transmission coefficient is approximately unity and the signal passes through the slab to the receiver. In the case of incidence from the receiver side (direction 2), the signal is totally reflected and does not reach the radiator. A very important characteristic of an isolator is its amplitude response. In the course of this research, numerous calculations were carried out and the optimal structure was chosen [16]. This structure includes 12 double-layered periods. The first layer is FeF2 and the second one is MnO. Here f = 1.93·10^14 Hz is taken, with an inclination angle of 30º and an angle of 50º between the incidence plane and the plane of the anisotropy axis. The dependences of the reflection coefficient on the incidence angle for this structure (the amplitude characteristic) are presented in figure 4. It is seen that the reflection coefficient is minimal (R = 0.05) at an incidence angle of 76.4º and equals unity at −76.4º for an inclination angle of 30º (the solid line). Thus this structure has isolator properties: it passes a signal only in the forward direction and does not pass a signal in the opposite direction. Figure 5 shows results demonstrating the possibility of mechanically tuning the isolator by changing the inclination angle of the anisotropy axis. The minimum shifts from 72º to 82º as the inclination angle changes from 25º to 40º. Simultaneously, the angular bandwidth narrows to 28º, 10º, and 6º, respectively. Decreasing the inclination angle below 25º or increasing it above 50º leads to the disappearance of the angle-selective properties of the structure.
Optical Frequency Detector
Here we also offer a frequency detector based on a 1D anisotropic photonic crystal (a stratified anisotropic structure). This demodulator transforms a frequency-modulated signal into an amplitude-modulated signal. Note that an analogous method has been used in the radio frequency domain, where an ordinary electrical oscillating circuit serves for the demodulation. The central frequency of the signal, in this case, must correspond to a linear interval of the resonance characteristic (figure 6).
Figure 7. Geometry of the problem: a) a cross-section of a one-dimensional crystal; b) the orientation of the axes within a single layer (the wavevector of the i-th eigenwave within a single layer, the inclination angle, and the angle between the incidence plane and the plane including the anisotropy axis are indicated).
To realize the described method we used an anisotropic stratified structure with an arbitrary orientation of the anisotropy axis.
An important fact is that the wave propagates along the slab. The functioning principle of the device is based on the so-called "penetration" effect and on the dependence of the reflection coefficient on the anisotropy axis orientation. For this purpose, in this work the dependence of the reflection coefficient on the anisotropy axis orientation is studied for the case of tangential propagation of an incident wave (along the y-axis, figure 7a). Our main aim is to study the dependence of the reflection coefficient on the orientation of the anisotropy axis for tangential wave propagation along the structure at arbitrary angles (figure 7b) and to find practical applications of the obtained properties. Let us consider the dependence of the reflection coefficient on frequency presented in figure 8. It is seen that this characteristic is resonant and, additionally, approximately linear in the resonance domain. If the carrier lies at the center frequency of this linear interval, then for a frequency deviation the reflection coefficient varies in accordance with the modulation law of the input signal within this interval. Therefore the amplitude of the reflected signal varies in accordance with the same law. Thus this structure transforms a frequency-modulated signal into an amplitude-modulated one. The amplitude-modulated oscillation can then be detected by well-known approaches. The considered principle can be used in any frequency domain, since by choosing the parameters of a structure it is possible to obtain an analogous resonance characteristic in any frequency domain. It is obvious that an analogous device can be created for the cases of a normal incident wave and of oblique incidence. In our view, the dimensions of the device should be smaller in the case of tangential wave propagation.
Bragg filters
The Bragg filter is used for separating the control signal and the information signal. Band-pass interference filters under normal incidence of an electromagnetic wave are studied. The filter is based on a plane-parallel isotropic stratified structure. The calculation of the main optical parameters for the three structures of multilayered interference filters is carried out in [17]. The layer thickness is a multiple of a quarter of the wavelength. The initial data for the calculation are taken in accordance with [17]: the central wavelength of the filter is from 780 nm to 1600 nm, the filter type by spectral characteristic is narrow band-pass, the bandwidth at the 3 dB level is less than 100 nm, the level of assured decay is more than 40 dB, and the total thickness of the multilayered structure is 0.2-1.5 mm. The medium is isotropic, lossless, and without frequency dispersion. The geometry of the problem is shown in figure 9. The results of the interference multilayered filter calculations are discussed in this section of the paper. For the calculation, the characteristic matrix method [10,12] and the method of needle variation of the refractive index of the coating layers' material [17] are used to obtain the best shape of the amplitude-frequency characteristic in the passband.
Figure 9. The multilayered optical structure of the filter [9]: the indices of refraction, the layer thicknesses, and the angle of wave incidence are indicated.
In the course of the research, using [17,18], three filter structures were obtained with different numbers of layers and different sequences of materials with different refractive indices.
A filter with structure 1 contains 21 layers: quarter-wavelength layers of a higher-index material alternate with layers of a lower-index material on a base sheet of glass with the refractive index n = 1.52. The filter structure 2 (1.35M1.07L0.7M1.27H2(LH)L4H7(LH)L2H8(LH)L4H8(LH)L2H8(HL)4H3(LH)LG) is obtained using the method of [17,18]. Here M is a material with refractive index 1.32, H is a material with the greater refractive index 2.16, L has refractive index 1.46, and G is a glass substrate with refractive index 1.52. Applying the method of [17,18] to the structure described in that work, we obtain the resulting filter with structure 3, containing 82 layers, where H is a material with refractive index n = 2.16, L is a material with refractive index n = 1.46, and G is the glass substrate with n = 1.52. Figure 10 shows the amplitude-frequency characteristic of the filter based on structure 3. This filter is characterized by the following parameters: half-width … nm, decimal width … nm, slope of the characteristic …, bandwidth at the 1 dB level … nm, bandwidth at the 3 dB level … nm, minimum insertion loss 0.102 dB, maximum insertion loss 0.245 dB, ripple of the amplitude-frequency characteristic 0.143 dB, level of assured fading 103.54 dB, adjacent channel isolation 24.7 dB, non-adjacent channel isolation 65.7 dB, thickness of the stratified slab 0.206 mm, and maximum transmittance 97%. The filter with structure 3 has the best characteristics of the calculated ones: it has the most even amplitude-frequency characteristic in the passband, see figure 11. Figure 11 also presents the bandwidths of the obtained filters (1-3).
Conclusion
In this work the control system of 4x4 next-generation switching cells is presented for the first time. These new cells are all-optical and self-tuning and can function without an external control system. The development of all-optical switching elements is an important stage in the transition to all-optical communication networks. The internal control system of the cell includes a Bragg filter, a frequency detector, an optical isolator, and a control signal former. These devices are described in our paper and the calculations of their parameters are presented. First, the functional principles of these devices are considered, and the amplitude characteristics of the proposed structures in the third transparency window are obtained.
Figure 11. Bandwidths of the obtained filters: 1) the filter with structure 1; 2) the filter with structure 2; 3) the filter with structure 3.
The optimal structure parameters and the reflection and transmission coefficients of the optical isolator are obtained. The reflection coefficient is 0.05 at an incidence angle of 76.4º and equals unity at an incidence angle of −76.4º. The optimal parameters and the amplitude-frequency dependence of the frequency detector are calculated as well. We obtained that in the considered case the slab must contain 12 double-layered periods, and the amplitude-frequency dependence is approximately linear in the resonance domain. The passband of the Bragg filter is presented, and the amplitude characteristic of the frequency detector is calculated. We obtained that the bandwidth of the Bragg filter structure at the 1 dB level is … nm and the bandwidth at the 3 dB level is … nm.
2019-11-28T12:31:07.593Z
2019-11-01T00:00:00.000
{ "year": 2019, "sha1": "d47e3aadd0ff3cdcfb5dc14db5da2336ca3ee626", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1368/2/022002", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "0bcfc11fa7cbdad3eab681ad58dd96b22bbfbf1a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
270652831
pes2o/s2orc
v3-fos-license
Support of the SDGs as a New Approach to Financial Risk Management in Responsible Universities in Russia
The purpose of this paper was to reveal the influence of the support of the sustainable development goals (SDGs) on the financial risks of responsible universities in Russia. This paper fills the gap in the literature regarding the unknown consequences of SDGs' support by responsible Russian universities for their financial risks. Based on the experience of the top 30 most responsible Russian universities in 2023, we used regression analysis to compile a model for their financial risk management. This model mathematically describes the cause-and-effect relationships of financial risk management in responsible Russian universities. This paper offers a new approach to financial risk management in responsible Russian universities, in which financial risks to Russian universities are reduced because universities accept responsibility to the state and to private investors. A feature of the new approach is that the effective use of university funds is ensured not by cost savings but by support of the SDGs. The potential for a reduction in financial risk in responsible universities in Russia under alternative approaches to financial risk management is disclosed. The proposed new approach can potentially raise the aggregate incomes of responsible universities in Russia to a larger extent than the existing approach. The main conclusion is that the existing approach to financial risk management in Russian universities is based on low-efficiency managerial measures that risk burdening universities. This burden could be prevented with the newly developed approach to financial risk management in responsible universities in Russia through support of the SDGs. The theoretical significance lies in clarifying the specific list of the SDGs whose support makes the largest contribution to reducing financial risks for the universities: namely, SDG 4, SDG 8, and SDG 9. The practical significance is that the new approach will allow for full disclosure of the potential reduction in financial risks in responsible universities in Russia in the Decade of Action (2020-2030). The managerial significance is that the proposed recommendations will allow improved financial risk management in Russian universities through optimization of the support of the SDGs.
Introduction
Financial risks have strong effects on the activities of modern organizations, and they have specific features in each sector of the economy. The specifics for the Russian higher education sector consist in the domination of state universities. From a financial perspective, this means that a large share of Russian universities' resources is composed of state subsidies. Historically, Russian universities were created using the national budget, and their full state provision was planned. Recent decades have seen a full-scale market reformation of the Russian economy, including the higher education sector. The formation of market relations in this sphere initiated the reconsideration of the strategy of financing Russian universities' activities.
To reduce the burden on the state budget and increase the total volume of financing for university activities, the inflow of private investments into the sphere of higher education has been stimulated in Russia in recent years. Private investments mainly take the form of payments for higher education services provided by universities and of university innovations and technologies purchased by private businesses. Financial risks to universities are treated as a reduction in the volume of financing of universities' activities from various sources. The essence of financial risk management in universities consists in raising their attractiveness for state financing and private investments. The Decade of Action introduced uncertainty into universities' financial risk management, as the practice of achieving the Sustainable Development Goals (SDGs) gained significant popularity. To support the UN (2024a) Global Initiative, respectable organizations such as THE (2024) began compiling international rankings of universities using criteria of how they support the SDGs. To preserve global competitiveness, universities had to conform to the new criteria set by international university rankings, so they started supporting the SDGs. Universities that support the SDGs can be called "responsible universities" because they accept responsibility for the sustainable development of socioeconomic systems. The problem is that, while striving to strengthen global competitiveness, universities may face growing financial risks, because support of the SDGs is connected to additional expenditures by universities. State regulators can potentially treat expenditures for the achievement of the SDGs as unplanned expenditures and ineffective use of the budget assets provided to universities. A reduction in the economic effectiveness of the management of responsible universities may become a reason for a decrease in their state financing and a redistribution of assets in favor of more effective universities. Private investors focus on price. Universities' support of the SDGs can raise the cost of the paid higher education services provided by universities and increase the cost of the university innovations and technologies available for purchase by private businesses. This may cause a reduction in the price competitiveness of the educational and research services provided by responsible universities, a reduction in demand, and a decrease in sales volume. Consumers of these services would then shift to alternative local and foreign suppliers of these services, i.e., universities from other countries. Striving to solve these problems, this paper seeks to determine the influence of support of the SDGs on the financial risks of responsible universities in Russia. This goal is achieved with the help of the two following tasks. The first is to identify the cause-and-effect relationships of financial risk management in responsible Russian universities. The second is to identify the potential for a reduction in financial risks in responsible universities in Russia through alternative approaches to financial risk management. This paper fills a gap in the literature connected with the unknown influence of the support of the SDGs on financial risks to universities in Russia. The paper clarifies this contribution and offers a new approach to financial risk management for Russian universities: through support of the SDGs.
The Existing Approach to Financial Risk Management in Responsible Universities in Russia
The model of financing state universities in Russia was changed during the market reformation of the Russian economy as a whole and of the Russian system of higher education in particular. Several decades ago, only state universities existed in Russia, and all of them were fully funded from the state budget. Market reforms led to the emergence of private universities and to a reduction in the state financing of state universities (Zheleznov 2023). This created financial risks for state universities in Russia, for the capabilities and volume of state financing decreased while the mechanisms for attracting private financing were not fully developed. The Russian model of financing state universities is unique: it is completely different from the systems of university financing in Europe, Asia, and the USA, where private universities dominate in developed countries (Kelchen et al. 2024). There, state financing of university activities is performed not directly but indirectly, through the provision of tuition grants and educational loans. In Russia, as in many other dynamically developing countries, there exists state procurement from universities for the training of personnel. In the Russian model, the Ministry of Science and Higher Education determines which specialties are in the highest demand in each region and in the country as a whole and allocates state-funded places to universities, at which students' tuition is financed from the federal budget. Alongside this, there is paid education, paid for by students themselves and/or employers (targeted, corporate education) (Mao et al. 2024). In this regard, the term "investor in higher education" is introduced. It is a private subject financing university activities, namely students and employers who pay for higher education services, and companies that receive university innovations (Huňady et al. 2023). For comparison, the USA has private commercial institutions that rely on investors, but this is not a very popular practice (Blume-Kohout 2023). This paper is based on the concept of financial risk management in universities, the provisions of which are given in the works of Bogoviz et al. (2018), Kato et al. (2024), and Zarova and Tursunov (2022). According to this concept, the financial risk to state universities, which is the research object of this paper, is a reduction in the volume of their financing (Dyrstad et al. 2024; Krieger 2024). However, the following should be differentiated:
• Risk of a reduction in the total financing of the activities of universities, whose structure is based on state financing from the national budget (Chairassamee and Hean 2023; Turginbayeva and Domalatov 2019);
• Risk of a reduction in extra-budgetary financing of universities' activities from the funds of private investors: consumers (individual and corporate) of higher education services and B2B consumers of university innovations and technologies (Litvinova 2022; Moll 2023).
The existing approach to financial risk management in responsible universities of Russia involves maximization of the effectiveness of universities' activities through an increase in the results in the spheres of education (Tovmasyan et al. 2022), research (Bogoviz and Mezhov 2015; Fukugawa 2023), and international activities (Petrenko and Stolyarov 2019), either in combination with a reduction in the wages of academic staff in the interests of cost reduction, or in combination with an increase in the wages of academic staff as a means of maximizing the above results (Przhedetskaya and Borzenko 2019).
Support of the SDGs in Responsible Universities: International Practice and Russian Experience
Currently, international practice is dominated by responsible universities, which are treated as universities that accept responsibility to:
• The state, for the effectiveness of spending the provided budget funds and for the economic, environmental, and social consequences of universities' activities and the practical implementation of the created innovations (Dyrstad et al. 2024; Thawesaengskulthai et al. 2024);
• Private investors, for the quality and affordability of the educational and research services provided by universities (Hahn et al. 2024; Khasanov et al. 2019).
Thus, a responsible university is a university that in the course of its activities supports the SDGs, publishes the corresponding reports, and, accordingly, is presented in university rankings, including international ones, connected with the achievement of the SDGs. Certain recent research, e.g., Athari et al. (2024), showed that national ESG is very important, although the most objective rankings are international university rankings, among which an important role belongs to the respectable THE (2024) ranking. Accordingly, an irresponsible university could be defined as a university that does not support the SDGs and/or does not publish the corresponding reports on sustainable development and is absent from university rankings, including international ones, connected with the achievement of the SDGs. That is why we suggest using the presence and position in the THE "Impact Rankings 2023" (2024) as the criterion for differentiating responsible and irresponsible universities. Our literature review (Kyambade et al. 2024a; Ncube 2023; Preuss et al. 2023) revealed a high level of support of the SDGs among Russian universities, which proves that responsible state universities dominate Russia's higher education system. The existing publications also note the significant contribution of universities' support of the SDGs to the growth of their global competitiveness and the expansion of their international activities (Kyambade et al. 2024b; Marchigiani and Garofolo 2023; Zhao and Cheah 2023). However, the consequences of responsible universities' support of the SDGs for their financial risks are insufficiently elaborated and largely unknown, which is a gap in the literature. This paper strives to fill the revealed gap, posing the following research question:
RQ: How does support of the SDGs by responsible universities in Russia influence their financial risks?
Certain literature sources, Bock et al. (2018) and Mántica (2022), put forward the assumption that support of the SDGs by responsible universities can raise their financial risks, for it is connected with additional expenditures. Contrary to them, Abankina et al. (2018) and Liu and Gao (2021) present the point of view that support of the SDGs by responsible universities can reduce their financial risks because it raises the loyalty of all interested parties: consumers (students and employers), business partners, state regulators, and employees (academic staff), whose labor efficiency and quality of results increase. Based on this, the following hypothesis is proposed in this paper:
H: Support of the SDGs ensures the reduction in financial risks of responsible universities in Russia.
To check this hypothesis, we performed econometric modeling of the influence of the activity of SDG support, as an innovative managerial practice, together with traditional managerial practices, on the financial risks to responsible universities in Russia.
Materials and Methods
This research sample contains the top 30 responsible Russian universities, selected by the criteria of their presence among the top 1000 universities in the world and the most active support for the SDGs in the THE "Impact Rankings 2023" (2024). The sample structure in terms of the position of responsible Russian universities in this ranking is shown in Figure 1. As shown in Figure 1, 10% of the sample (3 universities) are in the category "201-300" of the considered ranking. The category "301-400" includes 13.3% of the sample (4 universities). The category "401-600" contains 26.7% of the sample (8 universities). The category "601-800" contains 33.3% of the sample (10 universities); this is the largest category among responsible Russian universities. The category "801-1000" has 16.7% of the sample (5 universities). The sample of the top 30 responsible Russian universities in 2023, which was studied in this paper, is presented in Table A1. Measures of financial risk management implemented by the top 30 most responsible universities in Russia in 2023 are shown in Table A2. Financial risks of the top 30 most responsible Russian universities in 2023 are characterized in Table A3. The activity of the implementation of the SDGs by the top 30 most responsible universities in Russia in 2023 is indicated in Table A4.
To solve the first task, which involves revealing the cause-and-effect relationships of financial risk management in responsible Russian universities, we performed a factor analysis of this management. The task was solved with the help of regression analysis. This method was used to identify a high-precision regression dependence of the indicators of financial risks, the university's revenues from all sources (Ufr1) and the university's revenues from extra-budgetary sources (Ufr2), according to MIREA, MIC (2024), on a system of factors, which includes, first, the alternative financial risk management measures:
• Level of implementation of the SDGs (Resp), score 1-100 (according to THE 2024);
• Average score of the Unified State Examination of accepted students (Ex1);
• Volume of R&D per one member of academic staff (Ex2);
• Share of foreign students in the total number of students (Ex3);
• Academic staff wages/average wages in the region's economy ratio (Ex4).
Second, it includes the detailed practices of support of the SDGs in universities that are widespread among responsible Russian universities (the share of universities implementing them exceeds 10% of the total sample according to THE 2024): the level of support of SDG 4, SDG 5, SDG 8, SDG 9, SDG 11, and SDG 17 by responsible universities. Hypothesis H is deemed proven if the regression coefficient at the factor variable Resp is positive in the regression equations for both resulting variables (Ufr1 and Ufr2). We also selected the SDGs at which the regression coefficients are positive in the regression equations for both resulting variables (Ufr1 and Ufr2). The reliability of the regression analysis results was checked with the help of the F-test and t-test. For the most complete consideration and the most correct reflection of the cause-and-effect relationships, we included the macro-level factors in the research model. Among the macro-factors that potentially influence the financial risks to universities, this paper considers the following: economic (EGl), social (SGl), and political (PGl) globalization (according to KOF 2024), as well as state financing of higher education (PETE).
To calculate the value of the indicator PETE, we took the product of "expenditure on tertiary education (% of government expenditure on education)" (World Bank 2024a) and "government expenditure on education, total (% of GDP)" (converted into shares of 100, World Bank 2024a). Since the macro-level factors influence the system of higher education as a whole, we evaluated their effect not on specific universities but on the general position of the three leading Russian universities in the THE ranking (TopUTHE, from the materials of the UN 2024b). The values of the selected indicators are presented in Table 1. Since the period of the Russian economy for which the required statistical data are available is relatively short, the sample is eight years. Therefore, to find the connection between the financial risks to universities and the macro-environment of the Russian economy, we selected the correlation analysis method. This method was used to identify the interconnection between TopUTHE and EGl, SGl, PGl, and PETE. The indicator TopUTHE was selected for this research because it reflects the involvement of universities in sustainable development and their interest in the achievement of the SDGs to improve their position in international university rankings. In this way, the connection between the development of responsible universities in Russia, the financial risks to state universities, and globalization in its various directions was determined. We also conducted a correlation analysis of the interdependence of PETE and EGl, SGl, and PGl. This demonstrates the connection between the various directions of globalization and Russian state universities' financial risks. A positive connection is demonstrated by positive values of the correlation coefficients, and a negative connection by negative values of the correlation coefficients. For the systemic reflection of the aggregate results of the econometric research, we used the structural equation modeling (SEM) method. The research model has the following form (Figure 2). In the SEM research model in Figure 2, E denotes errors, i.e., variation in the indicators' values. To solve the second task, which involved identifying the potential of financial risk reduction in responsible Russian universities under alternative approaches to financial risk management, we used the obtained regression equations to forecast the consequences for the resulting variables (Ufr1 and Ufr2), which reflect financial risks, from the following: (1) maximization of SDGs' support by responsible universities (Resp = 100) and (2) maximization (achievement of the maximum values in the sample) of the values of the control variables (Ex1-Ex4). We selected the optimal combination of the levels of support for the selected top-priority SDGs for the maximization of SDGs' support by responsible Russian universities.
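As an illustration of the estimation step just described, the following Python sketch fits the two regressions (Ufr1 and Ufr2 on Resp and Ex1-Ex4) by ordinary least squares. The data rows are invented placeholders standing in for the appendix tables, which are not reproduced here, and the variable names simply mirror the paper's notation; the sketch shows the mechanics of the check on hypothesis H, not the paper's actual estimates.

```python
import numpy as np

# Placeholder observations standing in for Tables A2-A3 (invented values).
# Columns: Resp, Ex1 (USE score), Ex2 (R&D per staff, thousand RUB),
#          Ex3 (% foreign students), Ex4 (wage ratio, %).
X = np.array([
    [77.2, 84.1, 1890.0, 21.3, 240.0],
    [69.5, 72.0,  950.0, 12.1, 210.0],
    [62.3, 68.4,  610.0,  9.8, 195.0],
    [71.8, 76.5, 1320.0, 15.6, 225.0],
    [66.0, 70.2,  780.0, 11.0, 205.0],
    [74.4, 80.3, 1540.0, 18.2, 233.0],
    [60.1, 65.9,  540.0,  8.5, 190.0],
    [72.9, 78.8, 1410.0, 16.9, 228.0],
])
ufr1 = np.array([9.8, 6.1, 4.0, 7.5, 5.0, 8.6, 3.6, 7.9])  # all-source revenues, bln RUB
ufr2 = np.array([4.2, 2.4, 1.5, 3.1, 1.9, 3.6, 1.3, 3.3])  # extra-budgetary, bln RUB

A = np.column_stack([np.ones(len(X)), X])  # design matrix with intercept

def ols(A, y):
    """Ordinary least squares: coefficient vector and R^2."""
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2

names = ["const", "Resp", "Ex1", "Ex2", "Ex3", "Ex4"]
for label, y in (("Ufr1", ufr1), ("Ufr2", ufr2)):
    beta, r2 = ols(A, y)
    coefs = ", ".join(f"{n}={b:+.4f}" for n, b in zip(names, beta))
    print(f"{label}: {coefs}  (R^2 = {r2:.2%})")
# Hypothesis H is supported in this setup if the Resp coefficient is
# positive in both fitted equations.
```

A statistics package such as statsmodels would additionally report the F- and t-statistics shown in Tables 2 and 3; plain least squares is used here only to keep the sketch dependency-light.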
Factor Analysis of Financial Risk Management in Responsible Universities of Russia
To solve the first task, which involves determining the cause-and-effect relationships of financial risk management in responsible Russian universities, we conducted a factor analysis of this management in the top 30 most responsible Russian universities in 2023 (from Table A1). The regression analysis of the dependence of the indicators of financial risks on the alternative measures of financial risk management is given in Table 2. The results obtained (Table 2) show that the risk of a reduction in the total financing of Russian universities' activities is 67.43% determined by the implementation of the considered measures of financial risk management. In turn, the risk of a reduction in the extra-budgetary financing of universities' activities from private investor funds in Russia is 71.32% determined by the implementation of the considered measures of financial risk management. The F-test was passed in both cases at the significance level of 0.01. Standard errors are relatively small, which supports the correctness of the regression analysis results. However, the t-test was passed for the resulting variable Ufr1 only for Resp (at the significance level of 0.05) and Ex1 (at the significance level of 0.15), and for the resulting variable Ufr2 for Resp (at the significance level of 0.05) and Ex1 (at the significance level of 0.10). Regression analysis also assessed how the financial risk indicators depend on the detailed SDG support practices in universities, as shown in Table 3. The results obtained (Table 3) show that the risk of a reduction in the total financing of Russian universities' activities is 69.72% determined by the support for the SDGs in universities. In turn, the risk of a reduction in the extra-budgetary financing of Russian universities' activities from private investor funds is 70.29% determined by the support for the SDGs in universities. The F-test was passed at the significance level of 0.05 for Ufr1 and at the significance level of 0.01 for Ufr2. Standard errors are relatively small, which supports the correctness of the regression analysis results. However, the t-test was passed for the resulting variable Ufr1 only for SDG 4 (at the significance level of 0.05), SDG 8 (at the significance level of 0.20), SDG 9 (at the significance level of 0.01), and SDG 17 (at the significance level of 0.20). For the resulting variable Ufr2, the t-test was passed only for SDG 4 (at the significance level of 0.05), SDG 5 (at the significance level of 0.10), SDG 8 (at the significance level of 0.01), and SDG 11 (at the significance level of 0.25). The established regression dependencies allowed compiling a model of financial risk management in responsible Russian universities, which is the following system of multiple linear regression equations (1). According to model (1), growth of the level of implementation of the SDGs by Russian universities by 1 point leads to an increase in Russian universities' revenues from all sources of RUB 0.2421 billion and an increase in Russian universities' revenues from extra-budgetary sources of RUB 0.1144 billion. An increase in the "average score of the Unified State Examination of accepted students" by 1 point leads to an increase in Russian universities' revenues from all sources of RUB 0.2637 billion and an increase in Russian universities' revenues from extra-budgetary sources of RUB 0.1357 billion.
An increase in the "volume of R&D per one member of academic staff" by RUB 1 thousand leads to Russian universities' revenues from all sources by RUB 0.00001 billion and a decrease in Russian universities' revenues from extra-budgetary sources by RUB 0.0003 billion.An increase in the "share of foreign students in the total number of students" by 1% leads to a decrease in Russian universities' revenues from all sources by RUB 0.0370 billion and an increase in Russian universities' revenues from extra-budgetary sources by RUB 0.0470 billion. An increase in "academic staff wages/average wages in the region's economy ratio" by 1% leads to Russian universities' revenues from all sources by RUB 0.0129 billion and an increase in Russian universities' revenues from extra-budgetary sources by RUB 0.0177 billion. The results obtained mean that Russian universities' research activities increase their financial risks, and Russian universities' international activities have a contradictory effect on their financial risks. The detailed analysis of the dependence of the financial risks of Russian universities on the implementation of concrete SDGs showed that the growth of the activity of Russian universities' support for SDG 4 by 1 point leads to an increase in Russian universities' revenues from all sources by RUB 0.0609 billion and an increase in Russian universities' revenues from extra-budgetary sources by RUB 0.0304 billion.Growth of the activity of Russian universities' support for SDG 5 by 1 point leads to an increase in Russian universities' revenues from all sources by RUB 0.0215 billion and an increase in Russian universities' revenues from extra-budgetary sources by RUB 0.0297 billion. Growth of the activity of Russian universities' support for SDG 8 by 1 point leads to an increase in Russian universities' revenues from all sources by RUB 0.0306 billion and an increase in Russian universities' revenues from extra-budgetary sources by RUB 0.0277 billion.Growth of the activity of Russian universities' support for SDG 9 by 1 point leads to an increase in Russian universities' revenues from all sources by RUB 0.0649 billion and an increase in Russian universities' revenues from extra-budgetary sources by RUB 0.0306 billion. Growth of the activity of Russian universities' support for SDG 11 by 1 point leads to an increase in Russian universities' revenues from all sources by RUB 0.0164 billion and an increase in Russian universities' revenues from extra-budgetary sources by RUB 0.0149 billion.Growth of the activity of Russian universities' support for SDG 17 by 1 point leads to an increase in Russian universities' revenues from all sources by RUB 0.0747 billion and an increase in Russian universities' revenues from extra-budgetary sources by RUB 0.0184 billion. Thus, regression coefficients at the factor variable Resp are positive in regression equations for both resulting variables (Ufr1 and Ufr2) and the connection between the variables is statistically significant.Therefore, hypothesis H is deemed proven.It was established that the connection between financial risks to Russian universities and alternative measures of the management of these risks is unstable-it is statistically significant only with educational activities, while the connection with other measures is statistically insignificant, contradictory, and even negative. 
We also selected the SDGs at which the regression coefficients are positive in the regression equations for both resulting variables (Ufr1 and Ufr2). These are SDG 4, SDG 8, and SDG 9. Thus, their support should be the focus of the efforts of responsible Russian universities to increase the effectiveness of the management of their financial risks. For the most complete consideration and correct reflection of the cause-and-effect relationships, we took into account the macro-level factors from Table 1. The results of their correlation analysis are shown in Figure 3. The results in Figure 3 show that universities' involvement in sustainable development and their interest in the achievement of the SDGs to improve their position in international university rankings increased in the course of political globalization (correlation equals 0.4360), but decreased in the course of economic (correlation equals −0.1245) and social (correlation equals −0.2414) globalization, as well as with the growth of state budget financing of higher education (correlation equals −0.0141). Financial risks to state universities in Russia (which are connected with the reduction in the volume of state budget funding of higher education) are reduced due to economic (correlation is 0.3959), social (correlation is 0.0038), and political (correlation is 0.1286) globalization. For the systemic reflection of the aggregate results of the econometric research, they were joined in one SEM model (Figure 4). The SEM model systematized the results obtained and allowed for the following generalized conclusions. First, the factors of support of the SDGs are much more differentiated and have a larger and non-contradictory influence on the reduction in financial risks to universities in Russia than the alternative factors. Second, the micro-level factors (support of the SDGs and the alternative factors) determine the financial risks to universities in Russia to a larger extent. Third, among the macro-level factors, the largest influence on the development of responsible universities in Russia is exerted by political globalization, and on the reduction in financial risks to Russian universities, by economic globalization.
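The macro-level step can be reproduced with a few lines of pandas. In the sketch below, the yearly series are invented placeholders standing in for Table 1 (the real KOF and World Bank values are not repeated here); only the computation, pairwise Pearson correlations over the eight-year window, mirrors the paper's design.

```python
import pandas as pd

# Placeholder yearly data for 2015-2022 standing in for Table 1 (invented values).
df = pd.DataFrame({
    "TopUTHE": [301, 296, 288, 275, 268, 259, 251, 246],  # position of leading universities
    "EGl":     [63.1, 62.8, 62.0, 61.5, 60.9, 59.7, 58.8, 57.9],  # economic globalization
    "SGl":     [68.4, 68.9, 69.3, 69.8, 70.1, 70.6, 71.0, 71.2],  # social globalization
    "PGl":     [84.2, 84.6, 85.0, 85.5, 85.9, 86.1, 86.4, 86.7],  # political globalization
    "PETE":    [0.81, 0.79, 0.78, 0.76, 0.75, 0.74, 0.73, 0.72],  # state financing of HE
}, index=range(2015, 2023))

# Pairwise Pearson correlations of the ranking position and PETE with the
# globalization indices, as described in the methods section.
print(df.corr().loc[["TopUTHE", "PETE"], ["EGl", "SGl", "PGl"]].round(4))
```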
Potential of Financial Risk Reduction in Responsible Russian Universities in Case of Alternative Approaches to Financial Risk Management
To solve the second task, which involved determining the potential of the reduction in financial risks in responsible Russian universities under alternative approaches to financial risk management, we used the obtained regression equations to compile forecasts of the consequences for the resulting variables (Ufr1 and Ufr2) in each of the approaches. The forecasts were compiled for the period of the Decade of Action, i.e., until 2030. Based on the results of the econometric modeling, we propose a new approach to financial risk management in responsible Russian universities, based on support for the SDGs. In this new approach, the responsible Russian universities' financial risks are reduced due to their accepting responsibility to the state for the economic, environmental, and social consequences of the universities' activities and the practical implementation of the created innovations, as well as accepting responsibility to private investors for the quality and affordability of the educational and research services provided by universities. A feature of the offered approach, and its essential difference from the existing one, is that the high effectiveness of spending the provided budgetary and extra-budgetary funds and, accordingly, the universities' high investment attractiveness is ensured not by saving but by support of the SDGs. In the proposed approach, the focus is on responsible universities' support for SDG 4, through an increase in the quality of higher education services and providing wide groups of the population with the opportunity for life-long learning; SDG 8, through the development of applied skills in students for successful employment in their specialty and career-building by university graduates; and SDG 9, through the creation of breakthrough applied innovations for the Russian economy in support of the strengthening of strategic academic leadership and Russian technological sovereignty. The perspective of improving financial risk management in responsible Russian universities through the optimization of SDGs' support reflects the forecasted consequences for financial risks from the maximization of the support of the SDGs by responsible universities (Resp = 100), which is shown in Figure 5. As shown in Figure 5, growing the activity of Russian universities' support for the SDGs by 42.93% (from 69.96 points in 2023 to 100.00 points by 2030) will lead to an increase in responsible Russian universities' revenues from all sources by 106.91% (from RUB 6.80 billion in 2023 to RUB 14.07 billion by 2030 in 2023 constant prices). Revenues of responsible Russian universities from extra-budgetary sources will grow by 120.73% (from RUB 2.85 billion in 2023 to RUB 6.28 billion by 2030). For this to be implemented in practice, the following recommendations on the improvement of financial risk management in Russian universities through optimization of SDGs' support are offered (Figure 6).
According to Figure 6, to improve financial risk management in Russian universities through the optimization of support of the SDGs (achievement of the growth of universities' revenues according to the control values from Figure 2), the following is recommended:
• Growth of the activity of support of SDG 4 by 70.98% (from 25.36 points in 2023 to 43.37 points by 2030);
• Growth of the activity of support of SDG 8 by 130.70% (from 40.90 points in 2023 to 94.36 points by 2030);
• Growth of the activity of support of SDG 9 by 233.56% (from 29.98 points in 2023 to 100.00 points by 2030).
For comparison, let us also consider the perspective of the reduction in financial risks in responsible Russian universities through the development of the potential of the existing approach to financial risk management. This involves maximization (achievement of the maximum values in the sample) of the values of the control variables (Ex1-Ex4). The forecast of financial risks to Russian universities at the maximization of the results of implementing the current managerial measures is shown in Figure 7. As shown in Figure 7, maximization of the results of implementing the current managerial measures involves the following:
• An increase in the "average score of the Unified State Examination of accepted students" by 34.43% (from 72.25 points in 2023 to 97.13 points by 2030);
• An increase in the "volume of R&D per one member of academic staff" by 480.22% (from RUB 1158.22 thousand in 2023 to RUB 6720.25 thousand by 2030);
• An increase in the "share of foreign students in the total number of students" by 127.26% (from 14.29% in 2023 to 32.48% by 2030);
• An increase in the "academic staff wages/average wages in the region's economy ratio" by 44.42% (from 218.79% in 2023 to 315.97% by 2030).
For these reasons, responsible Russian universities' revenues from all sources will grow by 104.35% (from RUB 6.80 billion in 2023 to RUB 13.90 billion by 2030 in 2023 constant prices). Responsible Russian universities' revenues from extra-budgetary sources will grow by 146.48% (from RUB 2.85 billion in 2023 to RUB 7.02 billion by 2030).
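Both scenario forecasts above are straightforward compound projections from the 2023 baselines. The short sketch below reproduces that arithmetic using only the baseline revenues and growth percentages quoted in the text.

```python
# 2023 baselines (bln RUB, constant 2023 prices) and forecast growth to 2030,
# as quoted in the text for the two approaches.
BASE_TOTAL, BASE_EXTRA = 6.80, 2.85

scenarios = {
    "New approach (Resp = 100)":       (106.91, 120.73),  # % growth: total, extra-budgetary
    "Existing approach (max Ex1-Ex4)": (104.35, 146.48),
}

for name, (g_total, g_extra) in scenarios.items():
    total_2030 = BASE_TOTAL * (1 + g_total / 100)
    extra_2030 = BASE_EXTRA * (1 + g_extra / 100)
    print(f"{name}: total {total_2030:.2f} bln RUB, "
          f"extra-budgetary {extra_2030:.2f} bln RUB")
```

Running it recovers the 2030 figures cited above (RUB 14.07 and 6.28 billion for the new approach, RUB 13.90 and 7.02 billion for the existing one, up to rounding).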
Thus, the newly developed approach, which involves an increase in SDGs' support, has a much larger potential for financial risk reduction in responsible Russian universities than the alternative existing approach to financial risk management, because it increases universities' revenues from all sources to a larger extent. The development of this potential in practice in the period until 2030 can be facilitated by the selected optimal combination of the levels of support for the top-priority SDGs.
Discussion
This paper's contribution to the literature consists in the development of the concept of financial risk management in universities through clarification of the consequences of SDGs' support by responsible universities in Russia for their financial risks. This paper continues the scientific discussion of Bogoviz et al. (2018), Kato et al. (2024), and Zarova and Tursunov (2022). The influence of university management measures on the financial risks of universities in Russia, as estimated in the existing literature and as specified in this paper, is shown in Table 4. As shown in Table 4, confirming the results of Tovmasyan et al. (2022), the results obtained in this paper proved that the development of the educational activities of Russian universities does reduce their financial risks. However, the effectiveness of the other measures of financial risk management, which conform to the existing approach to the management of these risks, turned out to be low. Thus, unlike Bogoviz and Mezhov (2015) and Fukugawa (2023), we established that the development of research activities does not reduce but instead raises financial risks. This is demonstrated by the obtained negative values of the regression coefficients at the factor variable Ex2 in model (1). Unlike Petrenko and Stolyarov (2019), we established that the development of the international activities of Russian universities has a contradictory effect on their financial risks, reducing the risk of a reduction in the extra-budgetary financing of universities' activities from the funds of private investors, but raising the risk of a reduction in the total financing of universities' activities. Unlike Przhedetskaya and Borzenko (2019), we proved that a change in the wages of academic staff in Russian universities does not have a statistically significant effect on their financial risks (for this factor variable in model (1), the t-test was not passed). Unlike Bock et al. (2018) and Mántica (2022), we substantiated that support for the SDGs does not raise but reduces financial risks. This constituted the proof of hypothesis H, confirming Abankina et al. (2018) and Liu and Gao (2021). Thus, support of the SDGs was set as the basis of this paper's new approach to financial risk management in responsible universities in Russia, which is the foundation of the scientific novelty and originality of this research.
The scientific novelty and value of the authors' results and recommendations in this paper consist in the development of a new approach to the management of financial risks to Russian universities. The essential difference between the newly offered approach and the existing one is the inclusion of an additional factor, support of the SDGs, in the system of financial risk management measures, which otherwise contains an increase in the results of educational, research, and international activities together with the wages of academic staff. In this paper, the significant contribution of SDGs' support to the reduction in financial risks in Russian universities was substantiated for the first time, and the necessity of active support of the SDGs by Russian universities to reduce their financial risks was justified.
Conclusions
The set goal was achieved: we revealed and proved the positive influence of support of the SDGs on the financial risks of responsible universities in Russia, which is expressed in the reduction in these risks. Therefore, the obtained new scientific results filled the literature gap connected with the unknown influence of support of the SDGs on financial risks to Russian universities. Given the revealed positive contribution, a new approach to financial risk management of Russian universities was offered: through support of the SDGs. This paper's main results are as follows. First, we identified the cause-and-effect relationships of financial risk management in responsible universities in Russia. Based on the leading experience of the top 30 most responsible Russian universities in 2023, we compiled a model of financial risk management in responsible Russian universities, which mathematically described and quantitatively measured the influence of each managerial measure on the financial risks to Russian universities. The model showed that, among the measures of financial risk management used within the existing approach, only the educational activity of universities ensures the reduction in their financial risks, while the consequences of the research, international, and personnel activities of Russian universities for their financial risks are statistically insignificant, contradictory, and even negative. In contrast, support of the SDGs demonstrated a significant and statistically reliable contribution to the reduction in financial risks to responsible Russian universities. Second, the potential of financial risk reduction in responsible Russian universities using alternative approaches to financial risk management was disclosed and compared. The considered alternatives showed that the existing approach can potentially increase the aggregate revenues of responsible universities in Russia by 104.35%, and the newly proposed approach by 106.91%. This is a scientific argument in favor of the new approach to raising the effectiveness of financial risk management in Russian universities. The authors' main conclusion is that the existing approach to financial risk management in Russian universities is based on low-efficiency managerial measures and causes a high risk of burden on universities, which can be reduced by the new approach to financial risk management in responsible universities in Russia through support for the SDGs.
The theoretical significance lies in the specification of the concrete narrow list of the SDGs whose support contributes the most to the reduction in the financial risks of responsible universities in Russia: namely, SDG 4, SDG 8, and SDG 9. The practical significance is that the newly developed approach will allow for the fullest development of the potential for financial risk reduction in responsible universities in Russia in the Decade of Action (2020-2030). The managerial significance is that the proposed authors' recommendations will allow for the improvement of financial risk management in Russian universities through the optimization of SDGs' support.
Figure 1. The sample structure, the number of responsible universities. Source: Compiled by the authors based on THE (2024) materials.
Figure 3. Correlation between the financial risks to Russian universities and the macro-level factors in 2015-2022. Source: Authors.
Figure 5. Forecast of financial risks to responsible universities in Russia in case of maximization of their support for the SDGs. Source: Authors.
Figure 6. Recommendations for the improvement of financial risk management in Russian universities through optimization of support for SDGs. Source: Authors.
Figure 7. Forecast of financial risks to Russian universities at the maximization of the results of implementing the current managerial measures. Source: Authors.
Table 1. Dynamics of the macro-level factors and position of Russian universities in the THE ranking in 2015-2022. Source: Compiled and calculated by the authors based on (KOF 2024; UN 2024b; World Bank 2024a, 2024b).
Table 2. Regression analysis of the dependence of the indicators of financial risks on the alternative measures of financial risk management.
Table 3. Regression analysis of the dependence of the indicators of financial risks on detailed practices of support of SDGs in universities.
Table 4. The influence of the measures of the management of universities on their financial risks in Russia, which is estimated in the literature and specified in this paper.
Table A3. Financial risks to the top 30 most responsible universities in Russia in 2023. Compiled by the authors based on materials of MIREA, MIC (2024).

Table A4. The activity of implementing the SDGs in the top 30 most responsible universities in Russia in 2023, score 1-100. Compiled by the authors based on materials of MIREA, MIC (2024); THE (2024).
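The growth targets quoted in the Figure 6 caption follow directly from the 2023 baseline and 2030 target scores. A short arithmetic check in Python, using only the values quoted above:

# Verify the Figure 6 growth targets from the quoted baseline/target scores.
targets = {
    "SDG 4": (25.36, 43.37),
    "SDG 8": (40.90, 94.36),
    "SDG 9": (29.98, 100.00),
}
for sdg, (score_2023, score_2030) in targets.items():
    growth = (score_2030 - score_2023) / score_2023 * 100
    print(f"{sdg}: {growth:.2f}% growth required (2023 to 2030)")
# Prints approximately 71.02%, 130.71%, and 233.56%; the small deviation
# from the published 70.98% for SDG 4 comes from rounding of the scores.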
2024-06-22T15:10:20.106Z
2024-06-20T00:00:00.000
{ "year": 2024, "sha1": "a041786f80949b280d135970ebd521fb95054918", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9091/12/6/101/pdf?version=1718877033", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "6a922e066269ea3e7345e684987b9b40a5744cbe", "s2fieldsofstudy": [ "Economics", "Education" ], "extfieldsofstudy": [] }
78964521
pes2o/s2orc
v3-fos-license
Prevalence and Pattern of Impacted Teeth in North-East China Aims: To investigate the prevalence and pattern of impacted teeth in a sample from North-East China. Study Design: Descriptive and retrospective study. Methodology: Orthopantomogram radiographs and clinical dental records were used to identify impacted teeth in five thousand seven hundred and eighty-four randomly selected patients. All 5784 patients were examined (3754 males, 2030 females), with an age range of 7-76 years and a mean age of 23±4 years. The minimum age for inclusion was 7 years. The data were entered into a computer and analyzed using the Statistical Package for the Social Sciences (version 20, SPSS Inc., Chicago, USA). Pearson's chi-square test was used to determine differences in the distribution of impacted teeth between genders. The significance level was set at 5%. Results: Of the 5784 patients, 1342 (23.2%) presented at least one impacted tooth; 701 (52.2%) were male and 641 (47.8%) were female. In total, 1485 impacted teeth were identified. The prevalence of impacted teeth was 23.2%; third molars were the most common (11.70%; n=677), followed by canines (5.55%; n=321), incisors (2.92%; n=169) and premolars (2.82%; n=163). Impacted teeth were seen mostly in the 17-26-year age group (43.8%; n=774). No significant relationship between impacted teeth and gender was found (p=0.22). Conclusion: The prevalence of impacted teeth was 23.2% in this study, and patients aged 17 to 26 years were the most affected. A minimum age of 7 years should be an inclusion criterion in order to assess the real prevalence of incisor impaction.

INTRODUCTION

Impacted or unerupted teeth are teeth that have failed to erupt, completely or partially, into the dental arch according to clinical and radiographic evaluation. Any permanent tooth can become impacted. The main causal factors are local, such as supernumerary teeth, dense overlying bone, deciduous tooth retention, arch-length deficiency, odontogenic tumors, and cleft lip and palate. Systemic and genetic disorders such as cleidocranial dysostosis, Down, Gardner, and Gorlin-Sedano syndromes have also been reported [1][2][3]. The prevalence of impacted teeth in different populations and ethnic groups has been the subject of several studies. However, there are discrepancies in the prevalence of tooth impaction across populations and ethnic groups, as well as variation in the prevalence and distribution of impacted teeth across different regions of the jaw itself [4][5][6][7][8][9]. The selected age group, the eruption time of teeth, racial differences, differences in study methodology, and radiographic criteria are some of the factors that affect the reported prevalence. According to the literature, the most commonly impacted teeth are the mandibular third molars, followed by the maxillary third molars, the maxillary canines, the mandibular premolars, and the mandibular incisors [9,10,11,12,13]. Assessing the prevalence of impacted teeth in a population is important for establishing reference data as well as for planning preventive and therapeutic strategies aimed at this population, with a direct influence on patient management and clinical decision-making [1]. There is currently no published study on the prevalence and pattern of impacted teeth in Chinese patients of North-East China.
The purpose of this study was to investigate the prevalence and pattern of occurrence of impacted teeth in a sample from North-East China.

MATERIALS AND METHODS

The orthopantomogram (OPG) radiographs and clinical dental records of 5784 Chinese patients attending the Department of Oral and Maxillofacial Surgery, School of Stomatology, Second Affiliated Dental Hospital of Jiamusi University, between June 2013 and October 2015, were examined for this retrospective study. All OPGs were taken with the Dentsply Gendex Orthoralix 9200 (Dentsply Asia, Milford, US), with a magnification factor of 1.23. A tooth was considered impacted when it was obstructed on its path of eruption by an adjacent tooth, bone, or soft tissue and/or failed to erupt fully into the oral cavity. Based on mean eruption times, teeth were considered impacted when they remained in the jaw for a minimum of 2 years after the corresponding mean age of eruption [14]. The minimum age for inclusion was therefore 7 years, because the generally accepted view is that the first series of permanent teeth have erupted and remained 2 years in the jaw by this age. Patients' clinical dental records and OPG radiographs were examined to detect impacted incisors, canines, premolars, and molars. A group of researchers examined the OPG radiographs together on an X-ray viewer to determine the number and pattern of impacted teeth. If an impacted third molar was identified in a patient, the eruption of his or her remaining third molars was also assessed. The depth and orientation of impacted third molars were assessed without their associated pathologies. The depth of impacted teeth was documented based on Winter's lines classification, while the angulation of impacted teeth was measured using the long axes of the impacted and adjacent teeth, as described by Schersten [15]. Although the orthopantomogram is simple and intuitive, it cannot provide all the information regarding impacted teeth. To ensure diagnostic validity in the present study, radiographic findings were therefore verified against clinical dental records, which were collected on standard forms as part of the routine examination process. The patients in this study were essentially Chinese residents of Jiamusi city and the small towns around it. Patients presenting with any pathological condition, including trauma or fracture of the jaw, or any hereditary disease or syndrome that might affect the normal growth of the dentition, were excluded from the study. The data were entered into a computer and analyzed using the Statistical Package for the Social Sciences (version 20, SPSS Inc., Chicago, USA). Pearson's chi-square test was used to determine differences in the distribution of impacted teeth between genders. The significance level was set at 5%. Ethical approval was not required from the School of Stomatology of Jiamusi University for this retrospective study, because the patients were not exposed to additional radiation and were not subjected to additional treatment; however, consent was obtained to use the patients' medical record data.

RESULTS AND DISCUSSION

In the analyzed data, 1342 patients (23.2%) had impacted teeth; 701 (52.2%) were male and 641 (47.8%) were female (Table 1), with an age range of 7-76 years. No statistically significant difference was found between impacted teeth and gender (P=0.22).
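A minimal sketch in Python of the two calculations reported above: the prevalence figures and a Pearson chi-square test of impaction by gender. The 2x2 table below is reconstructed from the reported totals (701 of 3754 males and 641 of 2030 females with impactions) purely to illustrate the procedure; the authors' exact contingency table is not specified, so the computed p-value will not necessarily match the published P=0.22.

from scipy.stats import chi2_contingency

# Prevalence of impaction, as reported in the Results.
patients_total, patients_impacted = 5784, 1342
print(f"Prevalence: {patients_impacted / patients_total:.1%}")  # ~23.2%

# Illustrative chi-square test of impaction vs. gender.
# Rows: male, female; columns: impacted, not impacted.
table = [
    [701, 3754 - 701],   # males
    [641, 2030 - 641],   # females
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3g}, dof = {dof}")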
As shown in Table 2, the prevalence of impacted teeth was 23.2%; impacted third molars were the most prevalent (11.7%; n=677), followed by impacted canines (5.55%; n=321), impacted incisors (2.92%; n=169), and impacted premolars (2.82%; n=163). Impacted first and second molars were the least prevalent (0.21%; n=12). No impacted deciduous teeth were noted in the present study. Impacted teeth occurred more often in the mandible than in the maxilla (51.9%; n=772 versus 48.0%; n=713), a ratio of 1.08:1, and third molars were the most commonly impacted teeth (52.3%). Canines, incisors, and first and second molars were noted more often in the maxilla than in the mandible (22.2% vs. 1.5% for canines; 10.7% vs. 0.8% for incisors; and 0.6% vs. 0.2% for first and second molars). In contrast, third molars and premolars occurred more often in the mandible than in the maxilla (42.6% vs. 9.7% for third molars, and 6.7% vs. 4.3% for premolars). Impacted teeth occurred mostly in the 17-26-year age group (43.8%; n=774) (Table 3). As shown in Table 4, impaction was diagnosed in all types of permanent teeth. The impacted teeth were routinely classified according to the direction of the crown, and vertical impaction was the most common pattern (34.34%), followed by mesioangular impaction (29.36%) and horizontal impaction (24.44%). Distoangular, bucco-lingual, and inverted impactions occurred less often, at 6.46%, 2.82%, and 2.55%, respectively.

Discussion

Unerupted teeth are those that have failed to erupt, completely or partially, into the dental arch. The main causes are local factors, although systemic and genetic disorders have also been reported [1][2][3]. The prevalence of impacted teeth varies between populations and ethnic groups, as does the distribution of impacted teeth across different regions of the jaw [4][5][6][7][8][9]. According to some studies, the most commonly impacted teeth are the mandibular third molars, followed by the maxillary third molars, the maxillary canines, the mandibular premolars, and the mandibular incisors [9,10,11,12,13]. Our prevalence of impacted teeth (Table 2) is higher than some and lower than other figures reported in the literature [4][5][6][7][8][9], but it is close to that reported by FSC Zhu et al. in Hong Kong [7]. This can be explained by the fact that, in both studies (Hong Kong and North-East China), clinical dental data were collected only from a dental teaching hospital. However, the inclusion criterion for minimum age differed, and some previous studies investigated specific age groups and specific types of impacted teeth only, which may explain the remaining differences [8,9,10,11]. More than 27% of patients in the current study were aged between 7 and 18 years (Table 3). This may reflect an increased prevalence of impacted incisors, as it has been reported that impacted maxillary incisors and canines occur most often in patients aged 8 to 12 years and 10 to 14 years, respectively [8]. The pattern of impaction (Table 2), with mandibular third molars being the most common, followed by upper canines, upper incisors, and mandibular premolars, was unlike the patterns reported by some authors [4][5][6][7], but similar to the findings of Hou et al. [8].
Also, our findings that impacted third molars accounted for 52.3% of all impacted teeth, and impacted mandibular third molars for 42.6%, were not close to those of other authors [7,12,13]; the difference may be due to the methodology used and the selection of age groups. The canine has a complicated eruption pattern and is one of the last teeth to erupt into the dental arch; because of these conditions, it may fail to erupt naturally [16]. The maxillary canine was the second most commonly impacted tooth (Table 2) in the present research, which is close to another study [17]. The results of other studies indicate that the incidence of canine impaction may be higher in some populations and lower in others: canines were the most commonly impacted teeth, at 8.8%, in the Greek population studied by Fardi et al. [4], compared with 3.6% in the Saudi population of Zahrani [18], 3.58% in the study of Aydin et al. [19], and 0.92% in the study of Dachi et al. [20]. The differing results may be attributed to racial differences, selection of age groups, eruption times of teeth, and differences in study methodology. Maxillary canine impaction is believed to occur 10-20 times more frequently than mandibular canine impaction, and only a limited number of studies have reported its frequency of occurrence [21]. In the study of Shah et al. [22], 8 unerupted mandibular canines were found in 7886 individuals, while Grover et al. [23] found 11 impacted mandibular canines in 5000 individuals, an incidence of 0.10%. These results are quite similar to our findings (Table 2). Some studies have reported that impaction of the maxillary central incisor is almost as prevalent as canine impaction, although its etiology is different [24]. Chaushu et al. [25] provided evidence of a significant environmental influence of an impacted maxillary incisor in delaying and altering the eruption path of the ipsilateral maxillary canine. The present study found impacted maxillary incisors in one hundred sixty of the Chinese patients; however, the prevalence of maxillary incisor impaction (10.7%) was much lower than that of canine impaction (22.2%). Our results are similar to those found by Hou et al. and by FSC Zhu et al. [7,8]. In order of prevalence, the most commonly impacted teeth in the present study were third molars, followed by canines, incisors, and premolars, which is also similar to the study of Hou et al. [8] but differs from the study of FSC Zhu et al. [4][5][6][7][8][9], who reported premolars in third place, followed by incisors. From these results, we infer that such studies included only patients above 17 years old and did not include patients aged 6 or 7 years; patients older than 17 years may already have consulted a dental practitioner for orthodontic treatment or extraction before the study, so the prevalence of impacted incisors found in them will not correspond to reality. To capture the true prevalence of impacted incisors, a study must include patients from a minimum age of 7 years. Very few studies have examined impacted premolars. Previous studies have concluded that premolar impaction is rare [14,26], with a prevalence ranging from 2.1 to 2.7%, similar to our present research.
Mandibular premolars were more frequently impacted than maxillary premolars, analogous to Hou et al. [8]. Impacted mandibular and maxillary first and second molars are relatively rare, with few cases reported and a prevalence ranging from 0.03% to 0.3%, with a slight predilection for males [27,28]. This observation matches our study and also the study of Matheus Bandeca et al. [5]. It is difficult to estimate the three-dimensional direction of impacted teeth from X-ray films; therefore, we performed only a two-dimensional investigation of impaction orientation using the OPG radiographs in our present research. In this research, vertical impaction was the most common, occurring in nearly one-third of all impacted teeth, followed by mesioangular and then horizontal impaction, among which impacted mandibular third molars were the most usual (Table 4). This result differs from the study of Hou et al. [8], in which the most common angulations of impaction in the maxilla were the vertical and mesioangular orientations. This dissimilarity could be explained by the fact that all types of impacted teeth were included in our study, unlike the study of Hou et al. [8], in which third molar impactions were excluded. About 27.84% of impacted maxillary central incisors showed inverted impaction, consistent with both our study and the study of Hou et al. [8].

CONCLUSION

The prevalence of impacted teeth was 23.2%. A minimum age of 7 years must be used as an inclusion criterion in order to assess the real prevalence of incisor impaction. Impacted teeth were encountered mostly in Chinese patients aged between 17 and 26 years. All dentists and oral and maxillofacial surgeons should know this prevalence and pattern in order to perform thorough evaluations and interceptive treatments, and to provide valid support in planning suitable treatment.
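The angulation measurement described in the Materials and Methods, the angle between the long axes of the impacted tooth and the adjacent tooth on the two-dimensional OPG, reduces to a vector-angle computation. A hedged sketch in Python with hypothetical landmark coordinates; the classification cutoffs (vertical, mesioangular, etc.) used by the authors are not stated, so none are assumed here.

import math

def long_axis_angle(apex, crown_tip, ref_apex, ref_crown_tip):
    # Angle (degrees) between the long axes of two teeth, each axis
    # defined by two landmark points (apex and crown tip) on the OPG.
    v1 = (crown_tip[0] - apex[0], crown_tip[1] - apex[1])
    v2 = (ref_crown_tip[0] - ref_apex[0], ref_crown_tip[1] - ref_apex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Hypothetical landmark coordinates (pixels) for an impacted third molar
# and the adjacent second molar; real values come from the radiograph.
angle = long_axis_angle((120, 300), (150, 240), (200, 310), (205, 235))
print(f"Angulation: {angle:.1f} degrees")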
2019-03-16T13:02:51.547Z
2016-01-10T00:00:00.000
{ "year": 2016, "sha1": "362751063025d824d668703ebfa71e3e47183e09", "oa_license": "CCBY", "oa_url": "https://doi.org/10.9734/bjmmr/2016/27134", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "4edb5235cdf5d040e12a308f7b9648f070936357", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Geography" ] }
1899454
pes2o/s2orc
v3-fos-license
Sequence Divergent RXLR Effectors Share a Structural Fold Conserved across Plant Pathogenic Oomycete Species

The availability of genome sequences for some of the most devastating eukaryotic plant pathogens has led to a revolution in our understanding of how these parasites cause disease, and how their hosts respond to invasion [1]–[7]. One of the most significant discoveries from the genome sequences of plant pathogenic oomycetes is the plethora of putative translocated effector proteins these organisms encode. Many effector genes display signatures of rapid evolution and tend to reside in dynamic regions of the pathogen genomes. Once inside the host, effector proteins modulate cellular processes, mainly suppressing plant immunity [8]–[12]. Effectors can also be recognized directly or indirectly by the plant immune system through the action of disease resistance (R) proteins [13], [14].

Plant Pathogenic Oomycetes Express RXLR Effector Proteins

One expanded family of effector proteins is defined by the sequence RXLR (Arg-X-Leu-Arg, where X is any amino acid), which in some cases is followed by an acidic-rich dEER motif (Asp-Glu-Glu-Arg) (Figure 1). The RXLR motif was originally identified by comparing sequences of effectors from Hyaloperonospora arabidopsidis, Phytophthora infestans, and Phytophthora sojae [15]. It has since been shown that the RXLR motif is important for translocation of oomycete effectors into plant cells [16,17]. It is widely accepted that RXLR effectors are modular proteins comprising an N-terminal secretion signal, followed by the RXLR region, and a C-terminal "effector" domain that encodes the biochemical activity of the protein when expressed directly in plant cells [18,19]. A large family of Phytophthora RXLR effectors contain conserved sequence motifs (W, Y, and L) in their C-terminal domains that often form tandem repeats [2,20].

Structural Biology Uncovers an Effector Fold Conserved across Oomycete Species

Our laboratories have employed structural biology to investigate the molecular basis of RXLR effector function. A total of four structures have recently been published: those of AVR3a4 and AVR3a11 (paralogues from Phytophthora capsici), PexRD2 (from P. infestans), and ATR1 (from H. arabidopsidis) [21][22][23]. Each publication focused on a different aspect of structure/function analysis, including phospholipid binding, protein folding, and effector recognition by the host. The studies of Boutemy et al. and Chou et al. independently described the structural homology of AVR3a11 and a domain of ATR1, respectively, to the cyanobacterial four-helix bundle protein KaiA [24]. This strongly implied they would also be structurally related to each other.
This is unexpected, as these Phytophthora and H. arabidopsidis effectors do not share any significant sequence similarity: the conservation was only apparent after the structures were determined and compared. Further, the structural conservation across different oomycete species was particularly intriguing, as studies with the Phytophthora proteins AVR3a11 and PexRD2 [21] had suggested a three-helix bundle fold could be the basic structural unit adopted by the repeating W-Y motifs found in >520 Phytophthora RXLR effectors (44% of annotated RXLR effectors in P. infestans, Phytophthora ramorum, and P. sojae). Using Hidden Markov Model (HMM)-based sequence searches, these motifs had also been detected in H. arabidopsidis RXLR effector proteins, with 35 out of 134 (26%) containing this fold (HMM score > 0), including ATR1 with a low confidence score [21]. Boutemy et al. named this conserved structural unit the "WY-domain"; it comprises three α-helices connected by variable loop regions. The minimal three-helix WY-domain is found in PexRD2, but Avr3a4, Avr3a11, and ATR1 all have an N-terminal helix as an extension to this unit that forms a four-helix bundle. Further analysis of the ATR1 structure revealed that not only residues 139-210 (the domain originally identified as having a KaiA-like fold) but also residues 226-308 comprise a WY-domain four-helix bundle (ATR1 also has a fifth helix that creates a five-helix repeat) [22]. This tandem repeat could not be detected from amino acid sequence comparison and was only discovered after the ATR1 structure was determined [22]. The structure of ATR1 shows how tandem WY-domains encoded by very divergent amino acid sequences are linked in three-dimensional space. This is significant, as it provides insight into how WY-domains may be arranged in other WY-motif repeat RXLR effectors.

The Conserved Fold Is Based on a "Flexible" Hydrophobic Core

The availability of these four oomycete RXLR effector domain structures, from three different pathogens, allows us to present a detailed analysis of the WY-domain fold. Structural overlays of the conserved WY-domains from each of the effectors are shown in Figure 1, and the root mean square deviations derived from the overlays are given in Table 1 (obtained using Secondary Structure Matching (SSM) algorithms [25]). What are the features of this fold that allow structural conservation with little, if any, identifiable pair-wise sequence identity? In the HMM models, the conserved motif is largely defined by residues such as the W and Y (for Trp and Tyr) that, in each structure, are buried in the hydrophobic core of the helical bundle (other hydrophobic residues that contribute to the core are also prevalent in the HMM). Critically, the identity of these residues can change without affecting the fold, as long as their hydrophobic potential is maintained. For example, in the WY-domain structures available, the W and Y positions are Trp-Tyr (AVR3a4 and AVR3a11), Met-Tyr (PexRD2), and Trp-Cys/Tyr-Tyr (for the two WY-domains of ATR1). Further, there is evidence from the existing structures that solvent remains excluded from the hydrophobic core when mutations from bulkier to smaller side chains occur.

The WY-Domain Fold Is Restricted to the Peronosporales

We believe that the conservation of this "flexible" hydrophobic core fold indicates that a large family of plant pathogenic oomycete effectors may have been derived from a common ancestor.
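The pairwise overlays in Table 1 were computed with SSM; as a rough stand-in, a least-squares superposition of C-alpha atoms gives a comparable RMSD figure. A minimal sketch using Biopython, assuming two hypothetical PDB files for the domains being compared; unlike SSM, this naive version simply pairs residues in order rather than matching secondary-structure elements.

from Bio.PDB import PDBParser, Superimposer

parser = PDBParser(QUIET=True)
# Hypothetical input files containing the two WY-domains to compare.
fixed = parser.get_structure("avr3a11", "avr3a11_wy.pdb")
moving = parser.get_structure("pexrd2", "pexrd2_wy.pdb")

# Collect C-alpha atoms; a real comparison would first align structurally
# equivalent residues (as SSM does) instead of pairing by sequence order.
ca_fixed = [a for a in fixed.get_atoms() if a.get_name() == "CA"]
ca_moving = [a for a in moving.get_atoms() if a.get_name() == "CA"]
n = min(len(ca_fixed), len(ca_moving))

sup = Superimposer()
sup.set_atoms(ca_fixed[:n], ca_moving[:n])  # least-squares fit
print(f"RMSD over {n} C-alpha pairs: {sup.rms:.2f} A")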
Intriguingly, this effector family has rapidly diverged to gain new and/or adapt existing virulence functions and/or evade detection by plant immune systems. New or modified effector functions may be derived from surface point mutations or indels in the connecting regions between helices (including domain duplication). To test the argument for a common ancestor, we extended previous analyses and searched the proteomes of various organisms using the HMM for the WY-domain, as described in [21]. Firstly, we searched the proteomes of 47 different eukaryotes [26]. These searches included diverse species, from fungi through plants and animals. We found no evidence for the presence of the WY-domain signature beyond the level of our previously described false-positive rate [21]. We then narrowed our search and screened the available proteomes of phylogenetically diverse oomycetes: Saprolegnia parasitica [27], Albugo laibachii [28], and Pythium ultimum [29], adding to the Phytophthora and Hyaloperonospora proteomes searched previously [21] (Figure 2). These searches show that, with the data available, the WY-domain is limited to a single clade within the oomycetes, the Peronosporales, which are exclusively plant pathogens [30]. This suggests that the WY-domain may be an innovation within the Peronosporales. The WY-domain is also correlated with the emergence of the RXLR motif in the Peronosporales, and is linked with the evolution of haustoria as a possible interface for effector delivery in this lineage [19,29]. Of the seven oomycetes whose genome sequences are available, all of the Peronosporales (four species) have RXLRs and WY-domains (Figure 2); the non-Peronosporales (three species) encode essentially no RXLR or WY-domain proteins, and those few identified may be false positives, given that they are not enriched in the secretome (as they are in the Peronosporales). It is also notable that oomycetes with expansions of their RXLR effector repertoire (P. infestans > P. sojae > P. ramorum > H. arabidopsidis) also encode a significantly higher percentage of WY-domains in their secretomes (P. infestans > P. ramorum > P. sojae > H. arabidopsidis, Figure 2). Further, as WY-domains are found in both Phytophthora (hemibiotrophs) and Hyaloperonospora (obligate biotrophs), but not P. ultimum, it appears that the WY-domain emerged with biotrophy in this lineage, along with the evolution of haustoria and RXLR effectors [19,31]. Whilst arguing in favour of a common ancestor of the WY-domain within the Peronosporales, we acknowledge that alternative interpretations (including convergent evolution to a fold adapted for stability in the plant cell and/or well-suited to a particular function, such as secretion and/or translocation) remain possible. Some of the most notorious and agriculturally important pathogenic oomycetes contain RXLR:WY-domain effectors, suggesting that this structure has been critical for the success of these pathogens. This raises questions such as: why has this fold been preserved, and what can it tell us about the function of these proteins? How can we use this knowledge to design novel disease management strategies? Future studies will help define the roles of the WY-domain fold in the virulence mechanisms of these pathogens, in particular how it engages with plant cell targets, and will help to unravel the extent of structural diversity in RXLR effectors. Our initial studies have laid the foundation for new, exciting discoveries addressing the function of oomycete effectors.
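The proteome screens described above rest on HMM searches of the kind implemented in the HMMER suite. A hedged sketch of such a workflow from Python, assuming HMMER's hmmsearch is installed and that a WY-domain profile (wy_domain.hmm) and a proteome FASTA file are available; both file names are placeholders, and the cutoff mirrors the HMM score > 0 criterion used in [21].

import subprocess

# Run hmmsearch (HMMER suite) with tabular per-target output.
# "wy_domain.hmm" and "proteome.fasta" are hypothetical inputs.
subprocess.run(
    ["hmmsearch", "--tblout", "hits.tbl", "wy_domain.hmm", "proteome.fasta"],
    check=True,
)

# Parse the tabular output: comment lines start with '#'; the
# full-sequence bit score is the 6th whitespace-delimited field.
hits = []
with open("hits.tbl") as fh:
    for line in fh:
        if line.startswith("#"):
            continue
        fields = line.split()
        target, score = fields[0], float(fields[5])
        if score > 0:  # HMM score > 0, as in the text
            hits.append(target)

print(f"{len(hits)} candidate WY-domain proteins")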
2014-10-01T00:00:00.000Z
2012-01-01T00:00:00.000
{ "year": 2012, "sha1": "0f7b07aac81fae4424c665833218b99eda2c356a", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1002400&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0f7b07aac81fae4424c665833218b99eda2c356a", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
4909878
pes2o/s2orc
v3-fos-license
Hypothalamic growth hormone receptor (GHR) controls hepatic glucose production in nutrient-sensing leptin receptor (LepRb) expressing neurons

Objective: The GH/IGF-1 axis has important roles in growth and metabolism. GH and the GH receptor (GHR) are active in the central nervous system (CNS) and are crucial in regulating several aspects of metabolism. In the hypothalamus, there is a high abundance of GH-responsive cells, but the role of GH signaling in hypothalamic neurons is unknown. Previous work has demonstrated that the Ghr gene is highly expressed in LepRb neurons. Given that leptin is a key regulator of energy balance acting on leptin receptor (LepRb)-expressing neurons, we tested the hypothesis that LepRb neurons represent an important site for GHR signaling to control body homeostasis. Methods: To determine the importance of GHR signaling in LepRb neurons, we utilized Cre/loxP technology to ablate GHR expression in LepRb neurons (LeprEYFPΔGHR). The mice were generated by crossing Leprcre mice on the cre-inducible ROSA26-EYFP background to GHRL/L mice. Parameters of body composition and glucose homeostasis were evaluated. Results: Our results demonstrate that the sites with GHR and LepRb co-expression include ARH, DMH, and LHA neurons. Leptin action was not altered in LeprEYFPΔGHR mice; however, GH-induced pStat5-IR in LepRb neurons was significantly reduced in these mice. Serum IGF-1 and GH levels were unaltered, and we found no evidence that GHR signaling in LepRb neurons regulates food intake or body weight. In contrast, diminished GHR signaling in LepRb neurons impaired hepatic insulin sensitivity and peripheral lipid metabolism. This was paralleled by a failure to suppress expression of the gluconeogenic genes and by impaired hepatic insulin signaling in LeprEYFPΔGHR mice. Conclusion: These findings suggest the existence of a GHR-leptin neurocircuitry that plays an important role in the GHR-mediated regulation of glucose metabolism irrespective of feeding.

INTRODUCTION

Growth hormone (GH) signaling plays a major role in regulating body composition and glucose metabolism [1]. Increased protein accretion in muscle and lipolysis in adipose tissue are biological consequences of GH action and together promote a lean phenotype [2]. Despite its ability to improve body composition, GH has also been described as a diabetogenic agent with the ability to increase hepatic glucose production [3]. Nevertheless, clinical trials using GH therapy to treat patients with obesity and type 2 diabetes demonstrated improvements in body composition, glucose tolerance, and insulin sensitivity [4,5]. In both the human and mouse brain, GH and the GH receptor (GHR) are present in regions known to participate in the regulation of feeding behavior, energy balance, and glucose metabolism, such as the hypothalamus, hippocampus, and amygdala [6-10]. Upon GHR activation, the signal transducer and activator of transcription 5 (Stat5) is recruited and regulates the transcription of genes directly controlled by GH [11,12]. In the arcuate nucleus of the hypothalamus (ARH), GHR is involved in the negative feedback loops that regulate GH production and secretion from somatotrophs of the pituitary [7]. Systemic administration of GH induces expression of the c-fos gene, a marker of neuronal activity, in hypothalamic neuropeptide Y (NPY) and somatostatin neurons [13]. The majority of NPY mRNA-containing cells in the ARH express the GHR gene, suggesting that NPY neurons in the ARH mediate the feedback effect of GH on the hypothalamus.
Feedback inhibition of GH production is predicated upon proper function of the GHR signaling cascade [14]. Long-lived GH receptor knockout mice (GHR−/−) are obese, with elevated leptin levels and increased insulin sensitivity [15]. Deletion of hypothalamic GH-releasing hormone (GHRH) leads to isolated GH deficiency and increases lifespan, while overexpression of GH in the CNS results in hyperphagia-induced obesity, highlighting the importance of GH signals in hypothalamic neurons [16,17]. In contrast, mice with modest increases in GH levels show improvements in glucose homeostasis with minimal effects on adiposity [18]. We recently reported that GHR−/− mice have reduced formation of both orexigenic and anorexigenic hypothalamic projections, while disruption of GHR specifically in liver, a mutation that reduces circulating IGF1, had no effect on hypothalamic development [19]. While some suggest that CNS effects of GH signaling are indirect, via increases in circulating IGF1 [20], these data show that GHR has a direct effect on the CNS; however, the precise role of GHR in hypothalamic neurons remains largely unknown. Nutrient-sensing, leptin receptor expressing (LepRb) neurons sense and integrate signals relevant to nutrient homeostasis to control energy balance and metabolism [21]. Previous studies also indicate that leptin can modulate GH secretion and the GH response to GHRH [22]. LepRb neurons are widely distributed within the hypothalamic ARH, VMH, DMH, LHA, and some additional sites that also express GHR [10,23]. Recent transcriptome analysis of LepRb expressing neurons revealed that the Ghr gene is strongly enriched in LepRb neurons [24]. Given the crucial role of LepRb neurons in the regulation of energy and glucose metabolism, along with the potential overlap between LepRb and GH-responsive cells, we hypothesized that LepRb neurons represent an important site for GH signaling to control metabolism. We therefore deleted GHR specifically in LepRb neurons and examined parameters of energy homeostasis and glucose metabolism in these mice.

Animals

GHR L/L and Leprcre mice on the ROSA26 background were described previously [21,25]. Mice were bred in our colony at the University of Michigan. Procedures involved in this study were approved by the University of Michigan Committee on the Use and Care of Animals (IACUC). Animals were fed a breeder chow diet containing 5 kcal% fat or a high-fat diet containing 45 kcal% fat (Research Diets, Inc.). Most of the presented data relate to male mice, unless otherwise stated.

Metabolic analysis

Lean and fat body mass were assessed by a Bruker Minispec LF 90II NMR-based device. Blood glucose levels were measured in mouse-tail blood of random-fed or overnight-fasted animals using a Glucometer Elite (Bayer). Intraperitoneal glucose tolerance tests were performed on mice fasted for 16 h overnight. Animals were then injected intraperitoneally with D-glucose (2 g/kg), and blood glucose levels were measured as before [26]. For an insulin tolerance test, animals fasted for 5 h received an intraperitoneal injection of human insulin (0.5 units/kg; Novo Nordisk). Blood insulin and leptin levels were determined on serum from tail vein bleeds using a Rat Insulin ELISA kit and a Mouse Leptin ELISA kit (Crystal Chem Inc.). For food intake measurements, mice were singly housed and food intake was measured for 6 consecutive days.
Plasma triglycerides, low-density lipoprotein (LDL), and free fatty acids were assessed using spectrophotometric assay kits purchased from Wako Diagnostics (Richmond, VA). For peripheral leptin stimulation, mice were treated with either 5 mg/kg recombinant mouse leptin (provided by Dr. A. Parlow, National Hormone and Pituitary Program, Torrance, CA) or vehicle, injected i.p., and perfused 2 h later. For peripheral GH stimulation (12.5 mg/100 g BW, GroPep Bioreagents Pty Ltd, Australia), mice were injected i.p. and perfused 1.5 h later.

Western blotting

Male mice were fasted overnight (18 h). For insulin stimulation, human insulin (5 U) was injected through the inferior vena cava. Liver was dissected and immediately frozen in liquid nitrogen after 5 min of insulin stimulation. For immunoprecipitation, liver extracts were incubated with rabbit polyclonal antibodies against Irs1 for 3 h at 4 °C on a rocker. Protein A-Sepharose was then added and incubated for 1 h at 4 °C. Phosphorylated or total protein was analyzed by immunoblotting with specific antibodies against Irs1 and phosphotyrosine (Millipore). Phosphorylated or total Akt were analyzed by immunoblotting with specific antibodies (Cell Signaling Technology).

2.4. Hyperinsulinemic-euglycemic clamp

At 14 weeks of age, the right jugular vein and carotid artery were surgically catheterized, and male mice were given 5 days to recover from the surgery. After a 5-6 h fast, hyperinsulinemic-euglycemic clamp studies were performed on unrestrained, conscious mice by the University of Michigan Animal Phenotyping Core using the protocol adopted from the Vanderbilt Mouse Metabolic Phenotyping Center [27], consisting of a 90-min equilibration period followed by a 120-min experimental period (t = 0-120 min). Insulin was infused at 2.5 mU/kg/min. To estimate insulin-stimulated glucose uptake in individual tissues, a bolus injection of 2-[1-14C]deoxyglucose (PerkinElmer Life Sciences) (10 μCi) was given at t = 78 min while continuously maintaining the hyperinsulinemic-euglycemic steady state. At the end of the experiment, animals were anesthetized with an intravenous infusion of sodium pentobarbital, and tissues were collected and immediately frozen in liquid nitrogen for later analysis of tissue 14C radioactivity. Plasma insulin was measured using Millipore rat/mouse insulin ELISA kits. For determination of plasma radioactivity of [3-3H]glucose and 2-[1-14C]deoxyglucose, plasma samples were deproteinized and counted using a liquid scintillation counter. For analysis of tissue 2-[1-14C]deoxyglucose 6-phosphate, tissues were homogenized in 0.5% perchloric acid, and the supernatants were neutralized with KOH. Aliquots of the neutralized supernatant, with and without deproteinization, were counted for determination of the content of 2-[1-14C]deoxyglucose phosphate.

Perfusion and histology

Male mice were anesthetized (i.p.) with avertin and transcardially perfused with phosphate-buffered saline (PBS) (pH 7.5) followed by 4% paraformaldehyde (PFA). Brains were post-fixed, sunk in 30% sucrose, frozen in OCT medium, and then sectioned coronally (30 μm) using a Leica 3050S cryostat. Six series were collected and stored at −20 °C in cryoprotectant until processed for immunohistochemistry as previously described [19,29]. For immunohistochemistry, free-floating brain sections were washed in PBS, blocked using 3% normal donkey serum (NDS) and 0.3% Triton X-100 in PBS, and then stained with primary antibody overnight in blocking buffer.
For pStat3 and pStat5 immunostaining, sections were pretreated for 20 min in 0.5% NaOH and 0.5% H2O2 in potassium PBS, followed by immersion in 0.3% glycine for 10 min. Sections were then placed in 0.03% SDS for 10 min and in 4% normal serum plus 0.4% Triton X-100 plus 1% BSA for 20 min before incubation for 24 h with a rabbit anti-pStat3 antibody (1:1000; Cell Signaling Technology, Danvers, MA) or a rabbit anti-pStat5 antibody (1:1000; Cell Signaling Technology, Danvers, MA). Sections were mounted onto Superfrost Plus slides (Fisher Scientific, Hudson, NH) and coverslipped with ProLong Antifade mounting medium (Invitrogen, Carlsbad, CA). Microscopic images were obtained using an Olympus Fluoview 500 laser scanning confocal microscope (Olympus, Center Valley, PA) equipped with a 20× objective.

Double-label in situ hybridization/immunohistochemistry

Double-label ISH and immunohistochemistry (IHC) were performed as previously described [30,31]. Briefly, free-floating sections from control and deleted mice (n = 3/group) were rinsed in DEPC-treated PBS and treated with 0.1% sodium borohydride for 15 min. Sections were treated with 0.25% acetic anhydride in 0.1 M triethanolamine (TEA, pH 8.0) for 10 min and then washed in 2× SSC. The GHR riboprobe (847 bp in size) was generated from ARH cDNA using T3 (CAGAGATGCAATTAACCCTCACTAAAGGGAGACCAAGTGTCGTTCCCCTGAA) and T7 (CCAAGCCTTCTAATACGACTCACTATAGGGAGACTTTGGAACTGGGACTGGGG) primers. The 35S-labeled GHR riboprobe was diluted to 10^6 cpm/mL in a hybridization solution containing 50% formamide, 10 mM Tris-HCl (pH 8.0), 5 mg tRNA (Invitrogen), 10 mM dithiothreitol (DTT), 10% dextran sulfate, 0.3 M NaCl, 1 mM EDTA, and 1× Denhardt's solution. The sections were incubated overnight at 50 °C in the hybridization solution containing the riboprobe. Subsequently, sections were treated with RNase A, submitted to stringency washes in SSC, and incubated in anti-GFP (1:5,000; Aves Labs) overnight at room temperature. The next day, sections were incubated for 1.5 h in secondary antibody (biotin-conjugated donkey anti-chicken, 1:1,000, Jackson Labs) and for 1 h in biotin-avidin complex (1:500, Vector Labs). The peroxidase reaction was performed using 3,3′-diaminobenzidine tetrahydrochloride (DAB; Sigma) as chromogen, and sections were mounted onto SuperFrost Plus slides and dried overnight at room temperature. Tissue was dehydrated in increasing concentrations of ethanol, and slides were placed in X-ray film cassettes with BMR-2 film (Kodak) for 2 days and then dipped in NTB-2 autoradiographic emulsion (Kodak), dried, and stored in light-protected boxes at 4 °C for 2 weeks. Finally, slides were developed with D-19 developer (Kodak), dehydrated in graded ethanol, cleared in xylenes, and coverslipped with Permaslip.

2.8. Images and data analysis

All sections used for ISH/IHC were visualized with a Zeiss M2 microscope. Photomicrographs were produced by capturing images with a digital camera (Axiocam, Zeiss) mounted directly on the microscope using the Zen software. Cells were considered dual labeled if the density of silver grains overlying the cytoplasm (GFP-IR) of the cell was at least 3× greater than the background level. Only one representative section and one side of the brain were counted per mouse per group; therefore, no correction for double counting was used. Adobe Photoshop CS6 image-editing software was used to integrate photomicrographs into plates; only sharpness, contrast, and brightness were adjusted.
2.9. Statistical analysis

Unless otherwise stated, mean values ± SEM are presented in graphics, and significance was determined by Student's t-test. A p-value less than 0.05 was considered statistically significant.

Generation of LeprEYFPΔGHR mice

To inactivate GHR specifically in leptin receptor expressing (LepRb) neurons, we crossed Leprcre mice on the cre-inducible ROSA26-EYFP background with GHRL/L mice, in which the GHR coding sequence is flanked by loxP sites [21]. LeprEYFPΔGHR mice are born at the expected Mendelian ratio and are of normal size and appearance. To determine the potential neuronal groups where GHR and LepRb converge, we performed dual in situ hybridization (ISHH) for GHR and immunohistochemistry (IHC) for GFP on the same coronal brain sections of LeprEYFP and LeprEYFPΔGHR mice. LepRb neurons were revealed by the expression of GFP. The cells dual-labeled for both GFP-IR and GHR mRNA expression in LeprEYFP and LeprEYFPΔGHR mice include ARH, DMH, and LHA neurons (Figure 1A, B and Suppl. Figure 1A). Consistent with restricted inactivation of the GHR in defined subpopulations of LepRb hypothalamic neurons, qPCR analysis revealed no alterations in overall hypothalamic GHR gene expression (Figure 1C). Similarly, GHR expression in peripheral tissues, including liver, muscle, pancreas, and pituitary, remained unchanged in LeprEYFPΔGHR mice (Figure 1C). Furthermore, serum IGF-1 and GH levels were not significantly different between LeprEYFPΔGHR mice and controls (Suppl. Figure 1B, C), and IGF1 mRNA levels in liver were similar in LeprEYFPΔGHR and control mice (Figure 1D). GH has been shown to activate several intracellular signaling pathways, including the JAK/Stat5 pathway [32,33]. Cells that exhibit pStat5 immunoreactivity after an acute GH stimulus are considered to be GH responsive [11]. We next determined the functional effects of LepRb neuron-restricted GHR deficiency on the ability of GH to activate the Stat5 pathway in control LeprEYFP and LeprEYFPΔGHR mice. In the basal state, about 10% of ARH LepRb neurons contained immunoreactive pStat5 in fasted control and LeprEYFPΔGHR mice (Figure 2). Acute intraperitoneal GH treatment significantly induced pStat5 in control mice. By contrast, GH-treated LeprEYFPΔGHR mice showed a significantly lower percentage of ARH LepRb neurons containing pStat5-IR (Figure 2). As a measure of LepRb signaling [21], we measured leptin-stimulated accumulation of pStat3 in the hypothalamus of LeprEYFPΔGHR mice. We found that acute leptin treatment promotes similar levels of hypothalamic pStat3 in control and LeprEYFPΔGHR mice, demonstrating that LepRb-Stat3 signaling was not impaired in LeprEYFPΔGHR mice (Suppl. Figure 2).

Normal body weight in LeprEYFPΔGHR mice

To assess the impact of GHR deletion in LepRb neurons on energy balance, we compared the body weight and body composition of control and LeprEYFPΔGHR male mice. LeprEYFPΔGHR mice displayed no alterations in body weight relative to controls between 4 and 24 weeks of age (Figure 3A), and both genotypes responded to high-fat diet (HFD) with similar increases in weight gain (Suppl. Figure 5A). Consistent with these findings, circulating serum leptin and adiponectin concentrations were indistinguishable between control and LeprEYFPΔGHR male mice on chow diet and increased to the same extent on HFD (Figure 3D and Suppl. Figures 3 and 5D). Similarly, fat and lean body mass on both chow and HFD were comparable between groups (Suppl. Figure 5B, C).
Furthermore, LeprEYFPΔGHR mice showed food intake similar to control male mice (Figure 3E). Body length was indistinguishable between control and LeprEYFPΔGHR mice, a further indication of intact growth pathway function despite the absence of GHR signaling in LepRb neurons (data not shown). Body weight, serum leptin levels, and GH and IGF1 concentrations were also unaltered in female mice (Suppl. Figure 4 and data not shown). Expression of anorexigenic neuropeptides (e.g., POMC) and orexigenic neuropeptides (such as NPY and AgRP) did not differ between LeprEYFPΔGHR and control male mice (Figure 3F). Taken together, these data indicate that energy homeostasis in mice is unaffected by selective, targeted disruption of the GHR gene in LepRb neurons.

3.3. LeprEYFPΔGHR mice exhibit impaired glucose homeostasis and fail to suppress hepatic glucose production

Next, we investigated glucose homeostasis in LeprEYFPΔGHR male mice. Basal blood glucose and serum insulin concentrations were indistinguishable between LeprEYFPΔGHR mice and controls (Figure 4A-C). Despite normal fasting glucose levels, LeprEYFPΔGHR mice displayed significant glucose intolerance in response to an intraperitoneal glucose load on both chow and HFD (Figure 4A, D). Insulin tolerance was subsequently tested to determine whether this glucose intolerance was associated with systemic insulin resistance; however, the glucose-lowering effect of insulin and the rate of glucose disappearance (calculated as the slope from time 0 to 30) during the insulin tolerance test (ITT) were similar in both groups (Figure 4E, F). Insulin tolerance also did not differ between control and LeprEYFPΔGHR mice after 10 weeks on the HFD (data not shown). To assess the contribution of individual tissues to glucose metabolism, we performed a hyperinsulinemic-euglycemic clamp on LeprEYFPΔGHR and control mice at 14-16 weeks of age. Clamp studies allow for an accurate determination of insulin-dependent peripheral glucose uptake and liver glucose output in vivo [27]. During the clamp, the glucose infusion rate required to maintain euglycemia was significantly reduced in LeprEYFPΔGHR compared to control mice (p < 0.002) (Figure 5A, B). Under basal conditions, whole-body glucose utilization, equivalent to endogenous hepatic glucose production (HGP), did not differ between control and LeprEYFPΔGHR mice (Figure 5C and Suppl. Figure 6). The major difference in glucose turnover rate between the groups emerged under clamp conditions, when HGP was suppressed to a greater extent in control (64%) than in LeprEYFPΔGHR (25%) mice (p < 0.005) (Figure 5D). Steady-state serum insulin levels, whole-body glucose clearance, and glycolysis were indistinguishable between control and LeprEYFPΔGHR mice (Figure 5E and Suppl. Figure 6). Determination of tissue-specific glucose uptake rates showed similar rates of insulin-stimulated glucose uptake in skeletal muscle and adipose tissue in both groups of mice (Figure 5F). These data demonstrate that the observed whole-body glucose intolerance in LeprEYFPΔGHR mice is driven mainly by changes in hepatic gluconeogenesis rather than by changes in glucose uptake by peripheral tissue. The levels of basal free fatty acids (FFAs) were not significantly different between control and LeprEYFPΔGHR mice (Figure 6A).
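In clamp studies of this design, hepatic glucose production under the clamp is obtained as the tracer-derived rate of glucose appearance minus the exogenous glucose infusion rate, and the suppression percentages quoted above are the fractional fall from the basal rate. A minimal sketch of that bookkeeping in Python, with hypothetical turnover values chosen only to mirror the reported 64% and 25% figures:

def hgp(glucose_appearance, infusion_rate):
    # Endogenous glucose production = tracer-derived Ra minus exogenous
    # glucose infusion rate (all rates in mg/kg/min).
    return glucose_appearance - infusion_rate

def suppression_pct(basal_hgp, clamp_hgp):
    return (1 - clamp_hgp / basal_hgp) * 100

# Hypothetical illustrative values; the paper reports only the resulting
# suppression percentages, not the underlying turnover rates.
basal = 15.0                     # basal Ra equals basal HGP (no infusion)
control_clamp = hgp(25.0, 19.6)  # clamp HGP = 5.4  -> ~64% suppression
ko_clamp = hgp(22.0, 10.75)      # clamp HGP = 11.25 -> ~25% suppression

print(f"control:  {suppression_pct(basal, control_clamp):.0f}% suppression")
print(f"knockout: {suppression_pct(basal, ko_clamp):.0f}% suppression")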
During the clamp, insulin-induced suppression of plasma FFA concentrations was weaker in LeprEYFPΔGHR mice (Figure 6A), suggesting an impairment in the ability of insulin to suppress lipolysis in these mice. Fasted triglyceride (TG) levels were not significantly different between control (GHRfl/fl: 107.6 ± 21.2 mg/dL) and LeprEYFPΔGHR (86.9 ± 12.35 mg/dL) mice (Suppl. Figure 7A), and expression of ACC1, FASN, SREBP-2, and SREBP-1c was unaltered (Suppl. Figure 7C). In contrast, fasted LDL levels were significantly increased in LeprEYFPΔGHR vs. control mice (GHRfl/fl: 28.86 ± 1.77, and LeprEYFPΔGHR: 41.02 ± 2.94 mg/dL) (Suppl. Figure 7B). To determine whether insulin regulation of hepatic expression of key gluconeogenic genes was intact, real-time PCR was performed in livers from fasted animals and following the hyperinsulinemic-euglycemic clamp. Clamp steady-state expression of glucose-6-phosphatase (G6Pase) and phosphoenolpyruvate carboxykinase 1 (Pck1) was significantly greater in the liver of LeprEYFPΔGHR mice than in control mice (p < 0.05) (Figure 6B). Prior to the hyperinsulinemic clamp procedure, the basal levels of G6Pase and Pck1 were comparable in both groups (data not shown).

Hepatic signaling in LeprEYFPΔGHR mice

To explore the molecular basis of the reduced hepatic insulin sensitivity, we next determined whether this effect was associated with impaired insulin signal transduction in the livers of LeprEYFPΔGHR mice. To test this possibility, we examined the phosphorylation levels of two key components of the insulin signaling pathway [34], insulin receptor substrate 1 (IRS-1) and protein kinase B (Akt), in liver after i.v. injection of insulin, as described previously [26]. As shown in Figure 7A, B, insulin-stimulated phosphorylation of IRS-1 was significantly attenuated in the liver of LeprEYFPΔGHR mice. Consistent with these results, insulin-stimulated Akt Ser473 phosphorylation was significantly reduced in the liver of LeprEYFPΔGHR mice as compared with control mice.

DISCUSSION

We have generated a new mouse model to dissect the role of CNS GHR signaling in LepRb expressing neurons. Our results identify for the first time a population of neurons responsible for the hypothalamic actions of GHR on hepatic glucose production (HGP). Specifically, loss of GHR in LepRb-expressing neurons of the ARH, DMH, and LHA impairs the ability of insulin to regulate HGP and peripheral lipid metabolism. Importantly, these effects are mediated via mechanisms that are independent of hormonal changes or body adiposity. Recent analysis of the distribution of GH-responsive cells revealed an abundance of GH-responsive neurons in hypothalamic nuclei involved in the control of metabolism, such as the ARH, VMH, and DMH, suggesting that central GH signaling might be involved in energy expenditure and glucose homeostasis through a central mechanism [10]. GH overexpression in the CNS results in hyperphagia-induced obesity, insulin resistance, and increased circulating GH levels [17]. Additional studies reported obesity after expressing GH and GH-releasing hormone in the CNS, which has been attributed to lowering of endogenous GH levels [35,36]. However, these animal models did not dissect the effect of GH signaling in specifically defined neurons in the CNS, since they utilized CNS-wide promoters and had the confounding effect of chronically altered GH levels.
In contrast, we show that specific deletion of GHR signaling in LepRb neurons impairs insulin's ability to suppress hepatic glucose production independent of changes in circulating IGF-1 or GH levels. Moreover, LepRb neuron-restricted inactivation of the GHR does not interfere with the regulation of food intake or energy homeostasis under normal conditions or under high-fat diet, thus allowing determination of the direct effects of central GHR signals on other tissues. Decreases in glucose levels are detected by glucose-sensing neurons that are found in several brain regions, including the VMH, the LHA, the ARH, and several hindbrain regions [37,38]. GH release contributes to glucose counter-regulation by shifting the metabolism of non-neural tissues away from glucose utilization [39]. Hypothalamic GH-releasing hormone (GHRH) neurons are glucose responsive, increasing activity in response to decreasing glucose levels [40]. However, the direct actions of GH are mediated through GHR [5], and very few GHRH neurons express GHR [41]. LepRb neurons regulate glucose homeostasis, and our data show that they co-express GHR in ARH, DMH, and LHA neurons. Lean LeprEYFPΔGHR mice are not GH deficient but are hyperglycemic after a glucose load and have impaired HGP and lipid metabolism. Thus, one may speculate that GHR in the hypothalamic LepRb neuronal subsets of the ARH, DMH, and LHA directly or indirectly facilitates insulin signaling. Cell-type specific ablation of GHR will be necessary to unravel the functional significance of GHR-expressing neurons in controlling glucose metabolism independently of body weight. Hepatic gluconeogenesis is a major contributing factor to hyperglycemia [42], and GH is reported to enhance hepatic gluconeogenesis [43]. This mechanism cannot explain the impaired HGP in LeprEYFPΔGHR mice, since they have normal GH and IGF-1 levels. Consistent with the marked increase in HGP, insulin-induced suppression of the hepatic gluconeogenic genes G6Pase and Pck1 is blunted and hepatic insulin signaling reduced, indicating that increased hepatic gluconeogenesis and impaired insulin signaling contributed to the hyperglycemia provoked by deleting GHR from specific LepRb neuronal subsets. We cannot, however, rule out the possibility that reduced hepatic insulin signaling is a secondary effect [44], since LeprEYFPΔGHR mice are not insulin resistant. The ability of insulin to decrease plasma fatty acid concentrations during the hyperinsulinemic-euglycemic clamp was also reduced in LeprEYFPΔGHR mice, suggesting that insulin suppression of lipolysis was impaired, which might contribute to hepatic insulin resistance through direct or indirect generation of metabolites that alter the insulin-signaling cascade [45]. Increased plasma total cholesterol levels in LeprEYFPΔGHR mice can also be attributed to increased cholesterol uptake and export from the liver. Further studies will be necessary to determine the effects of GHR-LepRb neuronal subsets on lipid metabolism and their interaction with other hormones that regulate HGP. HGP can be stimulated by increased activity of the sympathetic input to the liver or decreased activity of the parasympathetic input to the liver [46,47]. Parasympathetic tone in rats might be physiologically relevant in controlling basal HGP [48]. Vagus nerve innervation is important in mediating brain insulin control of hepatic glucose homeostasis [49]. Proper functioning of the vagus nerve is important for production of GHRH and IGF-1 [50].
It is reasonable to hypothesize that vagus nerve innervation is involved in the central GHR signaling effect on glucose metabolism. The sympathetic nervous system (SNS) has been implicated in leptin actions [51] and in central insulin-mediated lipogenesis [52]. Additional studies will be necessary to delineate the roles of the two branches of the autonomic nervous system in central GHR-mediated HGP. It has been shown previously that hypothalamic leptin and insulin signaling are required for the inhibition of HGP [53-56]. Indeed, intracerebroventricular infusion of insulin or leptin in rodents can potently suppress hepatic glucose production, whereas antagonism of insulin or leptin signaling in the hypothalamus can impair the ability of peripheral insulin to suppress HGP [55,56]. Evidence suggests that NPY neurons are involved in this process [57]. Interestingly, increased CNS NPY signaling can modulate hepatic lipoprotein metabolism [58]. The majority of neuropeptide Y (NPY) neurons in the ARH co-express GHR and LepRb [59], and NPY neurons mediate the feedback effect of GH on the hypothalamus [13]. The functional significance of GH signals in NPY neurons remains somewhat undefined [60]. Thus, it is possible to speculate that NPY-LepRb neuronal circuitry is involved in the physiology of GHR responses and might regulate its function. Previous studies indicate that LepRb-DMH neurons strongly connect to the PVH [61], and the PVH regulates glucose homeostasis, probably via the sympathetic nervous system (SNS)-liver axis [62], suggesting a role for the DMH in glucose metabolism. Conversely, while it is unlikely that LepRb-LHA neurons mediate leptin's anti-diabetic actions in the regulation of glucose homeostasis [63,64], it can still be speculated that LepRb-LHA neurons provide an essential output for autonomic responses, since a significant subpopulation of LHA neurons are glucose inhibited [65], and their role in GH responses or GHR signaling is unclear. Future identification of these GHR-LepRb neuronal subsets will be required to understand how they are controlled differentially and to determine the respective roles of these neuronal populations and their hypothalamic circuitry in the control of glucose metabolism. Leptin signaling may be important in the maintenance of somatotropes, acting directly at the level of the pituitary [66]. Interestingly, somatotrope-specific Lepr knockouts showed reduced serum GH, increased fat mass, and impaired Stat3 signaling, demonstrating the importance of leptin in the direct regulation of somatotrope function [67]. We and others did not detect expression of Lepr-cre in the pituitary (data not shown), and the expression of GHR in the pituitaries of LeprEYFPΔGHR mice was intact. Furthermore, deletion of GHR from LepRb neurons had no effect on Stat3 signaling or serum GH levels. In support of this, growth and adiposity of LeprEYFPΔGHR mice were normal. GH secretion is consistently reduced in obesity [68]. The hyperinsulinemia associated with insulin resistance in obesity has been suggested to contribute to reduced GH secretion [32]. Obesity-induced leptin resistance and increased bioactive IGF-1 and FFA levels could suppress GH secretion from the pituitary by various mechanisms [41]. HFD-fed LeprEYFPΔGHR mice showed significantly higher glucose levels in response to an intraperitoneal glucose load as compared to control glucose-intolerant mice.
These data suggest that deletion of central GHR signaling in LepRb neurons might exacerbate the GH-resistant state that is associated with obesity and contribute to diet-induced obesity complications. In support, a recent study demonstrated that chronic, peripheral GH injections significantly improved glucose metabolism and reduced liver triacylglycerol content of normal HFD-fed mice, suggesting the effectiveness of GH therapy in the treatment of diet-induced obesity [69]. The activation of GHR induces Stat5 phosphorylation [33], and cells that exhibit pStat5 immunoreactivity after an acute GH stimulation are considered to be GH responsive [10]. A previous study demonstrated strong GH-induced pStat5-immunoreactive cells in the ARH, VMH, PVH, and some additional hypothalamic and extrahypothalamic areas [10,70]. We detected very few GH-induced pStat5-IR cells among LepRb neurons in the Lepr EYFPΔGHR mice compared to controls, indicating that GH signaling is impaired. Hypothalamic Stat5 can mediate the direct negative feedback effects of GH [71]. Neuronal deletion of Stat5 results in obesity, insulin resistance, and glucose intolerance [72]. LepRb-specific Stat5 KO mice have normal body weight regulation [73]; however, no data on the regulation of glucose metabolism were reported. Our data suggest that in specific neuronal populations, Stat5 signaling might be involved in GHR-mediated glucose homeostasis through a central mechanism. Overall, our findings define the physiological role of GHR signaling in distinct LepRb-expressing neuronal populations. We find no role for GHR in LepRb neurons in the regulation of food intake or body weight, but our data provide powerful genetic evidence for a direct role of central GHR signals in LepRb neurons in the regulation of glucose homeostasis and hepatic glucose production. GH treatment in obese type 2 diabetes patients decreases the amount of fat and improves insulin resistance [74]. We have identified LepRb-GHR neuronal populations as a crucial factor for the anti-diabetic actions of GH signaling. Understanding the molecular mechanisms operating in these neurons will yield new targets for treating obesity-driven metabolic diseases. Further identification, manipulation, and understanding of the function of specific GHR neuronal populations will be critical to address the physiological relevance of these findings for the development of metabolic diseases.

AUTHOR CONTRIBUTIONS

GC carried out the research and reviewed the manuscript. TL and MG performed research. JJK and MGM provided animal models and reviewed and revised the manuscript. NQ designed studies related to the hyperinsulinemic-euglycemic clamp, carried out this aspect of the research, and interpreted the results. DAS and CE designed parts of the study, interpreted the results, and reviewed and revised the manuscript. RAM assisted in study design, provided animal models, and reviewed and revised the manuscript. MS designed the study, carried out the research, analyzed the data, wrote the manuscript, and is responsible for the integrity of this work. All authors approved the final version of the manuscript.
Nonconventional Quantized Hall Resistances Obtained with $\nu = 2$ Equilibration in Epitaxial Graphene $p-n$ Junctions

We have demonstrated the millimeter-scale fabrication of monolayer epitaxial graphene $p-n$ junction devices using simple ultraviolet photolithography, thereby significantly reducing device processing time compared to that of electron beam lithography typically used for obtaining sharp junctions. This work presents measurements yielding nonconventional, fractional multiples of the typical quantized Hall resistance at $\nu=2$ ($R_H\approx 12906\ \Omega$) that take the form $\frac{a}{b}R_H$. Here, $a$ and $b$ have been observed to take on values such as 1, 2, 3, and 5 to form various coefficients of $R_H$. Additionally, we provide a framework for exploring future device configurations using the LTspice circuit simulator as a guide to understand the abundance of available fractions one may be able to measure. These results support the potential for drastically simplifying device processing time and may be used for many other two-dimensional materials.

INTRODUCTION

Graphene has been extensively studied as a result of its great electrical and optical properties. [1][2][3][4] Epitaxial graphene (EG) on silicon carbide (SiC), which can be grown on the centimeter scale and is one of the many methods of synthesizing graphene, exhibits properties that render it suitable for large-scale or high-current applications such as the continued development of quantized Hall resistance (QHR) standards. [5][6][7][8][9][10][11][12][13][14][15] Though modern-day standards using millimeter-scale EG have been shown to have long-term electrical stability in ambient conditions, 16 these devices are, in most cases, only able to output a single value of quantized resistance (the $\nu = 2$ plateau) to a degree of accuracy which warrants possible use in metrology. The corresponding value is $R_H = h/(2e^2) = R_K/2 \approx 12906.4\ \Omega$, where $h$ is Planck's constant, $e$ is the elementary charge, and $R_K$ is the von Klitzing constant. One milestone for graphene QHR standards would be the eventual accessibility of different resistance values that are well-quantized. One approach to reaching this goal includes creating quantum Hall arrays. [17][18][19] A major disadvantage to this approach is the requirement that many individual Hall bar devices be connected using a network of resistive interconnects, thereby increasing the total minimum device size and possibly lacking optimal contact resistances. The second approach involves building p-n junctions (pnJs) that operate in the quantum Hall regime, [20][21] as has been previously demonstrated in EG with lateral dimensions on the order of 100 μm. EG pnJs can be utilized to circumvent most of the technical difficulties resulting from the use of metallic contacts and multiple device interconnections. Research in developing materials for gating and preserving properties of large devices has seen limited success with amorphous boron nitride, 22 atomically-layered high-k dielectrics, [23][24][25][26] Parylene, [27][28][29] and hexagonal boron nitride, [30][31] whereas other materials have been more successful. 16,[32][33] For millimeter-scale constructions, one major issue was fabricating correspondingly large pnJs.
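As a quick numerical check of the resistance scale involved, the $\nu = 2$ value quoted above follows directly from the defining constants. The two-line sketch below is illustrative only, using SciPy's CODATA values:

```python
from scipy.constants import e, h  # elementary charge, Planck constant

R_K = h / e**2   # von Klitzing constant, ~25812.807 ohm
R_H = R_K / 2    # nu = 2 plateau value, h/(2 e^2)
print(f"R_H = {R_H:.1f} ohm")  # -> R_H = 12906.4 ohm
```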
One of the major challenges of mass producing such devices with more than one pnJ has been the required use of electron beam lithography, a costly and time-consuming technique, for the fabrication of junctions that are abrupt, with n-type and p-type regions separated by a width on the scale of several hundreds of nanometers or smaller. This scale is necessary to ensure that the pnJ is sharp enough for dissipationless equilibration of Landauer-Büttiker edge states. 20,34 Junctions with too large a width, when dealing with bipolar interfaces, may effectively become resistive from non-quantization due to charge carrier values being in the neighborhood of the Dirac point. In this work, we demonstrate how standard ultraviolet photolithography (UVP) and ZEP520A were used to build pnJs that have junction widths smaller than 200 nm on millimeter-scale EG devices. Quantum Hall transport measurements were performed and simulated for various p-n-p devices to verify expected behaviors of the longitudinal resistances in a two-junction device. 35 Furthermore, we use the LTspice circuit simulator [see notes] to examine the various rearrangements of the electric potential in the device when injecting current at up to three independent sites. We find that nonconventional fractions of the typical quantized Hall resistance, $\frac{a}{b}R_H$, can be measured, thus validating the simulations.

EG Growth and Device Fabrication

Details of the growth of high-quality epitaxial graphene can be found in Refs. 9

Raman spectroscopy

Raman spectroscopy was used to verify the behavior of the 2D (G') peak of the EG before and after the functionalization process and polymer photogating development. Spectra were collected with a Renishaw InVia micro-Raman spectrometer [see notes] using a 633 nm wavelength excitation laser source. The spot size was about 1 µm, the acquisition times were 30 s, the laser power was 1.7 mW, and the optical path included a 50× objective and a 1200 mm⁻¹ grating. Rectangular Raman maps were collected in a backscattering configuration with step sizes of 20 µm in a 5 by 3 raster-style grid. To avoid the effects of polymer interference, spectra were collected through the backside of the SiC chip. 38

LTspice simulations

The analog electronic circuit simulator LTspice was employed to predict the electrical behavior of the pnJ devices in several measurement configurations. 39

Verifying the charge configuration

An optical image of the EG device, fabricated into a Hall bar geometry and processed with Cr(CO)3 and ZEP520A to establish two pnJs, is shown in Figure 1 (a). The first and third regions separated by the UVP-obtained junctions were intended to be p-type regions, as indicated by the gray letters, whereas the n region is preserved by a thick S1813 photoresist spacer layer (red letter). Raman spectra of the device's 2D (G') peak were acquired and shown for the n and p regions immediately after transport measurements to verify the polarity of the regions. Since the thick photoresist layers prevented spectra from being acquired in the usual backscattering geometry, the setup was modified such that the excitation laser was shone through the backside of the SiC chip to enhance the quality of the 2D (G') peak. 38 For the data in Figure 1, the fitted peak positions were … cm⁻¹ ± 2.3 cm⁻¹ and 2664.9 cm⁻¹ ± 4.1 cm⁻¹, respectively, with corresponding full-widths at half-maximum of 79.1 cm⁻¹ ± 11.5 cm⁻¹ and 64.9 cm⁻¹ ± 9.3 cm⁻¹ (all uncertainties represent 1σ deviations).
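Peak positions and full-widths at half-maximum of this kind are obtained by fitting a line shape to each measured spectrum. The sketch below is a minimal illustration, not the paper's actual fitting procedure: the Lorentzian model, the synthetic spectrum standing in for a measured 2D (G') band, and all numbers in it are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, gamma, a, c):
    """Lorentzian line shape: center x0, FWHM gamma, amplitude a, offset c."""
    return a * (gamma / 2) ** 2 / ((x - x0) ** 2 + (gamma / 2) ** 2) + c

# Synthetic spectrum standing in for a 2D (G') band near 2665 cm^-1.
rng = np.random.default_rng(0)
x = np.linspace(2500, 2800, 600)
y = lorentzian(x, 2665.0, 65.0, 1000.0, 50.0) + rng.normal(0.0, 10.0, x.size)

# Initial guesses from the raw data, then a least-squares fit.
p0 = [x[np.argmax(y)], 60.0, y.max() - y.min(), y.min()]
popt, pcov = curve_fit(lorentzian, x, y, p0=p0)
perr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties on the parameters

print(f"peak position: {popt[0]:.1f} +/- {perr[0]:.1f} cm^-1")
print(f"FWHM:          {abs(popt[1]):.1f} +/- {perr[1]:.1f} cm^-1")
```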
Atomic force microscope (AFM) images, one of which is shown in Figure 1 (c), were used to determine the device's final thickness profile. The example profile in Figure 1 (d) was averaged over 1.1 μm and shows a height difference of about 1.4 μm between the n and p regions. What became evident was that the pnJ width was not guaranteed to be sharp enough for dissipationless edge state equilibration. Therefore, careful treatment and analysis of the device's charge configuration was required to assess the viability of pnJs created with UVP.

Assessing the quality of the charge configurations and pnJs

To assess device quality, the charge configuration of the device needed to be known and the width of the pnJs needed to be estimated. It is also important to approximate how the carrier densities in the regions change with exposure to 254 nm, 17 000 µW cm⁻² UV light (distinct from the UV light used in UVP), and this is primarily done by monitoring the longitudinal resistivity in all three regions of a p-n-p device during a room temperature exposure, with two polarities shown in the upper panel of Figure 2 (a). For the p region, the expected p-type doping mechanism resulting from the deposition of a ZEP520A layer on the whole device persists to the point where the carrier density crosses the Dirac point. This crossing is most evident during the room temperature UV exposure when the longitudinal resistivity of the device exhibits a similar value to when the exposure was started, but instead with a negative time derivative. The S1813 successfully prevents the n region from becoming a p region, as exhibited by the flat resistivity (and electron density). For varying distances between the device and the UV lamp, as well as how the devices behave after UV exposure and without functionalization, see the Supplementary Data. Though the idea of using ZEP520A as a dopant for EG has been demonstrated, 32 accessing the p region with that mechanism is challenging due to the intrinsic EG Fermi level pinning from the buffer layer below. 6 However, the reduction of the electron density from the order of 10¹³ cm⁻² to the order of 10¹⁰ cm⁻² by the presence of Cr(CO)3 considerably assists the p region to undergo its transition. 16 It should be briefly noted that the temporary dip in resistivity near t = 30 000 s arises from another competing process to shift the carrier density, namely that of the applied heat, which, as prescribed by other work, causes n-type doping in EG devices. 16 Transport measurements were performed at 4 K, allowing us to determine the low-temperature carrier densities and, via equation (1), to gauge whether an identical UV exposure will change the carrier density. For a device having a 300 nm-thick S1813 layer, the electron density remained the same. However, for a device having an approximate S1813 thickness of 41 nm, from reactive ion etching the S1813 layer, the n region changed by nearly 1 × 10¹⁰ cm⁻², but still had a region providing quantized Hall resistance.

Measuring nonconventional fractions of the ν = 2 Hall resistance

With the essential determination that UVP is a viable method for creating large-scale pnJs, we now look to verify expected electrical behavior in a p-n-p device. 35 Circuit simulations were also implemented to assist in predicting varied configurations, but we first started with the well-known case of injecting a current along the length of the device and measuring various resistances, Hall and otherwise. This verification procedure is presented in Figure 3.
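Numerically, a circuit simulation of this kind reduces to solving Kirchhoff's node equations for a network of elements. The sketch below shows that machinery on a toy network built purely of $R_H$ resistors; it is an illustration only — ordinary resistors rather than the quantum Hall elements used in LTspice, and not the device topology of this work — but it makes plain how fractional coefficients $\frac{a}{b}R_H$ can emerge from a network.

```python
import numpy as np

R_H = 12906.4  # ohm, the nu = 2 Hall resistance h/(2 e^2)

def two_terminal_resistance(edges, n_nodes, src, gnd):
    """Solve Kirchhoff's node equations G V = I for a resistor network
    and return the two-terminal resistance between src and gnd.
    edges: list of (node_i, node_j, resistance)."""
    G = np.zeros((n_nodes, n_nodes))
    for i, j, r in edges:
        g = 1.0 / r
        G[i, i] += g; G[j, j] += g
        G[i, j] -= g; G[j, i] -= g
    I = np.zeros(n_nodes)
    I[src] = 1.0  # inject 1 A at the source node
    keep = [k for k in range(n_nodes) if k != gnd]  # ground the sink node
    V = np.zeros(n_nodes)
    V[keep] = np.linalg.solve(G[np.ix_(keep, keep)], I[keep])
    return V[src] - V[gnd]

# Toy network: two R_H in series, followed by two R_H in parallel,
# giving (2 + 1/2) R_H = (5/2) R_H -- one way a fraction a/b can arise.
edges = [(0, 1, R_H), (1, 2, R_H), (2, 3, R_H), (2, 3, R_H)]
R = two_terminal_resistance(edges, n_nodes=4, src=0, gnd=3)
print(R / R_H)  # -> 2.5
```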
The device and its corresponding circuit simulation model are shown in Figure 3. Equipotential lines are drawn in red, orange, lavender, and blue to clarify how the potential is expected to behave in the shown orientation (Figure 4 (b)). Currents were injected at up to three distinct sites on the device, with all currents summing to 1 μA. For general notation, assume that I1, I2, and I3 are positive if they flow from the top to the bottom of the device. If a current is negative, assume that its source has been applied to the bottom of a region, with flow to the top, much like I3 pictured in Figure 4 (a). If one of the three currents is zero, then only two branches are utilized. After the measurement, the overall device resistance is determined from the ratio of the measured voltage to the total injected current. 16 When every region of a pnJ device displays quantized Hall resistances, but has a different carrier concentration and polarity, the measured resistivity across one or several sets of pnJs depends on Landauer-Büttiker edge state equilibration at the junction. [48][49][50] In the case of EG, 18 where the Fermi level is typically pinned due to the buffer layer, and where the carrier densities take on values on the order of 10¹¹ cm⁻², $\nu = 2$ equilibration becomes most relevant, 6 unlike exfoliated graphene p-n-p devices. 51 With this knowledge, one can construct devices with more regions having opposite polarity, like the one shown in Figure 5 (a). Though it can be speculated that these atypical fractions arise from the redistribution of the electric potential throughout the device, this phenomenon is not intuitive with multiple currents in the quantum Hall regime. Because of this difficulty, using a circuit simulator including quantum Hall elements becomes vital for predicting which fractions of $R_H$ are measurable while each region displays the $\nu = 2$ plateau.

CONCLUSION

To conclude, $\nu = 2$ equilibration was achieved in millimeter-scale pnJ devices using only standard ultraviolet photolithography, with junction widths being on the order of 200 nm. Though one group has used a similar process for terahertz applications, they neither reported measured transport properties nor analyzed the reliability of their junction width. 52

Notes

Commercial equipment, instruments, and materials are identified in this paper in order to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology or the United States government, nor is it intended to imply that the materials or equipment identified are necessarily the best available for the purpose. The authors declare no competing interests.

Funding Sources

All work performed as part of the duties of employees of the United States Government, along with its associated and guest researchers.

Supplementary figure captions (fragments): … demonstrates the necessity of using functionalization to successfully create a pnJ. ∆ as a function of …, while keeping the S1813 thickness fixed at zero; these trends are shown to elucidate the heavier influence of the final carrier density in EG on ∆ than that of the S1813 thickness.

3. Spacer layer behavior

Figure S7. The Hall resistance and electron density were checked for three thicknesses of the S1813, two of which are indicated in the main text as a sufficient spacer to ensure the stability of the n region.
To approximate the size of the pnJ width, a spacer layer thickness of 42.4 nm was measured with atomic force microscopy, and when that device was exposed to UV light, the regions intended to be maintained as n-type experienced a slight change in electron density. Based on the Hall measurements above, the electron density dropped from about 1.7 × 10¹⁰ cm⁻² to 0.98 × 10¹⁰ cm⁻². The Hall plateau after UV exposure (in black) was still quantized at 9 T, suggesting that the pnJ width had an approximate upper bound of 200 nm (when cross-referencing data from AFM profiles in Figure 2 of the main text).

Modeling and simulation details

Figure S8. The longitudinal resistivity of the device with the 42.4 nm-thick S1813 was monitored at room temperature between the two Hall resistance measurements in Supplementary Figure 4. The very small change observed in the steady state corresponds well with the small changes observed in the electron density.
From Personal Care to Coastal Concerns: Investigating Polyethylene Glycol Impact on Mussel's Antioxidant, Physiological, and Cellular Responses

Pharmaceutical and personal care products (PPCPs) containing persistent and potentially hazardous substances have garnered attention for their ubiquitous presence in natural environments. This study investigated the impact of polyethylene glycol (PEG), a common PPCP component, on Mytilus galloprovincialis. Mussels were subjected to two PEG concentrations (E1: 0.1 mg/L and E2: 10 mg/L) over 14 days. Oxidative stress markers in both gills and digestive glands were evaluated; cytotoxicity assays were performed on haemolymph and digestive gland cells. Additionally, cell volume regulation (RVD assay) was investigated to assess physiological PEG-induced alterations. In the gills, PEG reduced superoxide dismutase (SOD) activity and increased lipid peroxidation (LPO) at E1. In the digestive gland, only LPO was influenced, while SOD activity and oxidatively modified proteins (OMPs) were unaltered. A significant decrease in cell viability was observed, particularly at E2. Additionally, the RVD assay revealed disruptions in the cells subjected to E2. These findings underscore the effects of PEG exposure on M. galloprovincialis. They are open to further investigations to clarify the environmental implications of PPCPs and the possibility of exploring safer alternatives.

Introduction

Emerging contaminants (ECs) are gaining particular attention due to their increasing prevalence in natural ecosystems, driving the interest in understanding their contribution to environmental pollution [1-4]. Especially in today's post-pandemic period, the extent of COVID-19, the strategies employed to manage the disease, and the treatments administered to patients have led to significant changes in the production and use of personal protective equipment, pharmaceuticals, personal care/hygiene products, and disinfectants, with a consequent impact on their release and distribution in the environment [5-7]. Being pseudo-persistent pollutants, pharmaceutical and personal care products (PPCPs) are hardly or even not removed by wastewater treatment systems [8,9]. Particularly, personal care products (PCPs) encompass a wide range of items intended for personal hygiene, grooming, and cosmetic purposes. PPCP components can vary depending on their use and formulation. Common ingredients found in PPCPs include surfactants, emollients, preservatives, fragrances, UV filters, etc. [10-12]. Some of these components exhibit characteristics such as persistence and bioaccumulation, posing potential health risks to the environment and living organisms [8,13]. Polyethylene glycols (PEGs) are polymers increasingly incorporated into a wide variety of PPCPs. Additionally, PEG serves as a stabiliser for lipid nanoparticles in mRNA vaccines, such as the Pfizer/BioNTech one. In this context, Sellaturay et al. [14] demonstrated that a PEG allergy can lead to anaphylaxis following vaccination. Although PEG is included as an excipient in many drug formulations to improve their pharmacokinetic properties [15], the likelihood of hypersensitive reactions may vary according to its molecular weight, and, although rare, allergic responses to PEGs can be severe. The global production of PEG reaches millions of tons annually [16]. Although precise data on the presence of PEG and its derivatives in surface water are lacking, Traverso-Soto et al.
[17] stated that concentrations in wastewater may exceed 1 mg/L. Consequently, a notable quantity of PEG in surface waters has been observed to impact the bioavailability and toxicity of other environmental pollutants. Studies generally indicate a low toxicity of PEG in organisms [18-20]. Nevertheless, instances of nephrotoxicity [15] and reports of adverse effects, such as damage to the central nervous system, heart, and lungs, as well as kidney failure, have been documented in ethylene glycol-treated subjects [21]. In addition, in common carp (Cyprinus carpio), Hatami et al. [22] observed disruptions in cell membranes after PEG exposure. For the reasons mentioned above, the main purpose of the present study was to further investigate PEG-induced toxicity in marine non-target organisms, since aquatic ecosystems represent one of the major sinks of ECs. The Mediterranean mussel (Mytilus galloprovincialis) has been chosen as a model due to its biological features, such as widespread distribution, bioindicator capabilities, filter-feeding mechanism, resilience, etc., as well as for its suitable response to anthropogenic pressures [23-25]. The effects of PEG exposure were assessed by examining the potential physiological, antioxidant, and cellular responses in the mussels' haemolymph, gills, and digestive gland (DG), as they represent the first line of defence (haemocytes and gills) and the primary detoxifying organ (DG). Potential alterations were analysed by investigating the viability of the haemolymph and DG cells, as well as the ability of hepatocytes to regulate their cellular volume. In aquatic organisms, the response of the antioxidant system is crucial for evaluating the effects of xenobiotics [26]. In this context, biochemical enzymatic and non-enzymatic parameters are widely used as suitable endpoints of toxicity [27]. Consequently, in both gills and DGs, superoxide dismutase (SOD) activity, lipid peroxidation, and carbonylated protein content have been evaluated. The results of the present study may provide comprehensive knowledge of the adverse effects of PEGs on marine bivalve molluscs, as well as useful indications to further analyse a wider range of damage caused by the persistence of PEGs to aquatic communities, finally providing useful information on the correct use of PEG-based compounds.

Experimental Design

The toxicity test was performed on one hundred fifty specimens of M. galloprovincialis, purchased from a commercial mussel farm, FARAU S.r.l., Frutti di Mare, located in the "Lago Faro" in the reserve of "Capo Peloro Lagoon" (Messina, Italy). M. galloprovincialis samples, with a mean length of 5.5 ± 0.2 cm (expressed as the mean length ± S.E.), were promptly transported to the Laboratory of Animals Ecophysiology at the University of Messina. The specimens were randomly divided into six aquariums containing 20 L of brackish, filtered lake water provided by the same mussel farm. Before the start of the experiment, the samples were acclimated for a week with continuous oxygen aeration and a water change every two days. The concentration of PEG was measured before and after every change of the water.
The animals were then divided into three groups (ctrl: 0 mg/L, PEG1: 0.1 mg/L, and PEG2: 10 mg/L) in duplicate. The experimental design is illustrated in Table 1. Throughout the 14-day experimental time, the animals were kept under standard conditions of salinity (3.4 ± 0.2%), pH (7.6 ± 0.01), and temperature (18.0 ± 0.2 °C). The daily photoperiod was maintained at 12 h of light and 12 h of darkness. The water was changed three times per week with brackish water enriched with nutrients, and continuous aeration was kept. PEG (CAS number 25322-68-3) was purchased from Sigma-Aldrich (Prague, Czech Republic). The chemical comes in powder form (99% purity) with a size of 8000 and was dissolved in brackish lake water without using any solvent. The size was selected following the results obtained by Zicarelli et al. [28]. Due to a lack of information about the exact concentration in the environment, the concentrations used in the experiments were chosen based on previous studies on PEG [17,22,28].

Collection of Samples and Cell Viability Assay

The haemolymph samples were collected from the adductor muscle of four randomly selected animals by using a syringe with a 5 cm needle. DG cells were isolated using the method described in detail by Impellitteri et al. [23]. The haemolymph and DG cells were used to evaluate cell viability. Two different colorimetric tests were used to assess cell viability: the Neutral Red retention assay (NR) and the Trypan Blue exclusion method (TB). In the NR assay, the haemolymph and DG cells were treated with a 0.8% NR solution diluted 1:1000 and kept incubated with the samples for 10 min, following the protocol described by Moore et al. [29]. The cells were considered viable if their lysosomal membranes, observed under a light microscope at 40× magnification, remained intact and red-coloured. In the TB assay, the cells were incubated in a 1:1 solution of TB at 4% for 5 min, and viability was calculated using the formula described by Tresnakova et al. [30]. Since TB is unable to penetrate the membranes of living cells, all blue-stained cells were considered non-viable. The tests were conducted using Bürker and Neubauer chambers purchased from Fisher Scientific (Hampton, NH, USA).

Regulation Volume Decrease (RVD)

The RVD test was performed by placing a drop of the DG cell samples on a slide treated with polylysine to facilitate cell adhesion. The samples were then observed using a Carl Zeiss Axioskop 20 (Wetzlar, Germany) light microscope at 100× magnification connected to a Canon 550D digital camera. For the RVD analyses, the samples were washed with an isotonic solution (1100 mOsm) containing NaCl 550 mM, KCl 12.5 mM, MgSO4 8 mM, CaCl2 4 mM, glucose 10 mM, MgCl2 40 mM, and HEPES 20 mM. Three photos were taken in succession; then, the slide was washed with a hypotonic solution (800 mOsm) containing NaCl 350 mM, KCl 12.5 mM, MgSO4 8 mM, CaCl2 4 mM, glucose 10 mM, MgCl2 40 mM, and HEPES 20 mM. Seventeen photos were taken in total: ten photos were taken over ten minutes (one per minute, from IPO1 to IPO10), and four photos were taken over the following twenty minutes (one photo every five minutes, from IPO11 to IPO14). Fifteen cells were selected from each experimental group, and the cell area was calculated using the ImageJ software, Version 1.54i (NIH, Bethesda, MD, USA), for comparison between the cell area of the control group and that of the exposed animals.
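A minimal sketch of the per-cell analysis this acquisition scheme supports is given below. All numbers are hypothetical placeholders for ImageJ area measurements, and the 2% recovery criterion is an illustrative threshold, not one defined in this study.

```python
import numpy as np

# Hypothetical cell areas (um^2) for one cell: three isotonic frames,
# then the fourteen hypotonic frames IPO1-IPO14 described above.
iso = np.array([412.0, 409.5, 410.8])
ipo = np.array([418.0, 430.2, 441.5, 452.9, 455.1, 450.3, 444.0,
                437.2, 431.8, 427.5, 422.0, 417.3, 413.9, 411.6])

baseline = iso.mean()                 # mean area in isotonic solution
relative = 100.0 * ipo / baseline     # area as % of the isotonic baseline

peak = int(np.argmax(relative))       # frame of maximum swelling
swelling = relative[peak] - 100.0     # % increase over baseline at the peak
recovered = abs(relative[-1] - 100.0) < 2.0  # illustrative 2% criterion

print(f"peak at IPO{peak + 1}: +{swelling:.1f}% (recovered: {recovered})")
```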
Biochemical Parameters

The electrolytes present in the haemolymph and the aquarium water were evaluated for each experimental group using the multi-parametric analyser KONELAB 60 THERMO (Milan, Italy). The percentages of Na⁺, K⁺, Cl⁻, P, Mg²⁺, and Ca²⁺ were measured. Additionally, haemolymph lactate dehydrogenase (LDH) was used as a marker of cell damage.

Oxidative Markers

The evaluation of oxidative stress biomarkers was performed as described by Filice et al. [31]. The gills and digestive glands (N = 6 for each condition) were homogenised in cold 100 mM Tris/HCl buffer (pH 7.2) (Sigma Aldrich, Milan, Italy) containing a mixture of protease inhibitors. An aliquot of the homogenate was used to determine lipid peroxidation; the remaining part was centrifuged at 5000× g for 5 min at 4 °C, and the supernatant was used to analyse both protein oxidation and superoxide dismutase (SOD) activity. Protein concentration in the supernatant was determined according to the Bradford method by using a commercial kit (Bio-Rad Laboratories, Hercules, CA, USA) and bovine serum albumin (BSA) as a standard.

Protein Oxidation

The oxidatively modified protein (OMP) levels were evaluated by measuring the carbonyl group content through the traditional 2,4-dinitrophenylhydrazine (DNPH) method described by Levine et al. [33]. Aliquots of the supernatant were incubated at room temperature for 1 h with 10 mM DNPH (Sigma Aldrich, Milan, Italy) in 2 M HCl and then precipitated with 2 volumes of TCA. The solution was centrifuged for 20 min at 7000 rpm; the pellet was washed thrice with ethanol-ethyl acetate (1:1; v/v) to remove the DNPH excess and then dissolved in 6 M guanidine in 2 N HCl. The concentration of carbonyl groups was measured spectrophotometrically at 370 nm (aldehydic derivatives) and at 430 nm (ketonic derivatives) using the extinction coefficient of 22 000 M⁻¹ cm⁻¹. The results were expressed as nmol per mg protein.

SOD Activity

The SOD activity was determined by the pyrogallol method of Marklund and Marklund [34], modified by Filice et al. [35]. In brief, the inhibitory effect of SOD on the auto-oxidation of pyrogallol at pH 8.20 was assayed spectrophotometrically at 420 nm and 25 °C. The reaction was run in 50 mM Tris-HCl, 1 mM EDTA, and 0.2 mM pyrogallol (Thermo Fisher Scientific Inc., Milan, Italy) and monitored every 30 s for 5 min. One unit of SOD activity was defined as the amount of the enzyme that inhibits 50% of pyrogallol auto-oxidation. The results were expressed in U/mg protein.

Data Analyses

After checking the normality and homogeneity of the data using the Kolmogorov-Smirnov and Levene tests, the analysis of variance (one-way ANOVA), followed by the Tukey post hoc test for comparisons, was applied to analyse the results. The statistical analyses were performed using the statistical software GraphPad Prism, Version 8.2.1 (GraphPad Software Ltd., La Jolla, CA, USA). Significant results were considered at a p-value < 0.05. The results are presented as mean ± standard error (S.E.).
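To make the statistical pipeline concrete, here is a minimal Python sketch of the same sequence of tests. It is illustrative rather than the analysis actually run: the viability values are hypothetical placeholders (the measured ones are reported in Tables 2 and 3), and scipy/statsmodels stand in for GraphPad Prism.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical viability values (%) for the three groups, n = 6 each.
ctrl = np.array([99.1, 98.8, 99.4, 98.9, 99.2, 99.0])
peg1 = np.array([98.0, 97.5, 98.4, 97.9, 98.2, 97.7])
peg2 = np.array([94.6, 93.8, 94.9, 94.1, 94.4, 93.9])

# Normality (Kolmogorov-Smirnov on standardized values) and homogeneity
# of variances (Levene), mirroring the checks described above.
for group in (ctrl, peg1, peg2):
    z = (group - group.mean()) / group.std(ddof=1)
    print(stats.kstest(z, 'norm'))
print(stats.levene(ctrl, peg1, peg2))

# One-way ANOVA followed by Tukey's post hoc comparisons (p < 0.05).
f_stat, p_val = stats.f_oneway(ctrl, peg1, peg2)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

values = np.concatenate([ctrl, peg1, peg2])
labels = ['ctrl'] * 6 + ['PEG1'] * 6 + ['PEG2'] * 6
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```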
Cell Viability

Table 2 shows the haemocyte viability. The TB exclusion test revealed significant differences in the cells from the PEG2-treated animals (94.3%) compared to the control (99%, p < 0.05). However, no significant differences were found between the control and treated animals when the NR retention test was used. TB staining showed that DG cell viability was significantly reduced in the PEG2-treated animals (93.3%) compared to the control (98.7%, p < 0.01). When tested with the NR assay, the DG cells exposed to PEG1 (97.3%) showed a significant reduction in viability compared to the control (99.4%, p < 0.05). The results are summarised in Table 3.

Regulation Volume Decrease

Figure 1 shows the results of the 14-day exposure on the DG cells. No significant differences were observed in PEG1 and PEG2 compared to the control. In the control groups, the cells reached maximum swelling at IPO4 and showed the ability to restore their initial volume. However, the cells exposed to PEG1 reached the peak of swelling around IPO5, and the cells exposed to PEG2 reached their maximum swelling around IPO6. Nevertheless, the cells belonging to the PEG2 group showed an inability to return to their original volume. Although the cells of PEG1 and PEG2 took longer to reach their peak compared to the control, the cells exposed to PEG1 increased their volume by 2.3% more than the control cells, and those exposed to PEG2 increased their volume by approximately 8% more than the control cells.

Biochemical Parameters

The biochemical parameters of the haemolymph did not show any significant differences between the control and experimental groups. However, as shown in Table 4, the values of Na⁺, K⁺, Cl⁻, and Ca²⁺ increased in the experimental groups. The amount of electrolytes dissolved in the water did not show significant differences among the groups. In particular, no significant changes in the values of Na⁺, K⁺, Cl⁻, Mg²⁺, and Ca²⁺ were observed in the water of either experimental group. The results are summarised in Table 5.
Oxidative Status Evaluation

The activity of the SOD enzyme and the levels of LPO and OMP, as indexes of the oxidative status, were measured in the gills and DG of M. galloprovincialis exposed to PEG. In the gills, the activity of SOD was significantly reduced in the mussels exposed to the low concentration of PEG, while it showed values similar to the control group in the animals exposed to the highest concentration tested. Compared to the control group, LPO, measured in terms of TBARS levels, increased in response to PEG exposure at both the concentrations tested. However, at the E2 concentration, the TBARS levels were lower than those at E1. No significant differences in either aldehydic or ketonic derivatives (OMP) were observed in the three experimental conditions (Figure 2). In the DG, exposure to PEG only affected LPO. Indeed, a significant increase in TBARS levels was observed in the mussels exposed to the lowest concentration tested. No differences in either SOD activity or OMP levels were observed among the groups (Figure 3).
galloprovincialis, culminating in a notable decrease in the vitality of both haemocytes and hepatocytes.These cell populations play key roles in mussel's physiological processes, including immune response, metabolism, and detoxification [40,41].The observed decline in cell viability is evident in the NR and TB assays.The results related to the NR retention assay suggest the impairment of lysosomal membranes.Lysosomes, integral to the organism's defence mechanisms, are crucial for maintaining cellular homeostasis and responding to external stressors [30,38].Indeed, mussels exhibit remarkable resilience in coping with harsh and contaminated environments, partly owing to their ability to regulate lysosomal activity.Lysosomes, through their autophagic function, serve as the organism's initial line of defence, enabling the degradation and recycling of cellular components to mitigate the impact of xenobiotics and pathogens [30,38].Therefore, any deviation from the normal regulatory mechanisms of lysosomal activity serves as a sensitive indicator of cytotoxicity, reflecting the organism's compromised ability to maintain cellular homeostasis [2,8].In addition, under physiological conditions, healthy cell membranes do not allow TB to enter the cell.In contrast, in the present study, the significant results of TB exclusion in the cell membrane of both the haemocytes and DG cells suggest the likelihood of the impairment of cell membrane integrity as a consequence of exposure to the highest concentration of PEG (10 mg/L).Interestingly, our findings resonate with those reported by Hatami et al. [22], who Discussion To investigate the toxicity of potentially toxic compounds contained in PPCP formulations, such as PEG, in aquatic ecosystems, M. galloprovincialis has proven to be a suitable sentinel organism [24,36,37] and the use of haemolymph, DG cells, biochemical parameters, and oxidative stress resulting in being suitable endpoints for toxicological investigations [38,39].Our investigation showed a significant interaction between PEG and M. 
galloprovincialis, culminating in a notable decrease in the vitality of both haemocytes and hepatocytes.These cell populations play key roles in mussel's physiological processes, including immune response, metabolism, and detoxification [40,41].The observed decline in cell viability is evident in the NR and TB assays.The results related to the NR retention assay suggest the impairment of lysosomal membranes.Lysosomes, integral to the organism's defence mechanisms, are crucial for maintaining cellular homeostasis and responding to external stressors [30,38].Indeed, mussels exhibit remarkable resilience in coping with harsh and contaminated environments, partly owing to their ability to regulate lysosomal activity.Lysosomes, through their autophagic function, serve as the organism's initial line of defence, enabling the degradation and recycling of cellular components to mitigate the impact of xenobiotics and pathogens [30,38].Therefore, any deviation from the normal regulatory mechanisms of lysosomal activity serves as a sensitive indicator of cytotoxicity, reflecting the organism's compromised ability to maintain cellular homeostasis [2,8].In addition, under physiological conditions, healthy cell membranes do not allow TB to enter the cell.In contrast, in the present study, the significant results of TB exclusion in the cell membrane of both the haemocytes and DG cells suggest the likelihood of the impairment of cell membrane integrity as a consequence of exposure to the highest concentration of PEG (10 mg/L).Interestingly, our findings resonate with those reported by Hatami et al. [22], who observed similar disruptions in cell membranes in common carp (Cyprinus carpio) exposed to PEG at a concentration of 10 mg/L. In the present study, the M. galloprovincialis specimens were found to be affected by exposure to different concentrations of PEG.These findings highlighted the negative interaction between PEG and DG cells since this product impairs cell ability to regulate the volume.The ability of DG cells to regulate their volume when subjected to osmotic changes in physiological conditions has been widely investigated [42,43].In this context, the RVD analysis highlights how this mechanism could be altered by long-term exposure to xenobiotics.Our findings are in line with previous studies conducted on mussels exposed to chemicals involved in PPCP production [23,30,38].The loss of the ability to regulate cell volume is a sign of alteration in the cytoskeleton, damage to the protein channels that alters the physiological function of cells, and cell membranes [44,45]. In the present study, electrolyte evaluation showed that the ion content of the haemolymph was slightly affected by PEG, although the changes were not statistically significant.Further analyses, carried out at different and longer exposure times, are needed to better clarify this aspect.Our data are of relevance in relation to mussels' osmoregulatory capabilities that allow them to regulate internal osmolarity in relation to the surrounding aquatic environment [23].In general, shifts in electrolyte compositions serve as indicators of mussel's health status.In fact, haemolymph contains essential constituents such as Cl − , Na + , K + , 2+ , Mg 2+ , etc., which play roles in diverse physiological functions, including metabolism, enzymatic activities, shell development, osmoregulation, and the maintenance of the organism's internal balance [30]. 
In aquatic species, exposure to pollutants results in an alteration of oxidative homeostasis and a consequent increase in the production of reactive oxygen species (ROS). ROS can easily interact with macromolecules (i.e., DNA, proteins, and lipids), leading to structural and functional changes that can be detrimental to animal fitness [46,47]. In this context, the analysis of well-established biological markers represents a valuable tool for assessing the severity of these events. Among others, the activity of antioxidant enzymes and the levels of oxidation products (TBARS and OMP) are typically used to detect changes in the oxidative status of the whole organism and of specific target tissues [31,45,48,49]. A PPCP-dependent modulation of oxidative biomarkers has been reported in several aquatic species, including Carassius auratus [50] and Oreochromis niloticus [51]. Data about the influence of PEG on the oxidative status of aquatic animals are scarce. The few available data mainly refer to C. carpio, in which 21 days of exposure to 5 and 10 mg/L of PEG affected neither acetylcholinesterase (AChE) activity in plasma, nor CAT activity or lipid peroxidation in the liver [22]. In our study, we observed that 14 days of exposure to 10 mg/L of PEG did not affect the oxidative status of M. galloprovincialis in the DG and gill tissues, with the exception of a slight but significant increase in lipid peroxidation in the gills. On the contrary, exposure to the lowest PEG concentration (0.1 mg/L) negatively affected the activity of the SOD enzyme in the gills, while increasing lipid peroxidation in both tissues. These data are preliminary and require that other enzymes involved in the antioxidant response be investigated to fully describe the mussel's oxidative status in the presence of PEG. However, the information obtained on SOD (in the gills) and on the levels of oxidation products (in both gills and DG) suggests that in M. galloprovincialis, a low concentration of PEG may inhibit the antioxidant defence system. This, by limiting the capacity to eliminate ROS, may result in lipid oxidative damage. The tissue-specific response that we observed is not surprising, since in a mussel the gills represent the first tissue to be exposed to water contaminants. We cannot provide a conclusive explanation concerning the absence of effects observed on SOD activity at the highest PEG concentration, particularly in the DG. It is possible that the activation of other mechanisms of protection against ROS contributes to preserving the redox balance. Although caution is needed when comparing the effects of different chemicals, it is intriguing that a similar concentration-dependent behaviour has been observed in M. galloprovincialis following exposure to other PCP components. This is the case for Sodium Lauryl Sulphate, which was found to affect the activity of antioxidant enzymes only at the lowest concentrations tested [52].

Conclusions

In light of the increasing prevalence of emerging contaminants like PEG in aquatic environments, understanding their toxicological effects on marine organisms is imperative.
Figure 1. Regulation of volume decrease (RVD) in the digestive gland cells of M. galloprovincialis (n = 6) exposed to two different concentrations of PEG for 14 days. Rhombuses (♦) represent the control (0 mg/L), squares (■) represent PEG1 (0.1 mg/L), and triangles (▲) represent PEG2 (10 mg/L). The values are the mean ± S.E. of the selected cells (n = 15). The analyses were made using a one-way ANOVA test; no significant differences were highlighted.

Figure 2. SOD activity, TBARS, and OMP in the gills of M. galloprovincialis exposed to PEG. The data are expressed as mean ± S.E. of absolute values (n = 6) of the individual experiments performed in duplicate. Statistics were assessed by one-way ANOVA followed by Tukey's multiple comparison test (p < 0.05; * CTRL vs. PEG1 or PEG2; ^ PEG1 vs. PEG2).

Figure 3. SOD activity, TBARS, and OMP in the DG of M. galloprovincialis exposed to PEG. The data are expressed as mean ± S.E. of absolute values (n = 6) of the individual experiments performed in duplicate. Statistics were assessed by one-way ANOVA followed by Tukey's multiple comparison test (p < 0.05; * CTRL vs. PEG1; ^ PEG1 vs. PEG2).

Table 2. Viability of the haemolymph cells of M. galloprovincialis (n = 6) exposed for 14 days to PEG. The tests conducted were the TB exclusion method and the NR retention assay. The results are expressed as the percentage of the mean ± S.E. from 10 slides. Significant differences compared to the control are indicated by ** p < 0.01. The analyses were performed using a one-way ANOVA test.

Table 3. Viability of the DG cells of M. galloprovincialis (n = 6) exposed for 14 days to PEG. The tests conducted were the TB exclusion method and the NR retention assay. The results are expressed as the percentage of the mean ± S.E. from 10 slides. Significant differences compared to the control are indicated by * p < 0.05 and ** p < 0.01. The analyses were performed using a one-way ANOVA test.

Table 4. Biochemical characteristics of the haemolymph of M. galloprovincialis (n = 6) exposed to two different concentrations of PEG (0.1 mg/L and 10 mg/L) for 14 days. The results are expressed as mean ± S.E. The analyses were performed using a one-way ANOVA test. No statistical differences have been observed.

Table 5. Biochemical parameters of the water from the control, PEG1, and PEG2 experimental tanks. The results are expressed as mean ± S.E. The analyses were performed using a one-way ANOVA test.
A residual-based deep learning approach for ghost imaging

Ghost imaging using deep learning (GIDL) is a kind of computational quantum imaging method devised to improve the imaging efficiency. However, among most proposals of GIDL so far, the same set of random patterns were used in both the training and test set, leading to a decrease of the generalization ability of networks. Thus, the GIDL technique can only reconstruct the profile of the image of the object, losing most of the details. Here we optimize the simulation algorithm of ghost imaging (GI) by introducing the concept of "batch" into the pre-processing stage. It can significantly reduce the data acquisition time and create reliable simulation data. The generalization ability of GIDL has been appreciably enhanced. Furthermore, we develop a residual-based framework for the GI system, namely the double residual U-Net (DRU-Net). The imaging quality of GI has been tripled in the evaluation of the structural similarity index by our proposed DRU-Net.

GIDL usually takes a large number of images of the object as ground truth, and then they are multiplied with a series of random patterns to create corresponding GI images as the network inputs. However, among most GIDLs proposed so far, the same set of specific random patterns is used unchangingly in both the training and test set. This drawback may lead to unsatisfactory performance in practical applications with weakened generalization ability 29. Here we optimize the simulation algorithm of GI with batch processing of different random patterns to create datasets similar to the real physical condition. We apply this optimized simulation algorithm in DNNs for the GI scenario, and the generalization ability of existing GIDL is enhanced obviously. In order to improve the imaging quality of GIDL, we make the DNN learn the residual image rather than the image of the object. We divide the residual of GI into two parts and further develop a double residual (DR) framework by learning two different kinds of residual images. Through combining the DR framework with the U-Net, we propose a new GIDL, namely the double residual U-Net (DRU-Net). The image quality has been significantly improved, with the SSIM index tripled.

Simulation

Optimized simulation algorithm with batch processing. The configuration for CGI is shown in Fig. 1. Here we simulate it on the computer. A series of random patterns is created and multiplied with the image of the object. The product is then summed up as a single pixel value to accomplish the bucket measurement. Images can be reconstructed by the correlation between the bucket measurement point intensity in the test beam and the random patterns in the reference beam. With I_m(x, y) the m-th random pattern and T(x, y) the object distribution, the bucket measurement function is given by

B_m = Σ_{x,y} I_m(x, y) T(x, y), and the GI image follows from the correlation T_GI(x, y) = ⟨(B_m − ⟨B_m⟩) I_m(x, y)⟩, (1)

where ⟨·⟩ denotes the average of the multiple measurements. The pseudothermal light random patterns are generated through the simulation to train the network 28. In the plain serial simulation algorithm, images are processed one by one to achieve the corresponding images of GI. And creating a different set of random patterns for each image will lead to huge input/output (I/O) overhead. Since DL usually needs datasets of thousands of images, it will take tens of hours to generate them with the plain serial simulation algorithm. In previous research, a general method has been to make use of the same set of random patterns to generate the datasets.
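To make the forward model concrete, the following is a minimal PyTorch sketch of a batched version of this simulation. It is written for illustration here rather than taken from the authors' code: the function name and the uniform random speckle are assumptions, while the per-batch pattern set, the bucket sums, the fluctuation correlation, and the min-max normalization follow the description in the text.

```python
import torch

def gi_simulate_batch(objects: torch.Tensor, n_patterns: int) -> torch.Tensor:
    """Simulate computational GI for one batch of images.

    objects: (B, H, W) tensor of object distributions T(x, y). A fresh
    set of random patterns is drawn for every batch, so different
    batches see different pattern sets. Returns the normalized T_GI.
    """
    B, H, W = objects.shape
    # One shared pattern set for this batch: (M, H, W) speckle patterns.
    patterns = torch.rand(n_patterns, H, W, device=objects.device)
    # Bucket values B_m = sum_{x,y} I_m(x, y) T(x, y), per image: (B, M).
    buckets = torch.einsum('mhw,bhw->bm', patterns, objects)
    # Correlation of Eq. (1): T_GI = <(B_m - <B_m>) I_m(x, y)>.
    fluct = buckets - buckets.mean(dim=1, keepdim=True)
    t_gi = torch.einsum('bm,mhw->bhw', fluct, patterns) / n_patterns
    # Min-max normalize each image to [0, 1] before feeding a network.
    lo = t_gi.amin(dim=(1, 2), keepdim=True)
    hi = t_gi.amax(dim=(1, 2), keepdim=True)
    return (t_gi - lo) / (hi - lo + 1e-12)

# A batch of 256 images of 128 x 128 pixels, sampled with 4000 patterns
# (a sampling rate of roughly 4000/128^2 = 24.4%).
objects = torch.rand(256, 128, 128)
gi_images = gi_simulate_batch(objects, n_patterns=4000)
```

Reusing one fixed `patterns` tensor for every batch would reproduce the single-pattern-set practice just mentioned.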
Reusing a single pattern set can reduce both the random pattern generating time and I/O overhead, at the cost of the reliability of the data and the network's generalization ability. Here we optimize the simulation with batch processing to create more reliable data with different sets of random patterns. We introduce the concept of batch in stochastic gradient descent (SGD) 30 into the pre-processing stage to make full use of the Compute Unified Device Architecture (CUDA). Images are divided into batches, and those in the same batch are combined into a 3-dimensional (3D) array. Each 3D array is taken as the smallest operation unit, and a different set of random patterns is created for each specific 3D array. Thus, images in the same batch share the same set of random patterns, whereas different sets of random patterns are used among batches. The GI images are obtained from the dot product of the 3D object distributions and their corresponding random patterns. By setting the batch size to 256, 512, or an even larger number, our optimized simulation algorithm significantly reduces the I/O overhead. We then normalized the GI simulation data according to the formula

T̃_GI(x, y) = (T_GI(x, y) − min T_GI) / (max T_GI − min T_GI). (2)

In this way, the generalization ability of GIDL has been improved. Our optimized algorithm can deal with grayscale images as well as binary images. Both the plain serial simulation algorithm and the optimized simulation algorithm are implemented in PyTorch, a Python machine learning library. It has been optimized for matrix operations, hence most of the matrix calculation time can be saved. For a training set of 128 × 128 pixel images, it takes almost 28 h to process them with the plain serial simulation algorithm but only 23 minutes with our optimized simulation algorithm on the laptop. The calculation efficiency is increased by a factor of 70.

Proposed double residual learning method. In existing GIDLs, images of the object are reconstructed end-to-end from GI images. This process can be expressed as

O(x, y) = F{T_GI(x, y)}, (3)

where O stands for the output of the network, and F{·} represents the network that maps the GI image to the corresponding image of the object. This specific training process is given by

θ̂ = arg min_θ Σ_{i=1}^{K} L(F{T_GI^i(x, y)}, T^i(x, y)) + W(θ), (4)

where θ is the set of network parameters, L(·) is the loss function used to measure the distance between the network outputs and their corresponding images of the object, and the superscript i denotes the i-th in-out pair. Here we have i = 1 . . . K, enumerating the total K in-out pairs. The last term, W(θ), means the regularization of parameters in case of overfitting 29,31. In recent years, we have witnessed the widespread application of residual learning in many image processing fields 32,33. Inspired by the DnCNN 34,35, we make the DNN learn the residual image rather than the image of the object. When applied in GIDL, the residual, denoted as Res, can be written as

Res(x, y) = T(x, y) − T_GI(x, y). (5)

DnCNN is a deep residual network able to handle several general image denoising tasks, including Gaussian denoising, SISR, and JPEG image deblocking 34,35. It can yield better results than other state-of-the-art methods. Using R to denote the DnCNN, the training process can be formulated as

θ̂ = arg min_θ Σ_{i=1}^{K} L(R{T_GI^i(x, y)}, Res^i(x, y)) + W(θ). (6)

In this way, we can express the reconstructed image as

O(x, y) = T_GI(x, y) + R{T_GI(x, y)}. (7)

The residual mapping is usually easier to optimize than the original mapping, as the residual image has small enough pixel values compared to those of the object's image 33,34. To the extreme, if the identity mapping is optimal, it would be easier for the network to push the residuals to zero than to fit the identity mapping.
Proposed double residual learning method. In existing GIDLs, images of the object are reconstructed end-to-end from GI images. This process can be expressed as
O(x, y) = F{T_GI(x, y)}, (3)
where O stands for the output of the network, and F{·} represents the network that maps the GI image to the corresponding image of the object. In the training process, θ is the set of network parameters, L(·) is the loss function used to measure the distance between the network outputs and their corresponding images of the object, and the superscript i denotes the ith in-out pair. Here we have i = 1 . . . K, enumerating the total of K in-out pairs. The last term, W(θ), is the regularization of the parameters to guard against overfitting 29,31. In recent years, we have witnessed widespread applications of residual learning in many image processing fields 32,33. Inspired by the DnCNN 34,35, we make the DNN learn the residual image rather than the image of the object. When applied in GIDL, the residual, denoted as Res, can be written as
Res(x, y) = T(x, y) − T_GI(x, y).
DnCNN is a deep residual network able to handle several general image denoising tasks, including Gaussian denoising, SISR and JPEG image deblocking 34,35. It can yield better results than other state-of-the-art methods. Using R to denote the DnCNN, the training process can be formulated accordingly, and the reconstructed image can then be expressed as the sum of the GI image and the learned residual. The residual mapping is usually easier to optimize than the original mapping, as the residual image has small enough pixel values compared with those of the object's image 33,34. In the extreme case, if the identity mapping were optimal, it would be easier for the network to push the residuals to zero than to fit the identity mapping.
However, GI is an exception: its residual has too large a value range, making it hard to learn. In GI images, such as that shown in Fig. 2b, the correlation measurement leads to a blurred image with averaged pixel values. It turns out that the residual between Fig. 2a,b is in the range of −209 to 145. The interval length exceeds the upper limit of the grayscale, which means that the residual is much more complex than the image of the object. Thus residual learning does not appear directly applicable to GI. Here we produce two different kinds of residuals, namely the up-residual (up-Res) and the down-residual (down-Res), to reduce the range of the pixel values and make it consistent with the classical setting of residual learning. The learning process becomes easier by dividing the residual into two parts. Figure 2c,d show the up-Res and down-Res images, respectively. The main body of the DR framework consists of two CNNs. We refer to our proposed network as the double residual U-Net (DRU-Net). The U-Net is a deep CNN with a U-shaped structure, where the max-pooling layers and the up-sampling layers are symmetrical to each other 36. It was initially designed for image segmentation, but it can also be used in image denoising problems and ghost imaging 26. Its variant ResUNet-a has also achieved significant improvement on remotely sensed data 37. As schematically outlined in Fig. 3, one CNN is trained with up-Res images, and the other is trained with down-Res images. The DRU-Net reflects the statistical properties of GI and makes the residual of GI easy enough to learn. After training, our network obtains much better results than both the U-Net and the DnCNN. The DRU-Net not only reconstructs high-quality images with a large amount of detail restored but also shows strong generalization ability on the test set. The DRU-Net is made up of the up-CNN and the down-CNN, which are intended to learn the up-Res images and down-Res images, respectively. Here we define the up-Res and down-Res as
up-Res(x, y) = max{T(x, y) − T_GI(x, y), 0}, (8)
down-Res(x, y) = max{T_GI(x, y) − T(x, y), 0}, (9)
i.e., we replace the negative pixel values by zeros to make the residuals more adaptive to the convolution operation. We feed the highly corrupted GI images to the networks as inputs and use the two residuals as targets separately; the training processes of the two CNNs are written analogously to the single-residual case, where R_up denotes the up-CNN and R_down denotes the down-CNN. We also convert the floating-point numbers into integers so that we can save the residuals as images without explicit loss of information. Now the reconstructed image can be expressed as
O(x, y) = T_GI(x, y) + R_up{T_GI(x, y)} − R_down{T_GI(x, y)}.
The MSRA10K dataset 38 was used as the source of object images to train all three GIDLs in our study. We use the models of the U-Net and DnCNN from previous research 26,34,35. We selected 5120 images and resized them to 128 × 128 pixels. After that, we divided them into batches and obtained the corresponding GI images with the optimized simulation algorithm. According to Eqs. (8) and (9), we calculated the up-Res and down-Res images paired with the corresponding GI images to train the DRU-Net. During the training period, the Adam optimizer 39 was adopted to minimize the mean square error (MSE). The batch size of the training set was 16. The learning rate was set to 0.00002, and the weight decay was equal to 0.0003. We used PyTorch 1.3.1 and a laptop with an NVIDIA GTX 1080M graphics card to train our network.
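A minimal sketch of the double residual construction follows, assuming the clamp-based form of Eqs. (8) and (9) described above (negative pixel values replaced by zeros); the helper names and the recombination function are hypothetical:

```python
import torch
import torch.nn as nn

def split_residuals(target: torch.Tensor, gi: torch.Tensor):
    # Assumed form of Eqs. (8)-(9): split the residual into its positive
    # and negative parts, replacing negative pixel values by zeros.
    res = target - gi
    up_res = torch.clamp(res, min=0.0)     # object brighter than the GI image
    down_res = torch.clamp(-res, min=0.0)  # GI image overshoots the object
    return up_res, down_res

def dru_net_forward(gi: torch.Tensor, up_cnn: nn.Module, down_cnn: nn.Module):
    # Recombination of the two learned residuals with the GI input.
    return gi + up_cnn(gi) - down_cnn(gi)
```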
Figure 4 shows a comparison between the results of different GIDLs. The first column shows the four images of the object, which are not included in the training set; the second column shows the images reconstructed by the GI simulation; the third shows the results produced by the DnCNN; the fourth shows the results of the U-Net; and the fifth shows the results obtained by the DRU-Net. As clearly shown in Fig. 4, both the U-Net and the DRU-Net improve the imaging quality significantly. The DnCNN does not perform well, owing to the residual complexity of GI. It is obvious that our proposed DRU-Net achieves much better performance than the other methods. In the images of Cat and Yii, more facial details are restored with the DRU-Net than with the U-Net. In the image of Torre di Pisa, the gray value of the DRU-Net result is very close to that of the image of the object, while the gray value of the U-Net result is higher, so its dark background appears quite bright. As for the image of Rose, the image reconstructed using the DRU-Net is almost the same as the image of the object.
Results
The DnCNN, U-Net and DRU-Net parameters for GI are shown in Table 1. The DRU-Net achieves the highest scores under both the PSNR and the SSIM index and the lowest RMSE. The DnCNN performs poorly compared with the other two GIDLs. In the image of Yii, the RMSE of the DnCNN is higher than that of GI, which shows that the residual of GI differs from that of other image processing tasks. The SSIM index can evaluate image quality more accurately than the PSNR 26,40–42. The average SSIM indices of GI, DnCNN, U-Net, and DRU-Net are 0.183, 0.328, 0.482, and 0.555, respectively. The SSIM index of the image has thus been tripled with our proposed DRU-Net.
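The quality metrics of Table 1 can be reproduced with standard tooling; the following is a hedged sketch using scikit-image (our choice of library, not necessarily the authors'):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reference: np.ndarray, reconstruction: np.ndarray) -> dict:
    # The three quality measures reported in Table 1.
    data_range = float(reference.max() - reference.min())
    return {
        'RMSE': float(np.sqrt(np.mean((reference - reconstruction) ** 2))),
        'PSNR': peak_signal_noise_ratio(reference, reconstruction,
                                        data_range=data_range),
        'SSIM': structural_similarity(reference, reconstruction,
                                      data_range=data_range),
    }
```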
Discussion
To further examine the robustness and generalization ability of the DRU-Net, we let the networks reconstruct images at different sampling rates, β = 24.4%, 61%, 122% and 183%. The results are shown in Fig. 5. Here we mainly focus on the U-Net and the DRU-Net, dismissing the results of the DnCNN because it performs relatively weakly compared with the other two networks. The first row shows the reconstructed images of GI; the second row shows those of the U-Net, and the third shows the results of the DRU-Net. The quality of the reconstructed image continues to grow as the sampling rate increases. When β equals 24.4%, the reconstructed images of the road sign with the U-Net and the DRU-Net are both highly blurred. When β equals 61%, the profile of the object becomes clearer; however, the characters still cannot be reconstructed. When β reaches as high as 183%, the word "STOP" can be recognized clearly. In the images reconstructed with the DRU-Net, more details of the water are restored than with the U-Net. Nevertheless, the DRU-Net still cannot produce a very high-quality image, since its PSNR is only 15.21 dB. The noise in the GI process has severely corrupted the image, causing significant information loss. To date, most GIDLs make use of supervised learning, and the GI process is treated as a denoising problem. In a future project, we plan to explore unsupervised learning methods, such as inpainting algorithms 43–45, to "guess" the lost part of the object instead of learning it, since such significant information loss can hardly be viewed as a denoising problem. In summary, we propose the concept of the batch in the pre-processing stage and optimize the simulation algorithm of GI. This allows more reliable simulation data with different sets of random patterns to be acquired in less time. The generalization ability of GIDL has been enhanced with the optimized simulation algorithm.
Possible realization of a phononic tsunami in a wedge-shaped sample
Exploiting the theory of solitons in a nonlinear elastic medium we predict a novel phenomenon called a phononic tsunami, which is characterized by a dramatic increase of the local amplitude of phonon modes. To elucidate the possible experimental detection of this phenomenon we propose to use a wedge-shaped sample, in which the sharp edge serves to emulate the shoaling effect so that such a local enhancement can be observed. Together with the eigenfrequencies of the transverse and longitudinal phonon modes of the system, we find the characteristic dispersion relations that can be considered a hallmark of a phononic tsunami. We justify our predictions by means of analytical calculations and numerical simulations showing a possible realization of this nonlinear effect in such a geometry. Our results provide the framework for the implementation of a new kind of experiment aimed at realizing and investigating the phononic tsunami phenomenon in relevant materials.
I. INTRODUCTION
A tsunami is a sporadic powerful water wave triggered by a shock external force (usually an earthquake) that can travel for thousands of kilometers from the disturbance across the deep ocean. In the open ocean the tsunami does not have a significantly large amplitude, but its wavelength is extremely long. Due to these properties, it is difficult to detect a tsunami before it nears the shore. However, on approaching the coast, when the effect of shoaling becomes crucial, a small-amplitude open-ocean tsunami evolves into a large-amplitude wave, with the bottom topography altering its characteristics considerably. A tsunami at intermediate depth is described by the dispersion relation ω² = gk tanh(kh), which can be obtained from the Airy wave theory 1,2 (here k is the wavenumber, h is the equilibrium water depth and g is the gravitational acceleration constant). A series of different approaches to tsunami wave propagation have been discussed in the literature. Most of them involve nonlinearly dispersive water wave models based on soliton theory, like the Korteweg–de Vries and Boussinesq equations. Later, the Boussinesq-type model was reformulated by Peregrine for long waves in shallow waters of varying depth. However, it is worth noting that the treatment of the tsunami as a manifestation of soliton physics is still a debated and controversial topic 3–5.
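As a quick numerical illustration of the ocean benchmark quoted above, the Airy dispersion relation and its shallow-water limit can be evaluated directly; the depth value below is illustrative:

```python
import numpy as np

g, h = 9.81, 4000.0          # gravity (m/s^2) and an illustrative ocean depth (m)

def omega(k):
    # Airy dispersion relation at intermediate depth: w^2 = g k tanh(k h).
    return np.sqrt(g * k * np.tanh(k * h))

k = np.array([1e-6, 1e-4, 1e-2])   # wavenumbers (1/m)
print(omega(k) / k)                # phase speeds: long waves approach sqrt(g h)
print(np.sqrt(g * h))              # shallow-water limit, ~198 m/s
```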
Since sound propagation and heat diffusion can both be described as mechanical vibrations transmitted through a crystal within the phonon picture, the emergence of similarly complex phenomena and related topological effects, like a phononic tsunami in the lattice, can be anticipated. From the theoretical point of view some progress in that direction has already been achieved, with the aim of showing the possible diversity and abundance of extraordinary nonlinear phenomena occurring with phonons. It was shown by means of a diagram technique that the classical vibrational degrees of freedom of a solid, when sufficiently far from equilibrium, can evolve into phononic turbulence due to nonlinear interactions between long-wavelength modes 6. Later, it was found that in a cylindrical quantum wire embedded in another material, acoustic phonon modes give rise to another hydrodynamic-like nonlinear topological excitation known as a phonon vortex, with nonzero angular momentum along the wire axis 7. Moreover, based on two different approaches, the homogeneous Fermi-Pasta-Ulam-Tsingou model and the nonlinear Schrödinger equation, the formation of phononic rogue waves in phononic lattices was predicted 8. Along with this prediction, different theoretical studies and subsequent experimental observations show that hydrodynamic phonon transport occurs in materials with a high Debye temperature and large anharmonicity. Interestingly, this hydrodynamic-like phenomenon can be described by a macroscopic transport equation similar to the Navier-Stokes equation 9. Recent technological progress on newly developed phononic crystals and devices, combined with theoretical modeling, has enabled control over material properties, providing unique opportunities to manage and manipulate the phononic spectrum and other characteristics of these systems 10,11. As a result, a large variety of experiments can be designed to verify the theoretical predictions of the above-mentioned effects in the field of nonlinear phononics 12. A recent possible confirmation of the existence of novel nonlinear phenomena, namely phononic solitons, was obtained in traveling four-wave mixing experiments with the incorporation of chirped input pulses into a nonlinear phononic crystal 13. Undoubtedly, this experiment opens up the possibility of observing not only phononic solitons but also another closely related effect, the phononic tsunami. To this end, the goal of the present paper is to provide a theoretical background and a description of the phononic tsunami phenomenon for its possible detection in relevant materials. It is worth noting that the study of strongly nonlinear elastic waves propagating in a wedge-shaped waveguide, known as wedge waves, is a long-standing challenge 14–19. After the theoretical prediction of wedge waves, subsequent experiments confirmed the presence of nonlinearities induced by laser-based pump-probe excitations 20–22. In turn, we propose to use a wedge-shaped sample to emulate the tsunami shoaling: by means of a shock wave pulse applied to the small side of the wedge, the phonon excitations approach its sharp edge (see Fig. 1). Since phonons can demonstrate hydrodynamic-like behaviour (turbulence, hydrodynamic transport, rogue waves), we define such a phenomenon in the spirit of an ocean-wave tsunami.
In other words, the phononic tsunami is characterized by a dramatic enhancement of the displacement field at the narrow edge of a wedge. It can be induced by a correspondingly applied shock wave in the specific geometry of the wedge-shaped sample. In this case the narrow edge of a wedge replicates the shoaling effect for the ocean tsunami wave, which runs from deep to shallow water. The energy of the tsunami wave is concentrated at the edge tip, and as a result a high displacement amplitude is achieved. Exploiting the equation for modeling solitons in a nonlinear elastic medium, we find dispersion relations that can be considered a hallmark of the phononic tsunami occurrence 23. The paper is organized as follows. In Sec. II we present the model and determine the eigenfrequencies of the transverse and longitudinal phonon modes of a wedge-shaped sample. The equation describing the propagation of the phononic tsunami is presented and discussed in Sec. III, where we also find the corresponding dispersion relations. Numerical simulations of the phenomenon based on the introduced equation for the wedge-shaped geometry are presented in Sec. IV. In Sec. V we study analytically the dynamical behavior of a phononic tsunami induced by the excitation of a short Gaussian pulse of a specific form. The results obtained are summarized in Sec. VI.
II. MODEL AND EIGENFREQUENCIES OF PHONON MODES
The system under consideration is modeled as a wedge with length l, height h, width w and angle θ = arctan(h/l) (Fig. 1). Before describing the phononic tsunami phenomenon, the identification of the eigenfrequencies of all possible phonon modes for this geometry is required. One has to distinguish the longitudinal and transverse modes and their eigenfrequencies corresponding to the different boundary conditions. This information is necessary for the possible experimental identification of a phononic tsunami by means of the dispersion relations describing its propagation in a wedge-shaped sample (see Sec. III). From a formal point of view, we suggest using the same strategy as is usually proposed for ocean tsunami early warning systems. We introduce the coordinate system as shown in Fig. 1. To characterize transverse phonon modes in a wedge we define the displacement field as a vector u directed along the z-axis, yet independent of z (i.e. u has only one component u_z = u(x, y)). As for the longitudinal phonon modes, the vector u of the corresponding displacement field lies in the x-y plane; without loss of generality one can assume that it has only a single component u_x = u(x, y). Finally, to simplify the model, we consider it without defects 24,25.
A. Eigenfrequencies of transverse phonon modes
When the phonon wavelength is larger than the width of the wedge, the transverse phonon modes dominate and the biharmonic (Kirchhoff plate) equation should be applied for their description 26,28,
ρw ∂²u/∂t² + D Δ²u = 0, (1)
where the material-dependent parameters ρ and D = Ew³/[12(1 − ν²)] denote the mass density and the flexural rigidity of the material, E is Young's modulus, ν is Poisson's ratio, and Δ²u = ∇⁴u is the bi-Laplacian operator. The displacement field for transverse phonon modes is calculated within the approximation that the width w of the wedge is much smaller than its length and height. Due to this assumption, the consideration of transverse phonon modes for the three-dimensional geometry (see Fig. 1) is reduced to a quasi-two-dimensional system (a triangle).
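As a small numerical aside, the flexural rigidity D = Ew³/[12(1 − ν²)] just introduced can be evaluated for the quartz parameters used later in the paper; the wedge width below is an illustrative value of our own:

```python
E, nu, rho = 76.5e9, 0.07, 2650.0   # quartz parameters quoted in the paper
w = 1.0e-3                          # wedge width in metres (illustrative)
D = E * w**3 / (12.0 * (1.0 - nu**2))
print(D)                            # flexural rigidity in N*m, ~6.41 here
```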
The corresponding boundary conditions depend on whether the wedge edges are clamped (fixed) or free. In the case of fixed edges the boundary conditions are
u|_{x=0,x=l} = 0, ∂u/∂x|_{x=0,x=l} = 0, u|_{y=0,y=h} = 0, ∂u/∂y|_{y=0,y=h} = 0. (2)
In turn, the boundary conditions for the wedge with free edges imply the conditions of Eq. (3). It is important to emphasize that one has to distinguish between the condition u = 0 for free edges and u = 0 for clamped edges. In the first case u = 0 means that the edges of the system under consideration rest on a fixed support but are not clamped to it. In the second case the condition u = 0 expresses the fact that the edges of the wedge undergo no transverse displacement in the deformation, i.e., the edges are rigidly fixed. We search for a solution in a separable form which, after straightforward substitution, transforms Eq. (1) into a stationary biharmonic equation, Eq. (5). Here ω_t is the eigenfrequency of the transverse phonon modes and φ is an arbitrary phase. The case of vibration of the wedge with free edges (boundary conditions given by Eq. (3)) allows an analytical solution, Eq. (6), where A and φ are the arbitrary amplitude and phase. The set of nonzero integers m and n enumerates the harmonics of the displacement field along the x- and y-axes, respectively. The corresponding eigenfrequencies for m and n with m > n are given by the expression of Eq. (7). Equation (5) for the wedge-shaped geometry under consideration is solved by subtracting two solutions of the same equation for a rectangular membrane with the indices m and n reversed. Since the displacement field on the diagonal, which is equidistant from the center, must have the same expression (because of symmetry), this procedure gives a solution which vanishes along the diagonal as long as m and n are both even or both odd. The lowest frequency corresponds to m = 3 and n = 1. In the case of wedge vibrations with clamped edges (boundary conditions given by Eq. (2)), an analytical solution of the biharmonic Eq. (1) does not exist and numerical methods should be applied. However, for an elongated wedge, when h ≪ l, one can reduce the two-dimensional biharmonic equation to a quasi-one-dimensional one and find the solution of Eq. (5) in the form of Ref. 26, with the eigenfrequencies obeying the transcendental equation Eq. (10). The minimal eigenfrequency can be found by expanding the left-hand side of Eq. (10) in a series with respect to ω_t up to the third order of smallness. In Fig. 2 one can see the results of a numerical solution of Eq. (10) for the eigenfrequencies as a function of the combination of the length and the square root of the width (l/√w), with the numerical values of the parameters ρ = 2650 kg/m³, E = 76.5 GPa and ν = 0.07 appropriate to quartz (Fig. 2a). For the sake of visibility we also plot a 3D dependence on the geometrical parameters to show explicitly the dependence on the length and the width of a quartz wedge (Fig. 2b).
B. Eigenfrequencies of longitudinal phonon modes
If the wedge is no longer thin and its width is larger than the phonon wavelength (see the section above), the longitudinal phonon modes are localized in a "bulky" system. In this case the eigenfrequency problem can be treated with the elastic wave equation
ρ ∂²u/∂t² = (λ + μ)∇(∇ · u) + μ∇²u, (12)
where λ and μ are the Lamé parameters. Since we are interested in the eigenfrequencies only, separate considerations for the two cases of clamped and free boundary conditions are not required 27.
To this end, without loss of generality, one can consider the case of fixed edges, with the boundary conditions of Eq. (13). The previously introduced constraint that the displacement field has only a single component u_x = u(x, y) would imply an additional boundary condition in the form of a certain stress at the inclined edge of the wedge in order to satisfy this assumption. However, as stated above, the subject of our consideration is the eigenfrequencies only, so such a condition does not affect the result. The solution of Eq. (12) with the boundary conditions of Eq. (13) is similar to Eq. (6), where ω_l is the eigenfrequency of the longitudinal phonon modes, A and φ are the arbitrary amplitude and phase, and the set of nonzero integers m > n enumerates the harmonics of u(x, y, t) along the x- and y-axes, respectively, as for the transverse modes (see subsection A above). However, in this case the eigenfrequencies are given by an expression that differs from Eq. (7) for the eigenfrequencies of the transverse phonon modes.
III. DISPERSION RELATIONS FOR A PHONONIC TSUNAMI
A. The governing equation
The occurrence of a phononic tsunami in a wedge-shaped sample can be described by means of a nonlinear partial differential equation 29, hereafter Eq. (17). It was first introduced for the description of solitons in a nonlinear elastic medium and was derived in the framework of a microscopic scalar model of interacting particles with a quartic polynomial potential. Within this approximation the dimensionless coefficients α and β can be interpreted as rescaled parameters that take into account the effect of anharmonicity, while the dimensional coefficients γ₁ and γ₂ (dim γᵢ = L²) are responsible for the harmonic contribution to the potential. The coefficients γ₁ and γ₂ are chosen to be positive in order to stabilize the relative displacements between atoms of the medium from equilibrium. In turn, α and β can be positive or negative, resulting in additive or competitive contributions to the polynomial potential, correspondingly. The parameter c can be considered the longitudinal velocity of sound in the wedge-shaped sample; within the derivation procedure, c also plays the role of the normalization factor in the definition of the parameters α, β, γ₁ and γ₂. The displacement field u has a single component directed along the z-axis and does not depend on z, whereby the width of the wedge is not small in comparison with its length and height (see Fig. 1). It should be noted that Eq. (17) does not include terms responsible for long-range interaction. The model under consideration and the corresponding Hamiltonian, which is the starting point for the derivation of Eq. (17), stipulate short-range elasticity only. This means that the emergence of a nonlinear phenomenon like a phononic tsunami starts within a single connected component created by neighboring particles of the elastic medium. If one were to use a Hamiltonian with a long-range kernel, then one could trigger such an instability in several connected components, spatially separated in space, which in the end can lead to an even more stable manifestation of the nonlinear phenomenon. Therefore, the results discussed below remain the same at the qualitative level. The presence of higher derivatives of the displacement in Eq. (17) may look rather surprising, an optional complication of the model. However, the emergence of a phononic tsunami implies the occurrence of sufficiently high strain levels in the wedge-shaped sample.
In this case the continuum mechanics described by Eqs. (1) and (12) fails and the discrete structure of the medium should be taken into account. However, the discreteness of the crystal structure has the consequence that the relation between stress and strain can acquire a nonlocal character. Such a nonlocal relation between stress and strain leads to spatial dispersion of the phonon modes in the medium. This results in the need to take into account higher-order derivatives of the displacement field. Using the continuum approximation, in which the crystallographic fractional coordinates can be considered continuous variables, the typical wavelength is much larger than the distance between the coupled objects, and one no longer needs to refer to the displacement of each interacting object, one can expand the displacement around a given atom of the lattice under consideration in a Taylor series up to fourth order and obtain Eq. (17) (see the details of the derivation in Ref. 29). Moreover, the geometric nonlinearity of the system induced by the wedge-shaped sample dictates the strong necessity of a nonlinear potential for our model and consequently stresses the importance of the higher-order derivatives in Eq. (17) for the description of a phononic tsunami in such a geometry. It is noteworthy that Eq. (17) can also be derived in the framework of nonlinear elasticity theory 36. Using the long-wave limit of the equation of motion of a simple crystal with nearest- and next-nearest-neighbor central and non-central force interactions between atoms, the equations of motion for the stress tensor can be expanded in series up to fourth-derivative terms for the displacement vector components. When γ₁ = γ₂ = 0 and α = β = 0 (i.e. in the absence of anharmonicity), we arrive at the well-known wave equation, Eq. (12). It is also worth noting that recently a simplified version of Eq. (17), with α = 0, β = 0 and γ₂ = 2γ₁, was used for the prediction of a new class of phononic metamaterials, in which the phonon band dispersion can be changed from an acoustic to an optical type by modulating a uniform stress 37. In that work it was demonstrated theoretically how to stop and switch signals in a tunable metamaterial by changing the dispersion relation of an entire band.
B. Dispersion relations
Based on the hydrodynamic-like behavior of phonons (see the Introduction), one can extend the concept of the Airy linear wave theory to our model. It is well known that the latter is the cornerstone for the derivation of the dispersion relation of an ocean tsunami at intermediate depth. To avoid confusion, we recall that this hydrodynamic similarity is based on the formal analogy between an ocean tsunami and its phononic counterpart; the constitutive equation is still Eq. (17), obtained within nonlinear elasticity theory. Based on this approach we calculate the dispersion relations of a phononic tsunami far away from the narrow edge of a wedge-shaped sample. Under such a condition, the nonlinear effects induced by the geometry of the system can be neglected and the anharmonic effects vanish, α = β = 0. This implies a linear structure of the wave packet along the x-axis and allows us to seek the solution of Eq. (17) in the form of Eq. (18). Substitution of Eq. (18) into Eq. (17) yields an ordinary differential equation whose solution is
u₀(y) = C₁e^{−q₁y} + C₂e^{q₁y} + C₃e^{−q₂y} + C₄e^{q₂y}, (20)
where the Cᵢ are constants.
Clamped edges
The boundary conditions for the case of clamped edges given by Eq.
(2) are transformed to a more amenable form, Eq. (23). Taking into account the boundary conditions of Eq. (23), one can rewrite the solution of Eq. (20) in the form of Eq. (24), where C is an arbitrary constant. The dispersion relation ω = ω(k) for a phononic tsunami is determined by the solvability condition for the constants Cᵢ in Eq. (20) and is given by the implicit expression of Eq. (25), where ω and k enter through Eqs. (21) and (22). It is interesting to note that, due to the presence of the hyperbolic tangent function, the dispersion relation given by Eq. (25) is formally reminiscent of the analogous "simplified" characteristic of a tsunami wave in the ocean, ω² = gk tanh(kh) (see the Introduction). The numerical solution of Eq. (25) is shown in Fig. 3. The remarkable feature of the dispersion relation for the given set of parameters is the appearance of a gap in momentum space. Such a dispersion relation with a k-gap is not new: it has been predicted for different liquids and supercritical fluids 30, plasmas 31,32 and certain holographic models of quantum field theory 33,34. Direct experimental evidence for the k-gap has been obtained for the phonon spectra in a monolayer dusty plasma 35. Therefore, gapped momentum states observed in corresponding experiments can be considered a fingerprint of the phononic tsunami.
Free edges
In the case of free edges, where the free-edge boundary conditions are imposed, the y-component of the displacement field is described by the function of Eq. (27), where C is an arbitrary constant. The dispersion relation, Eq. (28), has the form of a multi-valued function that assumes two distinct values for a given value of k. It is worth noting that at the qualitative level Eq. (28) is nothing other than the first two terms in the series expansion of the dispersion relation of the ocean tsunami, ω² = gk tanh(kh) (see the Introduction). The dispersion relation given by Eq. (28) is shown in Fig. 4. Another striking feature of our analytical consideration is that the y-components of the displacement amplitudes represented by Eqs. (24) and (27) are similar to the expression for the deviation of the water surface from the undisturbed state obtained within Airy wave theory for an ocean tsunami (see, e.g., Ref. 38).
IV. NUMERICAL RESULTS FOR A STATIONARY CASE
To justify our predictions we performed numerical calculations of the stationary Eq. (17), with boundary conditions corresponding to free edges, to show the possibility of realizing a phononic tsunami in a wedge-shaped sample (Eq. (29)). Due to the high nonlinearity of Eq. (29) we cannot apply the same procedure of subtracting two solutions with reversed indices, as was done for Eqs. (1) and (12), to satisfy the boundary conditions at the inclined wedge edge. Generally speaking, the boundary conditions should be written as in Eq. (30), where d/dn denotes differentiation along the outward normal to the slope of the wedge-shaped sample 26. However, according to our numerical simulations, imposing Eq. (30) leads to a significant increase of the calculation time. To avoid this problem and speed up the numerical solution, the inclined edge is treated as stress free (see the details of the numerical simulation in Appendix A). This assumption does not essentially change the results at the qualitative or quantitative level and preserves our further conclusions (see below), regardless of the choice of boundary conditions for the inclined edge of the wedge. First of all, we show the importance of the system geometry and the shoaling effect for the observation of a phononic tsunami.
To this end we solved Eq. (29) numerically for a rectangular cuboid with the same dimensions as in the case of the wedge. The asymmetric structure of the displacement field in Fig. 5a is connected with the significant difference between the coefficients γ₁ = 0.01 m² and γ₂ = 5 m², together with the simultaneous absence of the "stabilizing" parameters (α = 0 and β = 0) and, as a consequence, an amplification of nonlinear effects. Introducing α = 1 and β = 1 leads to a smoothing of the nonlinear effects (Fig. 5b) and a transformation to a symmetric shape of the displacement field. In the case of a wedge-shaped sample one can see from Figs. 5c,d that there are solutions with a significant enhancement of the displacement in the wedge, which can be attributed to the emergence of a phononic tsunami wave. Moreover, Figs. 5c,d show an amplitude of u of approximately 10 nm, which agrees with the experimental results for the peak value of the displacement arising from the occurrence of phonon solitons in a 1D phononic crystal waveguide 13. The justification of the observed data in the above-mentioned experiment was done within the dynamic Euler-Bernoulli equation (Euler-Bernoulli beam theory), with subsequent reduction to the nonlinear Schrödinger equation. It is interesting to note that under certain conditions Eq. (17) can also be transformed into the nonlinear Schrödinger equation.
V. DYNAMICAL BEHAVIOR INDUCED BY A SHORT GAUSSIAN PULSE
The phononic tsunami can be triggered by a short Gaussian pulse f(t) applied as a forcing to Eq. (17). The parameter t₁/₂ is the pulse half-width at half-maximum and f₀ is its strength, applied to the vertical edge of the wedge. When t₁/₂ is close to zero, one can induce a phononic tsunami; the vanishing value of t₁/₂ mimics the sudden displacement of the ocean needed for the generation of a tsunami wave. Very recently, a similar strategy was proposed for inducing another interesting topological effect in phononic systems: by means of appropriately tuned AC acoustic sources located on the boundary of the body, dissipation-free motion of lattice defects can be observed 39. To illustrate the emergence of the phononic tsunami together with its dynamical evolution, and to avoid extremely extensive numerical simulations of the time-dependent Eq. (17), one can proceed to an analytical consideration. With this in mind, we adopt several reasonable assumptions. Firstly, due to the wedge-shaped geometry imitating the shoaling effect of the tsunami wave, we expect a dramatic growth of the displacement amplitude near the narrow edge of the horizontal side. Secondly, within the formulated theoretical approach one can assume the initial existence of a nonlinear wave excitation in the system. Using the similarity to the behavior of an ocean tsunami near the coast, these two assumptions allow us to determine the boundary conditions for Eq. (17) at the right edge of the wedge: u|_{x=l} tends to be large and d²u/dx²|_{x=l} → 0. From the physical point of view such conditions correspond to a significant enhancement of the amplitude with a decrease of the curvature of the displacement field; the latter is equivalent to the "vertical sea wall" of the ocean tsunami near the coastline. Finally, the last assumption is that the rest of the boundaries are treated as stress free. These simplifications allow us to naively decompose the displacement field u(x, y) along the x- and y-axes independently, using the ansatz of Eq. (31). Here A and B are the amplitudes of the displacement field along the x- and y-axes, respectively, and V_g ≡ ∂ω/∂k is the group velocity.
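Before turning to F(t), one can check symbolically that double integration of a Gaussian pulse produces an erf-containing expression; the concrete pulse shape below is our own assumption, chosen to match the parameters f₀ and t₁/₂:

```python
import sympy as sp

t = sp.symbols('t', real=True)
f0, th = sp.symbols('f_0 t_half', positive=True)
f = f0 * sp.exp(-t**2 / (2 * th**2))   # assumed Gaussian pulse shape
F1 = sp.integrate(f, t)                # first integral: proportional to erf
F2 = sp.integrate(F1, t)               # second integral: t*erf(...) plus a Gaussian
print(F1)
print(sp.simplify(F2))
```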
The function F(t) represents the result of double integration of f(t) over t and contains erf(z), the error function. A specific feature of this ansatz is that the tsunami wave arises "from nothing", following the terminology of Ref. 40. The existence of such a class of solutions is not new; it was introduced, for instance, in the construction of the Boussinesq equation 41 for the description of shallow-water waves. By means of the ansatz of Eq. (31) we arrive at an ordinary nonlinear differential equation of fourth order instead of Eq. (17). The ansatz of Eq. (31) eliminates the term with the coefficient γ₂ in Eq. (17). Roughly speaking, this means that the interaction between atoms of the elastic medium along the x-axis is significantly stronger than along the y and z directions. At the same time, to preserve the total contributions of the other coefficients, we suppose that the second-, third-, and fourth-order elastic constants are nonzero and therefore cannot be ignored. The resulting equation can be solved in terms of the incomplete elliptic integral of the third kind Π(z, ν, m) and the Jacobi elliptic function sn(z, m). The list of solutions, with the details of the derivation, is given in Appendix B. The choice of an appropriate solution is governed solely by the behavior of u₀(x): the plot of this function must contain pronounced spikes, identified with the unique feature of a tsunami known as the dramatic increase of the wave amplitude near the coastline. As shown in Appendix B, the selection procedure is based on the behavior of the composite function Π(sn(x, m), ν, m), with λ = √[(r₁ − r₃)(r₂ − r₄)]/2 and the modulus of the Jacobi elliptic sine q′ = √(1 − q²), where q = √[(r₁ − r₂)(r₃ − r₄)/((r₁ − r₃)(r₂ − r₄))]. Figure 6 shows the plot of the function u₀(x), with the characteristic pronounced growth at the narrow edge of the wedge. Therefore, the final expression for the function u takes the form of Eq. (35). Equation (35) shows the dynamical behavior of the displacement field in a wedge that can be attributed to the occurrence of a phononic tsunami, namely the shoaling effect with a significant increase of the amplitude at the narrow edge (see Fig. 7). Comparing the displacement fields at the initial moment of time t = 0, shown in Fig. 7a, and after applying the input Gaussian pulse at t = l/c, in Fig. 7b, one can see a peak value of u ≈ 6 · 10⁻⁴ m at the narrow edge of the wedge-shaped sample. This value is much larger than that obtained within the numerical simulations. The advantage of the analytical solution given by Eq. (35) is that it can be used for the description and fitting of data in future experiments on the detection of the phononic tsunami phenomenon in a more effective way, without extensive time-dependent numerical simulations of Eq. (17).
VI. CONCLUSIONS
In this article a way of realizing a phononic tsunami in a crystal lattice, as a novel nonlinear phenomenon, is proposed, and the experimental conditions for the observation of such an effect are delineated. We outline the particular potential of using wedge-shaped samples for the generation of phononic tsunami waves. The dispersion relations calculated within the theory of solitons in a nonlinear elastic medium can be considered a hallmark of a phononic tsunami. The occurrence of such nonlinear phononic excitations in the wedge-shaped geometry has been demonstrated by numerical simulations and by analytical calculations.
Such predictions can be verified by means of measurements of the displacement field induced by a short Gaussian pulse in relevant materials, including phononic crystals and acoustic metamaterials. Another experimental possibility is the determination of the dispersion relations predicted as a signature of the phononic tsunami in the corresponding systems.
VII. ACKNOWLEDGEMENTS
This work was supported by the CarESS project.
Appendix A
The numerical simulation of Eq. (29) was performed in COMSOL Multiphysics. For the solution of the fourth-order partial differential equation the general-form PDE module was applied. For the implementation of the finite-element method we rewrite Eq. (29): introducing the new functions P = ∂²u/∂x² and Q = ∂²u/∂y², we reduce Eq. (A1) to a system of second-order differential equations, which is now adapted for numerical solution in COMSOL Multiphysics. The Dirichlet boundary conditions u = 0, P = 0 and Q = 0 were imposed on the wedge, except for the inclined edge, which is treated as stress free.
Appendix B
One of the solutions has the form
u₀(x) = r₁ + [(r₁ − r₂)/(1 + δ)] x − [(r₁ − r₂)/(2λ)] [(1 − δ)/(1 + δ)] Π(sn(λx, q′), (1 + δ), …).
As one can see, all types of solutions have a similar structure, characterized by the presence of the composite function Π(sn(x, m), ν, m). The selection procedure is based on the plot of this function as a function of the coordinate x and the characteristic ν of the incomplete elliptic integral of the third kind (Fig. 8). Figure 8 clearly demonstrates that the dramatic enhancement with pronounced spikes occurs when ν > 0. Thus, among the plethora of solutions, we need to choose those with ν > 0.
Review of testing for foreign horse and pig DNA in meats in Croatia
Several years after the 2013 food industry scandal, when horse meat was found in products sold in Europe as beef, Croatia began testing food for the presence of foreign protein. For the time being, these tests are not part of routine monitoring, but the result of examining the market situation in the city of Zagreb. In recent years, central Croatia has been trying to establish itself as a tourist destination, and Zagreb hosted hundreds of thousands of tourists from all over the world before the COVID-19 pandemic. The eating habits of the various groups that came to Zagreb differed, and the larger hotel chains recognized the seriousness of the services and sought help to ensure that the food offered was consistent with its declarations and would not conflict with religious requirements. One of these requirements was the testing for foreign proteins, such as horse and pork, in foods where they were not declared. Although horse meat and pork are safe for human consumption, they are not part of the eating habits in all countries. The Dr. Andrija Štampar Teaching Institute for Public Health introduced methods for the detection of horse and pig DNA in food samples.
Introduction
The general principle of food law is to provide consumers with a basis to enable them to be properly informed when choosing the food they consume and to prevent actions that may mislead consumers. The European Union therefore adopted Regulation (EU) No 1169/2011 [1]. The objectives of Regulation (EU) No 1169/2011 are a high level of protection of the health and interests of consumers, with special emphasis on health, economic, environmental, social and ethical circumstances [1]. Europe was rocked by a food industry scandal in 2013 when horse meat was found in products declared and sold as beef products. Undeclared or misdeclared other meats, such as pork, were also found in the products. In some products, horse meat accounted for 100% of the total meat. The affair began on January 15, 2013, when it was revealed that horse DNA had been found in frozen burgers sold in Irish and British supermarkets. Shortly after this announcement, the scandal spilled over into many European countries, where various meat products were also found to contain undeclared horse meat. Numerous falsely declared products were withdrawn from supermarket shelves across Europe [2]. Although horse meat is safe for human consumption, it is not part of the eating habits of all European countries. The falsification of declarations and fraud in the food industry usually have financial reasons, as prices for quality meat are high; horse meat is much cheaper than other types of meat in some countries. There is a similar problem with pork: there are large groups of people who do not want to eat pork for religious reasons [3]. Proper labelling of the type of meat contained in food products is important for economic, safety, legal and health reasons. Misdeclared meat is of questionable origin, and there is no guarantee that it is safe for consumption. Some people do not consume the meat of certain animal species due to religious customs and laws [4]. In Croatia, food labelling is regulated by the Consumer Information Act (NN 56/13) and Regulation (EU) No 1169/2011 on informing consumers about foods.
Nevertheless, the legislation in the Republic of Croatia regarding the examination of the presence of foreign proteins remains very open for the time being. Namely, according to Art. 6 of the Regulation on Meat Products (NN 62/2018), a meat product that has an animal species featured in its name must contain at least 75% of meat derived from that animal species, calculated on the total amount of meat used in the production process [4,5]. This formulation certainly does not satisfy groups of people with sensitive eating habits, and the question arises as to what amounts of foreign protein such groups would tolerate. Because of such questions, the British Food Standards Agency has developed guidelines for the limits of acceptability of the amount of foreign protein in a product. Currently, a foreign protein level of 0.1-1% is considered acceptable. Back in 2014, the recommendation to the food industry was that this would be technically feasible in terms of good manufacturing practice and acceptable to most consumers. It is considered that a level of up to 0.1% foreign protein need not be reported but should be monitored regularly. Where control samples show 0.1-1% foreign protein, the reasons for this presence should be investigated and corrective action taken. For products containing 1% or more of a foreign protein, this should be declared, or the recall of such products should be encouraged [6]. Testing for the presence of foreign proteins, such as those from horse and pig, can be performed by various methods based on immunological assays, chromatography, and other chemical methods [7]. Most of these methods are limited by their sensitivity and by the easy denaturation of proteins with rising temperature [8]. Methods based on DNA analysis, such as the polymerase chain reaction (PCR), are more robust as well as sensitive and specific. Compared to proteins, DNA is a more stable and resistant molecule. It is resistant to various processes used in the food industry, such as food processing at high temperatures and pressures, the presence of other chemical compounds, etc. PCR can therefore analyse processed foods, as the DNA molecule is not destroyed by food processing, as is the case for proteins. The PCR method can also successfully identify the types of meat present in meat mixtures [7,8]. The Dr Andrija Štampar Teaching Institute for Public Health offers services for testing foods for the presence of DNA of horse and pig origin. These DNAs can be successfully detected in various food samples, from fresh meat (e.g. mixed minced meat) to various meat products such as sausage and salami, and even in ready-to-eat meals [9]. First, DNA must be isolated from the sample. Then, all necessary amplification reagents are added to the isolated and purified DNA. If foreign DNA is present in the sample, it is amplified, which is detectable as an increase in fluorescence [10].
Materials and Methods
In the five years from 2016 to 2020, a total of 43 samples of different types of sausages and salamis were tested for the presence of DNA derived from horses, and 51 samples were tested for the presence of DNA derived from pigs. Table 1 shows the number of samples tested.
DNA extraction
DNA was extracted according to the protocol of the foodproof Sample Preparation Kit III (Biotecon Diagnostics). The extraction buffer was added to 200 mg of homogenized sample in 2 ml microcentrifuge tubes and vortexed for 30 seconds. Proteinase K (80 µl) was added to the suspension containing sample and extraction buffer.
The mixture was incubated at 72 °C for 30 min, and the tube was mixed 2-3 times by inversion during the incubation. Centrifugation was then done at 12,000 × g for 10 min to remove the insoluble material. The supernatant was transferred to a new microcentrifuge tube with 400 µl of Binding Buffer and 200 µl of isopropanol and mixed gently but thoroughly by pipetting up and down. The mixture (650 µl) was pipetted into the upper reservoir of a combined filter-collection tube and centrifuged at 5000 × g for 1 min. The collection tube was discarded and the filter tube was transferred to a new collection tube. The remaining mixture was added to the same filter-collection tube and centrifuged again at 5000 × g for 1 min. The flow-through and collection tube were discarded, and the filter tube was placed in a new collection tube. Wash buffer (450 µl) was added to the upper reservoir and centrifuged at 5000 × g for 1 min. The flow-through was discarded and the collection tube was reused for a new step in which 450 µl of wash buffer was added and centrifuged at 5000 × g for 1 min. The flow-through was discarded again and the collection tube reused for a 10-second centrifugation at maximum speed to remove residues of wash buffer. The dried column was transferred to a clean 1.5 ml microcentrifuge tube. Pre-warmed (70 °C) elution buffer (200 µl) was added to the glass fibre fleece and left at room temperature for 5 min to ensure the elution buffer was completely absorbed. Finally, the column was centrifuged at 5000 × g for 1 min to elute the purified DNA, which was used directly or stored at -20 °C for further analysis.
Polymerase chain reaction (PCR) amplification
The extracted and purified DNA was amplified by real-time PCR (PikoReal 24, Thermo Scientific). Amplification of DNA was performed with Thermo Scientific PikoReal Software 2.2. The assay is a qualitative duplex real-time PCR, which means that the detection of the specific gene and the internal control is performed simultaneously using specific primers labelled with fluorescent dyes. The genes specific to pig and horse and the internal control were detected in the FAM (porcine/horse-specific gene) and HEX/VIC (internal control) detection channels, respectively. For the porcine detection LyoKit, 25 µl of extracted and purified DNA, 25 µl of negative control (PCR-grade H₂O), and 25 µl of positive control (control template) were added to separate PCR tubes, which already contained the lyophilized reagents. The PCR cycler program included pre-incubation in two steps (4 minutes at 37 °C and 10 minutes at 95 °C), followed by cycling in two steps (5 seconds at 95 °C and 60 seconds at 60 °C). For the Horse Species Detection Kit, a reaction mixture was prepared by combining 4 µl of primer/probe mixture, 10 µl of real-time PCR master mix and 6 µl of extracted DNA sample in each PCR tube, so the reaction volume per tube was 20 µl. Positive and negative controls were used. The program included pre-incubation at 95 °C for 10 minutes and amplification in two steps (15 seconds at 95 °C and 40 seconds at 61 °C).
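The read-out logic of the duplex assay described above can be summarised in a short sketch; the Ct cutoff and the wording of the calls are hypothetical, since the actual interpretation is performed by the instrument software:

```python
def interpret_duplex_well(fam_ct, control_ct, cutoff=40.0):
    # One well of the duplex assay: FAM = species-specific target,
    # HEX/VIC = internal amplification control. The Ct cutoff of 40 is
    # hypothetical; each laboratory validates its own threshold.
    target_ok = fam_ct is not None and fam_ct <= cutoff
    control_ok = control_ct is not None and control_ct <= cutoff
    if target_ok:
        return 'positive: species-specific DNA detected'
    if control_ok:
        return 'negative: internal control amplified, no target DNA'
    return 'invalid: internal control failed, repeat the analysis'

print(interpret_duplex_well(fam_ct=28.4, control_ct=30.1))  # -> positive
print(interpret_duplex_well(fam_ct=None, control_ct=None))  # -> invalid
```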
Results: Examination of the presence of horse DNA
Figure 1 shows, by year, the meat products in which DNA of horse origin was examined. From 2017 to 2019, tests were carried out on samples of permanently cured meat products declared as 100% pork or beef of domestic or imported origin. In 2017, two meat products were positive for horse DNA. During 2018 and 2019, no horse DNA-positive meat products were found. Samples of domestic and imported brands from individual retail chains and domestic producers were examined, and horse DNA was not found in any of them. In 2020, semi-durable meat products such as various salamis, hot dogs, and the like were tested. Only in one sample was DNA of horse origin found, in traces.
Examination of the presence of pork DNA
(Figure 3)
Cleaning process control
In 2019, some Halal-certified companies decided to control their cleaning processes to achieve greater customer trust in their products. Production lines in smaller companies cannot separate products according to individual types of meat; therefore, cross-contamination of the products can occur. Sometimes, in products declared as 100% non-pig meat, traces of unwanted pig DNA were found. Various cleaning and disinfection procedures were tried until testing showed no traces of unwanted pig DNA. Unfortunately, during 2020 these controls abated due to the pandemic.
Conclusion
Foods that are produced or imported must meet specified standards before being placed on the market. For this reason, regular controls are needed to make sure that consumers are consuming exactly the type and quality of meat that is declared. Meat companies have, or want to obtain, certificates that are mandatory in some countries due to ethical rules, religions or eating habits, and these companies do not rely only on their national regulations or EU regulations but go a step further and control the presence of foreign proteins in their production.
DESIGN OF THE CEMENTED DOUBLET – SOFTWARE APPLICATION
From an optical point of view, an imaging application includes the sensor (CCD or CMOS) and the objective. The simplest objective consists of a cemented doublet. This paper proposes a design algorithm for the doublet and describes a software application based on this algorithm. The results provided by the original software are validated by means of a professional application for optical system analysis.
Introduction
Robotics widely uses images provided by different types of cameras. From an optical point of view, an imaging application includes the sensor (CCD or CMOS) and the objective. The objective, depending on the required image quality, may be an assembly made of one or more basic entities, which are the singlet, the doublet and the triplet. Image quality can be evaluated in terms of geometrical aberrations, parameters resulting from wavefront analysis, and Fourier parameters. There are several software applications that perform professional analysis of image quality, such as OSLO, Code V and Zemax [1,2,3]. Regarding the design of basic optical entities, there are no standards; different schools of optics worldwide recommend different algorithms, based on the minimization of the longitudinal spherical aberration and the longitudinal/lateral chromatic aberration [4,5,6,7,8,9]. The present research refers to the cemented doublet, which is frequently used because it is the simplest assembly that ensures a good image quality. The cemented doublet consists of a positive lens cemented together with a negative one (Fig. 1). The three radii allow three mathematical conditions to be posed. The first condition refers to the optical power or effective focal length. The second aims to minimize an expression of the spherical aberration. The third condition contains the minimization of the chromatic aberration, defined in different forms.
Algorithm for the Design of the Cemented Doublet
The algorithm proposed in this paper is based on the minimization of the longitudinal spherical aberration and the longitudinal marginal chromatic aberration [3,10]. The input data for the design are:
- f' (effective focal length)
- D (aperture)
- S (object abscissa)
- n_e, n_F', n_C' (refractive and dispersive parameters of the glasses).
The steps of the doublet calculation are the following:
1. calculation of the total curvatures of the lenses, resulting from the condition of axial achromatization, where ν_a and ν_b are the Abbe numbers and dn_a and dn_b are the main dispersions of the glasses (dn = n_F' − n_C').
4. calculation of the curvature c₃ from the condition of marginal achromatization, where D_a,b are the lengths of the marginal rays through the lenses and d_a,b are those of the paraxial rays, approximated through the center thicknesses. The following intermediate data are necessary: the angle between the image ray and the optical axis, and the emergence angle. The coordinates of the third incidence point provide r₃:
r₃ = (x₃² + y₃²)/(2x₃). (18)
5. check of the reference optical characteristics and the residual aberration.
Software Application
The algorithm was implemented in the software development environment Python. The interface of the application (Fig. 3) allows the user to type the input data (effective focal length, aperture and object abscissa) and to read the results (r₁, r₂, r₃, d₁, d₂, effective focal length and image abscissa) for two solutions (the second-degree equation (3) provides two solutions for c₁).
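As an illustration of step 1 of the algorithm, the classical thin-lens split of the total power between the two glasses can be sketched as follows; the function and the approximate Abbe numbers are our own simplification, and the paper's full calculation through Eq. (18) is not reproduced here:

```python
def achromat_powers(f_prime, nu_a, nu_b):
    # Thin-lens achromat split of the total power phi = 1/f':
    # phi_a/nu_a + phi_b/nu_b = 0 together with phi_a + phi_b = phi.
    phi = 1.0 / f_prime
    phi_a = phi * nu_a / (nu_a - nu_b)    # positive (crown-like) element
    phi_b = -phi * nu_b / (nu_a - nu_b)   # negative (flint-like) element
    return phi_a, phi_b

# Approximate Abbe numbers for N-LAK7 and N-LASF45 (illustrative; the real
# design uses the catalogue n_e, n_F', n_C' values):
phi_a, phi_b = achromat_powers(100.0, 58.5, 35.0)
print(1.0 / phi_a, 1.0 / phi_b)   # element focal lengths, ~40.2 and ~-67.1
```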
The program was run with the input data:
- f' = 100
- D = 15
- s = -∞
- glass types: N-LAK7 and N-LASF45 from the Schott catalogue
and displayed two solutions, of which the one with a globally biconvex shape is chosen:
- r₁ = 148.99
- r₂ = -32.44
- r₃ = -77.96
- d₁ = 1.7
- d₂ = 1.5.
The resulting doublet was verified with the program OSLO EDU regarding the reference optical characteristics and the image quality. Figure 4 shows the Surface data window, displaying the input data, the effective focal length and the image abscissa:
- f' = 99.79
- s'_F' = 99.37.
Figures 5, 6 and 7 summarize the results of the analysis from the geometrical, wave-theory and Fourier-optics points of view. Figure 5 displays the geometric aberrations; one can notice the insignificant residual spherical aberration, as well as the residual lateral color. Figure 6 shows the results of the wavefront analysis; the parameters P-V OPD < 0.25 and RMS OPD < 0.07 qualify the system as diffraction limited. Figure 7 shows the modulation transfer function and predicts the behavior of the doublet regarding resolution. The MTF is ~0.8 up to approximately 40 cycles/mm, which recommends the use of the assembly in applications both traditional (the human eye is the image receptor) and imaging (a physical sensor captures the image). All three figures resulting from the analysis confirm the high quality of the assembly, thus validating the proposed algorithm and software.
Conclusions
The design of an optical entity such as the cemented doublet, which is the subject of this paper, requires an elaborate mathematical algorithm. The algorithm uses expressions for the primary longitudinal spherical aberration, the longitudinal chromatic aberration and paraxial/extra-axial ray tracing. The original software application was developed in the Python environment. The numerical results were validated with the analysis program OSLO EDU, which provided very good image-quality parameters.
Towards Automatic & Personalised Mobile Health Interventions: An Interactive Machine Learning Perspective

Machine learning (ML) is the fastest growing field in computer science and healthcare, promising improved medical diagnoses, disease analysis and prevention. In this paper, we introduce an application of interactive machine learning (iML) in a telemedicine system, to enable automatic and personalised interventions for lifestyle promotion. We first present the high-level architecture of the system and the components forming the overall architecture. We then illustrate the interactive machine learning process design. Prediction models are expected to be trained on the participants' profiles, activity performance, and feedback from the caregiver. Finally, we show some preliminary results from the system implementation and discuss future directions. We envisage the proposed system to be digitally implemented and behaviourally designed to promote a healthy lifestyle and activities, and hence protect users from the risk of chronic diseases.

INTRODUCTION

According to the World Health Organisation (WHO), poor diet and physical inactivity are public health issues, and their associated health problems are growing beyond healthcare capabilities [25]. These issues are strong contributors to the overweight and obesity epidemic, which escalates into chronic diseases in the long term. Research shows that the risk of developing chronic conditions can be reduced by adhering to a healthy lifestyle (e.g., a balanced diet and sufficient physical activity). There is a shift towards scalable solutions that promote healthier lifestyles outside clinical settings. User adherence to the assigned plan is an indicator of the effectiveness of a healthy lifestyle programme. Yet promoting a deliberate lifestyle is not straightforward, and maintaining a change in behaviour is a hard task to achieve. Moreover, relatively few diet and physical activity applications have been tested in research environments to determine their effectiveness in health promotion [7], and only a small segment of such programmes considers the delivery of practical and empathic behaviour change support addressing the cognitive, emotional and behavioural aspects of change. There is a need for tailored system feedback that fulfils users' preferences while keeping the interaction simple.

A healthy lifestyle promotes optimal health and prevents problems such as obesity and eating disorders [25], as well as long-term conditions such as heart disease, cancer and stroke. There is a need to reinforce the adoption of long-term healthy eating behaviour. With this research we investigate formulating health promotion techniques with prioritisation based on user data. To create a sustainable healthy lifestyle, personalised feedback is always favoured over a one-size-fits-all approach. Using modern sensor technology and proper algorithms, we can detect whether a user is active, neutral or passive, and show dimensions of data about their activities. However, to guide users along their journey and create awareness of, and commitment to, lifestyle goals, it is necessary to offer interactive coaching support. Thus, based on the acquired user activity data, the caregiver delivers practical and empathic behaviour change support, which is more personal and responds better to user feelings.
A human in the loop can be effective in all lifestyle promotion domains, including physical exercise and food intake [31]. This paper provides an overview of interactive machine learning for classifying lifestyle promotion data. We first consider the state of the art in the field, for example systems whose goals and actions are intended for health and wellbeing and which apply some form of machine learning. We highlight the importance of, and the challenges associated with, this emerging trend in lifestyle promotion. Finally, we discuss the case of CoachMe [11], a bot and web application for lifestyle promotion that gathers data corresponding to a subject's objective behaviour. We discuss the integration of an interactive machine learning algorithm into the system to classify users based on their activity performance.

MACHINE LEARNING IN LIFESTYLE PROMOTION

With the increasing burden of sedentary lifestyles and overweight on our health, promoting healthier lifestyles becomes a necessity to prevent people from escalating into chronic diseases, such as obesity and diabetes. Developing systems that promote health and provide valuable information about users' habits is increasingly effective. Machine learning is a fast-growing trend in the healthcare domain, since it has the potential to be a powerful tool for human empowerment, touching everything from how we eat to how we diagnose diseases. It can help health experts identify trends that lead to improved diagnoses and treatment, drawing on a patient's health history and behavioural data. Machine learning can therefore reveal aspects of user activities, such as behavioural patterns or the efficacy of the application. By understanding user preferences and observing their behaviour, we can interact with users at the right time, through the right channel, with the right tone, and with the most relevant content. Machine learning can also assist in developing more effective diagnoses and treatment, preventing prescription errors. This paper focuses on using a supervised machine learning algorithm to classify users and help the caregiver personalise intervention feedback.

By combining human interaction with machine learning algorithms, interactive machine learning highlights how to optimise human effort in model training [28]. Its methods are useful for analysing human behaviour and deducing health and wellbeing information. Recognising specific human behaviours, such as eating patterns and daily physical activity, is extremely important for healthcare providers in understanding how to support their patients [3]. Moreover, human-machine collaboration is critical for the development of cost-effective and potentially cost-saving solutions. Companies like Google and Microsoft have partnered with a variety of healthcare organisations to implement machine learning solutions for complex problems, including medication adherence and cancer treatment [23]. These studies generally focus on human behaviour monitoring for health assessment purposes.

BACKGROUND

Interactive machine learning often involves complex iterations, in which data is provided by an expert who then identifies features to represent the data. Systems that learn interactively from end-users are becoming widespread. Involving users in such systems can improve learning and ensure accurate output. Amershi et al. [4] presented case studies demonstrating how interactivity results in a tight coupling between the system and the user.
The paper also explores new ways for learning systems to interact with their users. Interactive machine learning is increasingly applied in social networks to create custom groups. In another work, Amershi et al. [5] discuss a novel end-user iML system, ReGroup, which helps create on-demand groups in online social networks. The system interactively learns a probabilistic model and uses it to suggest additional members and group characteristics for filtering. Fogarty et al. [13] presented CueFlik, a web image search application that allows end-users to quickly create their own rules for re-ranking images based on their visual characteristics. The work represents a promising approach to web image search and an important study in end-user interactive machine learning.

Interactive machine learning, together with the human-in-the-loop approach, provides strong support in health informatics for solving computationally hard problems, since human experts can reduce an exponential search space through heuristics. Holzinger et al. [20] discuss and evaluate iML and the human-in-the-loop approach, enabling a human to manipulate and interact with an algorithm. The study selected the Ant Colony Optimization (ACO) framework and highlighted its importance in solving practical problems in health informatics, such as protein folding [21].

Interactive machine learning is an iterative process of running a learner, analysing the results, modifying the data and repeating [18]. It has become a key component of several health and wellness applications, and several systems have been introduced to ease the process of building and deploying such applications. Microsoft released Azure Machine Learning, where the process is formalised as a data flow to ease the application of machine learning algorithms to real-world tasks [24].

Many types of interactive machine learning systems exist; here we discuss systems applied to lifestyle promotion (e.g., healthier food choices and physical exercise). Ge et al. [15] proposed a novel food recommender system that provides personalised recipe suggestions. The generated recommendations comply with user preferences expressed through ratings and tags, which reveal the user's preferred food ingredients. The study concluded that using tags in food recommendation can enhance prediction accuracy, for example by matching the predicted preferences with the user's preferred recipes. In another work on food and lifestyle change, Ge et al. [16] developed a mobile platform to support people in making healthier food decisions and reduce the risk of chronic diseases. The authors state that the application can be used directly in the kitchen to support meal decision making. An important step in healthy food recommender systems is the interaction design process, which defines the user-system interaction. According to Elahi et al. [9], a proper interaction design may improve user experience and result in higher usability. Their work focused on the interaction design of a food recommender mobile app that captures the user's long- and short-term preferences for food recipes: long-term preferences are captured by asking the user to tag familiar recipes, while the ingredients the user selects for inclusion in a recipe provide the short-term preferences. The app provides personalised recommendations based on these preferences.
A review by Clifton et al. [6] introduced ways in which machine learning and software engineering can contribute to health informatics, discussing the contribution of both areas to the domain. It is also important to highlight the correlation between machine learning and the quantified-self movement. Quantified self signals the potential growth of personal health data tracking; the movement includes users who self-track data on everything from diet and physical activity to the results of medical tests [22]. Based on the tracked data, an algorithm can generate a suggestion module in the form of feedback or indications about user performance. Ainsworth et al. [1] developed MYBEHAVIOR, which learns user behaviours via its machine learning model and then provides suggestions involving small changes to existing behaviour. The study mainly builds on persuasion and behaviour change theories and provides users with common actions that relate to their lifestyle and have frequently been performed before [1]. A newer version of the MYBEHAVIOR application [29] used mobile sensing technology to deliver personalised health feedback by combining behaviour tracking with recommender system algorithms. The system automatically learns the user's physical activity and dietary behaviour and suggests changes towards a healthier lifestyle. It used a sequential decision-making algorithm, the multi-armed bandit [17], to generate suggestions that maximise calorie loss and are easy to adopt [29]. The results showed a significant increase in physical activity and a decrease in food calories.

Developing successful machine learning applications requires a substantial amount of data and algorithmic configuration. Domingos [8] summarised twelve key lessons learned in developing machine learning systems, including pitfalls to avoid, important issues to focus on, and answers to common questions. To develop an effective lifestyle-promoting system, machine learning could detect the user's emotional state and provide appropriate feedback. However, users are heterogeneous, so it is extremely hard to target every user's emotional state with a single system. We believe a human expert in the loop can enhance system intelligence and effectiveness, for example through feedback on user activities and personalised recommendations. In the next sections we discuss the integration of an interactive machine learning algorithm with the CoachMe application [11] to promote user diet with a simplistic approach and a human in the loop. This model could be extended to other areas of personal health and wellness, for instance preventive medicine, medication adherence and smoking cessation.

Applying a machine learning algorithm should comply with behaviour theories in order to provide more effective recommendations and results. Behaviour analysis with learning theories assesses whether a person has the skills required to perform the behaviour; the next step is to increase or decrease the targeted behaviour. To illustrate, if a user is asked to go on a trekking trip, but the weather is cold or the user has not trekked before, the probability that he or she will follow the suggestion is very low [29]. With the ageing population and the associated healthcare costs, machine learning can provide effective health or medical recommendations to optimise healthcare costs and reduce the influx of chronically ill people to care centres.
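MYBEHAVIOR's suggestion engine is described above as a multi-armed bandit [17]. As an illustration only, and not the specific algorithm of [29], a minimal ε-greedy bandit over hypothetical suggestion "arms" could look as follows, where the reward is 1 if the user follows a suggestion and 0 otherwise:

```python
import random

# hypothetical suggestion arms; per-arm counts and running mean rewards
arms = ["short_walk", "take_stairs", "veggie_snack"]
counts = {a: 0 for a in arms}
values = {a: 0.0 for a in arms}

def select(epsilon=0.1):
    """Mostly exploit the best-performing arm, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(arms)
    return max(arms, key=lambda a: values[a])

def update(arm, reward):
    """Incremental running-mean update of the arm's estimated value."""
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

arm = select()
update(arm, reward=1.0)  # e.g., the user complied with the suggestion
```

The same exploit/explore trade-off mirrors the frequent versus infrequent activity split discussed later for CoachMe.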
Paez et al. [27] described a healthy lifestyle promotion system that provides users with valuable information about their habits. The system was developed around the big-data paradigm, with bio-signal sensors and machine learning algorithms providing more personalised recommendations.

PERSISTING CHALLENGES

The current healthcare system lacks functions to help people prevent chronic diseases, and caregivers are a scarce resource for providing personalised health recommendations to patients. The inherent coupling of the human and the machine in interactive machine learning underscores the need for collaboration across the fields of human-computer interaction and machine learning. To solve these problems, intelligent software systems are in demand. Lifestyle-promoting applications integrated with machine learning can act as outlets for smart, persuasive feedback to the user and have a positive impact on healthy behaviour. Such applications should address challenges including long-term engagement, contextualisation and individualisation, to provide optimal support for heterogeneous users. Unfortunately, no existing application can fit every user's needs, so providing different levels of support becomes essential. Future work should ensure long-term usage of systems for health promotion and disease prevention, since short-term promotion cannot produce significant change in chronic conditions. Contextualisation and individualisation can make a system less intrusive and more efficient; for instance, users should be able to choose which form of activity they prefer (e.g., blending into the user's environment with reference to physical activities and daily diet). Another challenge is interdisciplinary collaboration when designing a mobile health intervention system, as there are no mature frameworks to guide researchers from different domains in designing and implementing such a system.

RESEARCH GOAL

This paper discusses iML applications and provides an overview of the CoachMe application for lifestyle promotion. The application is focused on diet and physical activity tracking. This work highlights the following research questions:
- How can caregivers be supported to better understand a patient's level of preparedness and provide a more tailored plan?
- How can patients be detected and classified based on their activity performance?
- How can a human in the loop be beneficial in lifestyle promotion?

The use-case scenario includes the end-users, i.e., the patients (e.g., a person improving healthy habits), and the caregivers, whose role is limited to providing data, answering domain-related questions, or giving feedback about the learned model. The iML is expected to provide automation and detect users' performance with respect to an activity. We are mainly concerned with promoting a healthier lifestyle through dietary recommendation and adherence, in situations where the patient suffers from poor dietary habits or faces barriers to adhering to a healthy lifestyle. To introduce changes in a patient's lifestyle, it is advisable to focus on the preventive aspects of change, such as weight reduction, a diet low in fat and sodium, and moderate physical activity [12]. However, we should begin by understanding the user's preparedness to change, for example their level of adherence and their response to the various activities provided by the caregiver. Introducing machine learning techniques on top of such systems could therefore significantly increase caregivers' understanding of patients by visualising the outcome of their data.
This work contributes a prototype system that promotes an active lifestyle and provides users with a simple tool (a Telegram bot application) to engage them and increase their self-motivation. The developed system with iML is based on knowledge of the user's preferences over a set of items. The input is based on the features that make each user unique, namely their age, gender, BMI, and the activity type they follow. This paper extends our previous work on CoachMe [11], which evaluated the state of the art from a behaviour recognition perspective and ways to promote long-term patient adherence to a healthier lifestyle. We intend to use a simple classification method to identify patterns in patient activity and to classify patients based on the similarity of their activity performance. This is meant to help the caregiver discover how each patient performs with respect to a given activity and hence provide a more tailored activity that, in the caregiver's judgement, better fits the patient's performance.

CoachMe Architecture

The high-level architecture contains the components that form the overall CoachMe system; the components and their interactions are represented in Figure-1. The architecture collects and processes data to promote diet and physical activity. With this in mind, we propose a simple and coherent activity monitoring solution that emphasises simplicity and non-invasive interaction. The architecture allows the monitoring of non-chronic and healthy people, is intended merely for lifestyle promotion, and has no medical application within the study scope. From a technological point of view, the architecture consists of the following main components.

Bot Application for Users

The bot application is an AI-based conversational dialogue engine that generates responses from a collection of known conversations. The CoachMe bot is a retrieval-based model: all possible responses of the bot are predefined and rule-based, and the bot uses the message and the conversation context to select a response from the predefined list. The chatbot in this project provides the user with a custom keyboard to access and report their daily activity using the Telegram Bot API. The user can view the dietary plan and submit compliance with a given plan with a single button click. The daily plans are predefined by the caregiver, with a specific plan per user. The output of this research will validate whether using a bot for simplicity in user-caregiver interaction, instead of auto-generated recommendations, makes a difference in getting users to follow a healthy and sustainable lifestyle. In addition, by understanding participants' engagement with the bot, we can identify the targeted user group as potential candidates for evangelising human-bot interaction. An example of a typical daily plan in the bot UI can be seen in Figure-2.

Figure 2. A Dietary Plan Provided by the CoachMe Bot.

Each time a user requests a new plan, he gets a list of activities to follow. Afterwards, the user submits his compliance with each of the activities by clicking the compliance button. As the system receives more user compliance data, the user classification pattern improves. The data utility module can be used to train the chatbot.
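Since the bot is retrieval-based, with every reply predefined and rule-based, its core can be sketched as a simple lookup. The commands and texts below are hypothetical illustrations, not CoachMe's actual keyboard entries or API:

```python
# Minimal sketch of a retrieval-based responder: every possible reply is
# predefined; the incoming message selects one from the table.
RESPONSES = {
    "/plan": "Today's plan: 30 min walk, two portions of vegetables.",
    "/comply": "Thanks! Your compliance has been recorded.",
}
DEFAULT = "Sorry, I only understand /plan and /comply."

def respond(message: str) -> str:
    """Look the (normalised) message up in the predefined response table."""
    return RESPONSES.get(message.strip().lower(), DEFAULT)

print(respond("/plan"))
```

In the real system this lookup would sit behind the Telegram Bot API's update handler, with the caregiver's per-user plans filling the response table.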
Web Application for Caregiver

The caregiver provides specific tailored activities to match the user's ability, and the system delivers timely notifications through the Telegram bot application to trigger the user. The user can decide how much of the activity to follow, or skip the plan. The application is designed to be interoperable and applicable to other domains in the context of lifestyle promotion and disease prevention.

Messaging Platform

The web application allows the caregiver to deliver information to all patients involved in the system through the interoperability and messaging platform, using push notification technology. The user, in turn, accesses the notifications sent by the caregiver via the Telegram bot application.

Interactive Machine Learning Design

As shown in Figure-3, we create two prediction models (pre-prediction and post-prediction) for classifying the user type. When a user first comes to the caregiver, he registers in the system by providing information such as gender, age, BMI, education, and health condition. The caregiver then provides a tailored plan for the user. We use the tailored plan as the label, and the user's profile as the features, to train a KNN classifier, which is updated whenever new user data becomes available. This is the process of creating the pre-prediction model. After the user starts performing the provided plan, he reports his performance through the Telegram bot. The performance is stored and used as one of the features of the post-prediction model. When the caregiver decides to refine the user's plan, we obtain a label for this user, and are thus able to train the second classifier.
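As a minimal sketch of the pre-prediction model with scikit-learn, assuming a hypothetical numeric encoding of the profile features and hypothetical plan labels (the paper does not specify the encoding):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training data: [age, gender (0/1), BMI]; the labels are
# the plan ids the caregiver assigned to previously registered users.
X = np.array([[25, 0, 22.5],
              [62, 1, 31.0],
              [45, 0, 27.8],
              [30, 1, 24.1]])
y = ["plan_light", "plan_intensive", "plan_moderate", "plan_light"]

# Train the pre-prediction model; in the design above it is retrained
# whenever new user data becomes available.
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# Suggest a starting plan for a newly registered user's profile.
print(knn.predict([[50, 1, 29.3]]))
```

The post-prediction model follows the same pattern, with the reported activity performance appended to the feature vector and the caregiver's refined plan as the label.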
Caregiver Recommendation and User Feedback

The goal of caregiver recommendations and user feedback is to demonstrate the value provided by the solution, sustain engagement, change behaviour, and maintain the behaviour change. Since even the simple act of tracking has been shown to have an impact, users can start with this simple behaviour and build upon it. The caregiver recommendation is the daily plan provided to the user, who can check it and later provide feedback represented by his activity performance.

Caregiver

The caregiver has a significant role in getting users to adhere to a healthy lifestyle. First, the caregiver provides tailored activities matching the user's ability, and the system sends timely notifications to trigger the user through the Telegram bot application. The caregiver can also interact with the user through a messaging window, where they can communicate and exchange information about the patient's condition. Using the machine learning model, the system provides the caregiver with a ranking of the patients in the system and indicates how the patients are performing with respect to the given activity. The caregiver provides a weekly plan for the user, consisting of food and other activity suggestions. The majority of these activities can be drawn from the user's most frequent activities, with the rest being infrequent activities. The common user behaviour is used to promote the user's lifestyle in the short term; to target long-term health, the system explores infrequent activities that the user has to repeat in the future, leading to sustained activity.

Activity Recognition

The system provides users with recommendations set by the caregiver, who selects and compiles them from a predefined activity pool. The system provides personalised and contextualised activities based on specific user parameters and previous performance. Activity recognition is an important component of the CoachMe platform. Activities such as walking more, eating more vegetables, or drinking more water can be accommodated. These activities are used to decide whether the user is actively following an activity, is neutral and does not change, or is deteriorating, i.e., decreasing in performance. Each activity is labelled by the caregiver and carries a certain degree of importance; the activity is later detected based on the given label.

Temporal Detection

What is the best moment to trigger the user to perform a specific dietary activity? For example, the best moment to notify a user to perform a healthy dietary activity could be before meal preparation. Based on the activity type, the caregiver decides the application context and the best moment to trigger the user to perform it.

Emotion Recognition

Human emotions have diverse effects on a person's immune system and can have a direct impact on quality of life. To illustrate, positive emotions help fight against cardiovascular incidents, whereas negative emotions, for example a high level of depression, may increase the risk of suffering a stroke. We therefore plan to add a feature for users to tell the caregiver how they feel at a given moment. We will classify emotions as happy, sad, angry or neutral. This could support the caregiver with recommendations and future activity assignments. Understanding user emotion is essential to providing the right support; therefore, emotion recognition is a parameter within our iML model.

Feedback Analysis

The feedback mechanism analyses users' responses to the activity. The user provides the caregiver with compliance data, based on which the system generates feedback and provides it to the caregiver. The feedback consists of the user's performance with respect to the activity.

Suggestion Generation

After classifying user behaviours, the system generates suggestions based on users' past activities, including their food intake and compliance with the overall plan. The generated suggestion considers the user's skills to perform a behaviour. For example, if a suggestion asks a user to go for a bike ride and the user has no access to a bicycle, the user will not follow the suggestion. On the other hand, if the user has performed a behaviour before, the skills can be assumed present.

Theoretical Foundation

This study references Fogg's Behavioural Model to apply theoretical principles to the technology design. With the CoachMe system, we focus on promoting low-effort actions that can be triggered even when motivation is low [14]. Thus, CoachMe suggests (via cues or triggers) a frequent behaviour (e.g., a particular walk) that the person often does in a particular life context. This can increase the frequency of a behaviour the person already performs. The system with the caregiver can also suggest a new behaviour (e.g., going jogging) that would burn more calories and that the person is able to perform. To successfully sustain such a behaviour, it has to be repeated frequently until it becomes a habit [14].

User Types

Recent approaches to user motivation and adherence towards a healthy lifestyle have produced an increasing number of applications that treat users as a monolithic group in their design. This is a poor strategy, since an approach that works for one individual may actually demotivate others. With this work, we develop a model to classify users as Active, Neutral, or Passive, based on their overall performance and other parameters. This classification is based on the gamer-type strategy discussed by Orji et al. [26].
We will employ a personalisation approach to better persuade each particular type of user.

Activity Clustering

On the web application side, the caregiver creates and clusters similar activities together. For example, similar food items are clustered based on their ingredients. Based on user feedback, the system can detect whether the user is repeatedly having a high calorie intake or skipping the given plan, and cluster him together with other users with the same level of adherence. Based on the user's daily diet, the CoachMe system will provide the user with an activity that matches his ability, which could be an infrequent activity. The user will take up some of these infrequent dietary activities and make them frequent in the future.

Model Evaluation

This work adopts interactive machine learning (iML), in which a human, the caregiver, is involved in the loop. There is evidence that humans often still outperform machine learning techniques [19]. For example, a promising technique in diagnostic radiologic imaging for filling the semantic gap is to adopt an expert-in-the-loop approach, integrating the physician's high-level expert knowledge into the retrieval process by acquiring his relevance judgements on a set of initial retrieval results [2]. One drawback of iML approaches is that methodologically correct experiments are very difficult to replicate, since human agents are subjective and, unlike data and algorithms, cannot be copied. Still, iML can help equip algorithms to support caregivers in understanding various user behaviours and levels of adherence to the given activity. The importance of iML becomes apparent when the use of hybrid solutions is insufficient or difficult.

Scenario Description

To better understand how the system works, we provide a scenario describing user interaction with the system. The recommendation mechanism of the caregiver operates in accordance with the user's feedback about the provided recommendations. This work has great potential for promoting an active lifestyle and improving individual as well as population health.

PRELIMINARY RESULTS

The machine learning model chosen in this study is mainly for decision making: it classifies users with unknown classes and, based on sets of rules or model types, assigns each new user to an existing class. This can supply caregivers with valuable information about a user's habits and daily patterns, and permits the caregiver to recommend a more tailored activity. This recommendation system can be used as an enabler in health interventions, bringing new functionality; for instance, based on a user's activity data, a recommender system can guide him about the necessary actions to be taken. So far, we have developed the prototype system and are currently integrating the iML part into it. We are collaborating with caregivers at an ambulatory clinical centre and a UX designer, and the prototype design was built based on their feedback. For example, the caregiver suggested including activity clustering in the system to form a weekly plan, whereas initially the system targeted a single activity and a daily plan. Moreover, both the caregiver and the UX designer perceived the inclusion of the Telegram bot as positive: most patients are already active users of Telegram or similar chat applications, which removes the barrier of introducing this technology. Hence, we ensure the consistency of the work from the medical and design points of view.
The system considers various types of recommendations, including:
- food suggestions on the best dietary changes to improve user health;
- recommendations concerning harmful substances, such as drugs and smoking;
- recommendations for relaxation, such as yoga.

This work contributes a new approach that uses end-user iML to help the caregiver track patients and provide personalised recommendations to them. The interaction cycles in iML are more rapid, focused, and incremental than in traditional machine learning. This increases the opportunities for users to influence the learner and, in turn, for the learner to influence the users. As a result, the contributions of the system and the user to the final outcome cannot be decoupled, necessitating an increased need to study the system together with its potential users [4]. Formative user studies can help identify user needs and desires, and inspire new ways in which users could interact with machine learning systems. User studies that evaluate interactive machine learning systems can reveal false assumptions about potential users and common patterns in their interaction with the system.

DISCUSSION

The goal of the CoachMe system is the design and development of a platform targeting people at all stages of the disease continuum (health promotion, managing prevention, managing at risk, and managing complications) to increase their quality of life before more acute episodes. In this work, we have focused on the description of the main architecture, with details about the various components involved and some preliminary results demonstrating the functionalities of the developed architecture. We discussed both the iML classifier and the inclusion of the Telegram bot in the interaction. This is an effective way to keep users adhering to a long-term plan: users spend the vast majority of their time in apps that provide chat services, because conversation is easier than the heavy interfaces of conventional applications [30]. It is crucial to create a user experience that engages users and satisfies their needs quickly. More work is needed to improve the training and test datasets and to find a better way to train the model. Finally, social networking and gamification are two further important aspects that should be considered in future work, since they can contribute increasingly to motivation and user engagement. Hence, future work will pay particular attention to integrating aspects of these two techniques into the CoachMe platform.

CONCLUSION

Interactive ML is becoming increasingly involved in health informatics and lifestyle promotion, as there is a growing need to aid caregivers with insight into user behaviour and performance. In this paper we presented machine learning applications in lifestyle promotion, then discussed the high-level architecture of the CoachMe system, with details about its components and preliminary scenarios demonstrating the functionalities of the architecture. We discussed the application of an iML algorithm to detect user activity performance and classify users accordingly. As a preliminary result, we developed a web application for the caregiver and provided users with a Telegram bot application to report their daily compliance. The iML model is a promising classification approach for detecting active, neutral and passive users based on their performance. The classifier is part of the web application and identifies users with high or low adherence to the common recommendations given by the caregiver.
The major strength of this study is the combination of the iML algorithm with a Telegram bot and a human actor in the loop to achieve a personalised intervention. Future work should focus on testing and validating the approach with real users. Finally, interdisciplinary work between medical professionals, engineers and psychologists is key to the success of such applications.
Comparative Assessment of Continuous Flow Photocatalytic Oxidation Reactors for Organic Wastewater Degradation

Introduction

Water is an essential natural resource covering more than 70% of the earth's surface. Over the past few decades, there has been an uninterrupted rise in the demand for raw materials related to the plastic, dye, textile and fertilizer industries, due to growing urbanization, technological advancement, and the exploitation of non-renewable resources. Energy-related crises and environmental pollution have already reached an awful situation. Industrial wastes are non-biodegradable compared with municipal refuse; they include heavy metals, grease, fat, oil, ammonia, phenol, etc. Large quantities of pesticides and other chemicals are released through agricultural and pharmaceutical effluents; these are responsible for severe diseases, are detrimental to the human endocrine system, and render water non-potable. Such contaminants, arising from increased human activity, should be separated from wastewater or converted into harmless forms before discharge into streams. Thus, there is an urgent need to develop novel, environmentally friendly technologies that contribute to the complete elimination or degradation of environmental contaminants. Advanced oxidation processes (AOPs) are environmentally friendly techniques for the removal of various types of contaminants, such as chlorinated hydrocarbons, petroleum-based products, pesticides, hydrocarbons, insecticides, volatile organic compounds (VOCs), aromatics and other organic compounds, from air and water. AOPs produce reactive oxygen species with one unpaired electron, such as superoxide radicals and hydroxyl radicals. These readily and vigorously react with a variety of chemical species that would otherwise be highly complex to degrade [1]. The most suitable AOPs for wastewater and water treatment are Fenton processes, sonolysis, ozonation, UV photolysis, photocatalysis, wet air oxidation, etc. [2]. Among these, photocatalysis is the most promising approach for addressing the difficulties created by organic contaminants in the environment.
Photocatalysis

Photocatalysis, a green technology, is a photoreaction process that is accelerated in the presence of a catalyst together with a light source. Photocatalysts are substances that alter the rate of a chemical reaction in the presence of light, and such reactions are termed photocatalysis. Photocatalytic reactions can be classified into two types based on the phases of the reactants and of the semiconductor material: if both the semiconductor and the reactants are in the same phase (e.g., gas, solid, or liquid), the process is termed homogeneous photocatalysis; when they are in dissimilar phases, it is termed heterogeneous photocatalysis [3]. Materials are typically characterized as metals (conductors), semiconductors or insulators with respect to their band gap, the energy disparity between the valence band and the conduction band. Semiconductors are often used as photocatalysts because, in the presence of light, they conduct electricity even at room temperature. When the photocatalyst is exposed to light of sufficient wavelength, electrons absorb energy from the photons and are excited from the valence band to the conduction band, leaving a hole in the valence band; thus an electron-hole pair is generated. The excited electron enables reduction, whereas the hole enables oxidation. In this way a photocatalyst drives a redox reaction and ultimately degrades the pollutants. Photocatalysis is an important tool for environmental detoxification by way of visible-light-induced photocatalysis. The photocatalytic activity (PCA) of a photogenerated catalysis process relies mainly on the capacity of the catalyst to generate electron-hole pairs, which contribute to free radical formation. These radicals act as effective oxidizing agents in wastewater and water remediation. The development of titanium dioxide-based water electrolysis made the practical application of photocatalysis viable [4].
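The "light of sufficient wavelength" requirement follows directly from the band gap. As a back-of-the-envelope sketch, using the standard relation λ = hc/Eg (general physics, not a value taken from the studies reviewed here), one can estimate the absorption edge of a photocatalyst:

```python
H = 4.135667e-15  # Planck constant, eV*s
C = 2.998e8       # speed of light, m/s

def absorption_edge_nm(band_gap_ev):
    """Longest wavelength a photon can have and still bridge the band
    gap: lambda = h*c / Eg, converted to nanometres."""
    return H * C / band_gap_ev * 1e9

# anatase TiO2 (Eg ~ 3.2 eV) absorbs only below ~388 nm, i.e., UV light,
# which is why visible-light-active catalysts (doping, composites,
# heterojunctions) receive so much attention in the studies below
print(absorption_edge_nm(3.2))   # ~387.5 nm
print(absorption_edge_nm(2.7))   # a hypothetical narrower-gap catalyst
```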
Photocatalytic Reactors for Wastewater Treatment

Advanced oxidation processes are typically classified into homogeneous and heterogeneous technologies based on the phase of the reagents. The process depends mainly on parameters such as dosage, solution pH, iron salt, temperature and mixing rate; its main disadvantages are the need for pH control and high chemical consumption. The major parts of a photocatalytic reactor are the photocatalyst (semiconductor), additives, auxiliaries, light irradiation, etc.

Semiconductor nanoparticles are widely used as effective photocatalysts due to their wide band gap. Heterogeneous photocatalysts with sheet- and flower-like morphologies are highly predominant, owing to their capacity to provide a large surface area and high catalytic activity. The design and concept of a photocatalytic reactor depend mainly on the working mechanism (dynamic or static), the photocatalyst morphology (bulk or powder) and the volume of liquid [5]. Reactors used for wastewater treatment are classified into two types: fixed-bed reactors and slurry-bed reactors. In the slurry type, a suspended photocatalyst is employed; this provides a very simple design and a high surface area, but suffers from poor catalyst recovery and limited light penetration. In fixed-bed reactors, an immobilized photocatalyst is employed. The main advantages of an immobilized photocatalyst are the simple operation and the absence of a catalyst recovery step; the main disadvantage is the smaller surface area. Since the absorbed light is directly proportional to the total irradiated surface area, this key challenge can be mitigated by making the supported photocatalytic layer thin. Slurry-type reactors are typically batch reactors, whereas fixed-bed types operate continuously. Different types of reactors have been used in photocatalytic water treatment, including the downflow contactor reactor, the cascade photoreactor and the annular slurry photoreactor [6].

Continuous Flow Photoreactors

Continuous reactors (also known as flow reactors) are those in which reactants are supplied continuously and a continuous stream of product is obtained. A continuous flow photocatalytic reactor generally uses an immobilized thin-film photocatalyst, in which the photocatalyst is fixed on a support. This offers a simple operating strategy and easy catalyst recuperation; however, uniform dispersion of the photocatalyst within the wastewater remains a challenge. The effects of various operational parameters have been studied to obtain the most suitable reactor design for enhancing photocatalytic degradation. Coronel et al. [7] employed TiO2 supported on activated carbon for the photocatalytic degradation of cyanide in a continuous flow UV reactor and obtained considerable degradation efficiency, as shown in Figure 1(a). The photoreactor was fabricated from glass and covered with a wooden box to isolate the fluid from external conditions. Photocatalytic and adsorption tests were conducted independently, and a CN degradation efficiency of 97% was obtained within 24 h, owing to the combined adsorption and photocatalytic oxidation, as compared with their individual performance at the end of the study [7]. Zeitoun et al. [8] examined the performance efficiency of a photocatalytic membrane reactor (PMR) developed for the degradation of organic dye waste by a membrane distillation process. The setup consists of a continuously stirred photocatalytic feed tank, containing slurry titanium dioxide particles activated by UV irradiation at 365 nm, and a polyvinylidene fluoride (PVDF) membrane cell. The experimental work was divided into two stages. In the first stage, the PVDF membrane was manufactured and characterized, studying its morphology, surface charge and hydrophobicity by means of microscopy, surface zeta potential and contact angle tests.
The effects of using various TiO2 photocatalyst concentrations and feed conditions (e.g., concentration, colour) were also examined. The PMR can achieve pure permeate, and 100% dye removal efficiency is obtained under specific circumstances [8]. Ghanbari et al. [9] investigated the photocatalytic efficacy of removing Cr(VI) and azo dyes by developing a novel fixed-bed continuous reactor. N-Fe co-doped TiO2/SiO2 nanocomposites were prepared, and the operating parameters, including pH, flow rate and angle against sunlight, were tuned to obtain the best degradation efficiency. A complex mixture of pollutants, including BY-51, Cr(VI), BB-41 and BR-29, was assessed under visible light and sunlight; the degradation efficiencies are discussed in Table 1. Under natural climatic conditions, the novel photoreactor and nanocomposites showed promising activity for the photocatalytic remediation of water pollutants [9]. As shown in Figure 1(b), Vaiano et al. [10] designed a continuous flow photocatalytic packed-bed reactor. The reactor, when exposed to UV-LED light, degrades two harmful anionic azo dyes. Anatase TiO2 pellets were used as the catalytic material, and the effect of the liquid flow rate on reactor performance was studied in the range 0.5-2.1 mL/min. The peak efficiency of the reactor was attained at a liquid flow rate of 0.5 mL/min in distilled water, and complete decolorization of EBT was obtained in tap water under the same conditions; methyl orange (MeO) degradation was 70%. The major benefit of this technology is that the packed-bed reactor completely removes the toxicity of EBT; MeO required powdered activated carbon filtration to fully remove the toxicity [10]. To evaluate newly synthesized UiO-66(Ti)-Fe3O4-WO3 magnetic photocatalysts and examine their photocatalytic efficiency for the degradation of ammonia, Bahmani et al. [11] developed a flow-loop thin-film slurry reactor. The synthesized heterojunction possesses high stability and acceptable reusability, which are the strongest features of the system; it offers about 91.80% degradation efficiency under an LED light source [11]. Petala et al. [12] designed a continuous flow annular photoreactor to evaluate the photocatalytic effectiveness of Ag3PO4 immobilized on a supporting material for the degradation of micro-pollutants, as shown in Figure 1(c). Degradation of 75% was observed, varying primarily with flow rate, micro-pollutant concentration and molecule. The stability of the system when exposed to inorganic ions is one of its key advantages, but it failed to function effectively when humic acid was introduced into the feed [12]. Abdel et al. [13] developed a continuous flow heterogeneous photoreactor to destroy organic pollutants in wastewater. The degradation of phenol and MO by the PS/ZnO nanocomposite (NC) membrane (I) was 72% and 16.5% under UV light, and 30% and 11% under visible light, respectively. Under UV, the PS/TiO2/SiO2 NC membrane (II) achieved degradations of 18.1% and 40.3%; under visible light, with membrane (II), the degradation of MO and phenol reached 97% and 95%, and the performance was further improved by increasing the oxygen concentration through the addition of H2O2. The major disadvantage is that the performance of the NCs can only be increased by adding H2O2, i.e., by increasing the O2 content [13].
Sacco et al. [14] proposed a continuous flow micro-reactor that utilizes UV-LED irradiation for the photocatalytic degradation of crystal violet dye. Using spherical zeolite pellets (ZEO) with immobilized zinc oxide (ZnO) as the catalyst offers benefits such as maximum photocatalytic exposure to the light source, uniform illumination of the entire solution volume, and improved mass transfer. A high dye removal efficiency of 93% was obtained under UV-LEDs with this setup [14]. A chemical-free visible-UV photochemical continuous flow reactor was developed by Moussavi et al. [15] for the direct oxidation of contaminated water, removing ammonium as N2 gas. The reactor consists of a 400 mm tall Pyrex tube column with a 30 mm inner diameter, installed vertically, with a working volume of 135 mL; a 5.7 W low-pressure mercury UV lamp was used as the light source. The operations were conducted in continuous as well as semi-batch mode, and 100% oxidation of ammonium was observed for a 50 mg/L sample [15]. A novel double-cylindrical-shell photoreactor for the degradation of rhodamine B (RhB) and methyl orange was designed and fabricated by Li et al. [16]. The reactor was immobilized with a monolayer of TiO2-coated silica gel beads: the inner quartz glass tube of the reactor was coated with TiO2-immobilized silica gel particles on its exterior surface, as shown in Figure 2(a). Compared with slurry and thin-film photoreactors for the degradation of RhB and MeO, the novel photoreactor exhibited better repetitive operating performance, reduced energy consumption and higher efficiency [16]. In the same year, Rezaei et al. [17] introduced a continuous flow immobilized TiO2 photoreactor consisting of four quartz tubes contained in an aluminum tube, as shown in Figure 2(b). Four UV lamps, each with a peak wavelength of 254 nm, were placed on the axes of the quartz tubes. To guide the fluid flow along the length of the reactor, twelve stainless steel circular baffles coated with catalyst particles are mounted inside the reactor in a zigzag pattern. This design offers a high mass transfer coefficient, and under optimum processing conditions 75.50% of the phenol was degraded [17]. Van et al. [18] developed wall and fixed-bed reactors for the deactivation of Escherichia coli, as illustrated in Figure 2(c). The immobilization of TiO2 in an annular reactor can be done in two different ways: on the surface of the glass rings used in the packed bed, or on the interior reactor wall. The effect of increasing film thickness on the degradation efficiency was studied. The main drawbacks of the arrangement are its lower photocatalytic activity compared with a slurry system and its susceptibility to inhibition by organic matter [18]. Behnajady et al. [19] designed a tubular continuous flow photoreactor with the catalyst immobilized on glass plates for the photocatalytic degradation of C.I.
Acid Red 27 (AR27), an anionic monoazo dye of the acid class, in aqueous solution, as depicted in Figure 3(a). The photoreactor consists of four quartz tubes, connected serially from top to bottom by transparent polyethylene tubes. Each quartz tube contains three glass plates coated with P25 TiO2. Four low-pressure mercury UV lamps were used as irradiation sources. The removal efficiency rises linearly with light intensity, and the final COD was shown to be extremely low with an increasing flow rate [19]. A fixed-film continuous flow bioreactor with three components, a top, transparent tubes, and a bottom, has also been reported: a green sulphur bacterium, Chlorobium thiosulfatophilum, was utilized to extract hydrogen sulphide from synthetic industrial wastewater and transform it into elemental sulphur. The reactor had a total volume of 21.2 mL, with the active part formed by twenty 150 mm x 3 mm ID Tygon tubes, and the recovery rates of elemental sulphur ranged between 75-95% and 82-100%, respectively. High bacterial density and light intensity (light/volume) also contribute to the higher efficiency of the reactor [20]. A tubular continuous flow photoreactor for the degradation of oxalic acid was developed by Kobayakawa et al. [21]. It comprises an 8 mm diameter Pyrex glass tube filled with titanium dioxide photocatalyst immobilized on 2 mm diameter silica gel beads, as shown in Figure 3(b). The tubular reactor was filled with fine powder attached to 30-40 mesh silica gel, prepared by blending a concentrated slurry. Two sets of 10 W and 20 W black fluorescent UV lamps were used for irradiation, with the light tubes spaced 20 cm apart in flat arrays. As a result of using layers of catalyst-coated silica gel beads, higher efficiency in the breakdown of oxalic acid was achieved. High resistance to the flow of the solution, due to the densely packed granular silica gel particles, is identified as the major disadvantage [21]. Ali & Kim [22] designed a continuous flow photoreactor in which cylindrical anodized TiO2 nanotubes (TNTs) served as an excellent photocatalyst against methylene blue dye. The colour removal efficiency under UV light irradiation is about 89%, and the use of visible light shows even better results. TNTs and modified TNTs also show good catalytic reusability for the treatment of textile industry wastewater [22]. Chekir et al. [23] developed a tubular photocatalytic reactor operating under solar irradiation. In this study, the catalytic efficiencies of TiO2 and ZnO for the degradation of methylene blue were compared, the better of which shows 98% degradation efficiency. The concept of solar oxidation and its feasibility are well explained with this reactor [23]. Different types of continuous flow reactors, such as the photocatalytic membrane reactor, the fixed-bed continuous reactor, the continuous flow annular reactor, the double-cylindrical-shell photoreactor and the tubular continuous flow photoreactor, are compared in Table 1. Most of the photoreactors work with TiO2 and its composites as the catalyst. From the comparative analysis of the different continuous flow photoreactors, the photocatalytic membrane reactor shows the best results for the degradation of organic dye compounds. Light sources of various intensities, such as UV, visible light, sunlight and fluorescent light, were used for the different reactor types.
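The efficiencies quoted throughout this comparison are percentage removals, η = (C0 − C)/C0 × 100. The sketch below also extracts an apparent rate constant under the pseudo-first-order kinetics commonly assumed in this literature; that kinetic model is our assumption for illustration, not something stated by the cited works, and the numbers are invented examples:

```python
import math

def removal_efficiency(c0, c):
    """Percent degradation/decolorisation: eta = (C0 - C) / C0 * 100."""
    return (c0 - c) / c0 * 100.0

def pseudo_first_order_k(c0, c, t):
    """Apparent rate constant assuming C(t) = C0 * exp(-k t)."""
    return math.log(c0 / c) / t

# illustrative numbers only: 50 mg/L dye reduced to 5 mg/L in 60 min
print(removal_efficiency(50.0, 5.0))          # 90.0 %
print(pseudo_first_order_k(50.0, 5.0, 60.0))  # ~0.038 min^-1
```

Reporting an apparent rate constant alongside the percentage removal makes reactors with different residence times easier to compare.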
Conclusion

Photocatalysis is one of the most promising technologies for eliminating pathogenic microorganisms and degrading toxic industrial pollutants. Much research is being carried out to implement this technology at large scale in a time- and cost-effective manner; the main issue is to design an optimal reactor for the process. TiO2 and its composite photocatalysts show the best results compared with others. Continuous flow photocatalytic reactors have various advantages over other designs with regard to catalyst recovery. Various continuous photocatalytic reactors were discussed, and among them the most recent technique, the photocatalytic membrane reactor (PMR), showed 100% organic dye removal efficiency. By optimizing the reactor design, the remaining challenges can be rectified in the future.

Publisher's Note: AIJR remains neutral with regard to jurisdictional claims in institutional affiliations.

Figure 1: (a) Arrangement of immobilized beds of activated carbon or TiO2-AC during the adsorption process and photocatalytic degradation of cyanide in the continuous flow photoreactor developed by Coronel [7]. (b) Laboratory-scale setup developed by Vaiano [10]. (c) A continuous flow annular reactor with Ag3PO4 immobilized on pellets, developed by Petala [12].

Figure 2: (a) Schematic diagram of TiO2-coated silica gel beads immobilized in the double-cylindrical-shell (DCS) photoreactor system developed by Li [16]. (b) Arrangement and dimensions of the photoreactor designed by Rezaei [17]. (c) Schematic representation of the experimental setup for the photocatalytic disinfection of E. coli suspensions in the wall, slurry and fixed-bed reactor types developed by Van [18].
Health care at a premium

The Ontario government's reintroduction of health care premiums has proved the most controversial feature of its recent budget. Any new tax generates controversy, and a premium is a tax in all but name. This premium has proven especially controversial both because it broke an explicit election promise and because it touched a deep vein of ambivalence. Ontarians, like all Canadians, value public health care, but they have been told that lower taxes are an economic imperative. The premium is caught in the endless debate about the sustainability of, and reform to, our health care system.

The Ontario health care premium applies to individuals, is indexed to income and creates no new eligibility criteria for receipt of publicly funded health care. The premium rises from zero for those whose taxable income is less than $20 000 to $900 for those whose income is over $200 000. Overall, it will raise $2.4 billion in revenue, about 8% of annual public health care spending in Ontario. The only other provinces that use premiums as a financing mechanism are British Columbia and Alberta, where they are levied on households rather than on individuals and vary according to household structure (e.g., single, family). They are constant over most of the income range, however, with subsidies provided only to those with a low income. Both provinces raised their premiums in 2002, Alberta by 29% and British Columbia by 50%.

Although the Ontario premium increases in absolute terms as income grows, the proportion of income it represents falls as income rises; it is therefore a regressive tax. Income indexing makes it less regressive than British Columbia's and Alberta's premiums, as well as Ontario's previous fixed health care premium, which was abolished in 1989 in view of the burden it imposed on the poor. But the structure of the new premium perpetuates the regressive redistribution of wealth in Canada over the last decade. The large personal and corporate tax cuts of the last decade have disproportionately benefited wealthy Canadians,1 while new public revenues have invariably been raised from regressive sources outside the progressive income tax system: user fees, property taxes, "sin" taxes (liquor, cigarettes, gambling) and health care premiums. Canadian governments are systematically substituting regressive taxes for progressive ones. Health care premiums need not be regressive, but in the Canadian experience they are. Perhaps this is because a progressive system of premiums is a hard sell given the well-known inverse correlation between income and health care need. A "premium," after all, suggests a closer correspondence between payment and risk status than does a tax.
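A quick arithmetic check makes the regressivity concrete. The endpoints below come from the text (zero under $20 000 and $900 over $200 000); the middle rows are hypothetical placeholders, since the full premium schedule is not given here:

```python
def premium_share(income, premium):
    """Premium as a percentage of taxable income."""
    return premium / income * 100.0

# endpoints from the article; the $300 and $600 steps are hypothetical
schedule = [(25_000, 300), (60_000, 600), (250_000, 900)]
for income, premium in schedule:
    print(f"${income:,}: ${premium} premium = "
          f"{premium_share(income, premium):.2f}% of income")
# the share falls as income rises (1.20%, 1.00%, 0.36%): a regressive tax
```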
The Ontario government felt compelled by its dire fiscal situation to reintroduce health care premiums. The government could not, given the deficit it had inherited, honour its election promises to invest simultaneously in education and health care, balance the budget and hold the line on taxes. The government judged the best course of action (politically and otherwise) to be to invest in health and education while raising new revenue via the premium. The Canadian public, after all, has indicated a willingness to pay higher taxes dedicated to health care. Although the premium revenue has been explicitly linked to health care, the strategy must be seen in part as a move to create fiscal room to invest in education and to protect spending on programs other than health, many of which were fiscally starved under Harris's Common Sense Revolution.

Is the premium a sign of the much-claimed unsustainability of medicare? Here we must distinguish between economic sustainability and budgetary sustainability. Economic sustainability refers to our ability to maintain current and anticipated levels of health spending given the size of our overall economy. Canada's health care system is, from a purely economic perspective, eminently sustainable. Canada spends between 9% and 10% of its national income on health care, 2 well within the range of other nations. Spending on medicare has constituted a surprisingly stable share of national income for 3 decades. 1 Moreover, health spending on medicare services has risen more slowly than spending in those components of the system with mixed public-private financing or predominantly private financing. 2 The international evidence is incontrovertible: single-payer publicly financed health care is far more economically sustainable than is a multi-payer system with substantial private finance. 1

Budgetary sustainability, which is at risk, refers to the ability of those organizations charged with paying the bills to do so with the budget available to them. Since the federal and provincial governments underwent a painful, and necessary, fiscal retrenchment in the early 1990s to restore their fiscal probity, they have spent a large portion of their fiscal dividend on tax cuts rather than on program spending. The combined public revenues forgone by cuts to federal and provincial personal and corporate income taxes between 1996/97 and 2003/04 are estimated to be $170 billion; in 2003/04 alone the public sector revenue forgone is estimated to be $49.9 billion, more than 60% of current public expenditure on health care. 1 As a society, we have systematically constrained the income of the key organizations, the federal and provincial governments, most responsible for financing health care. The result, not surprisingly, has been to make it impossible to sustain publicly financed health care without making cuts into non-health program spending. But there is nothing immutable, no iron law of economics, behind this phenomenon: it is the result of political choices. Given economic sustainability, budgetary sustainability is fundamentally a political, not an economic, matter.

[Photo: Ontario Premier Dalton McGuinty: It's not a tax, it's just a premium. CP Photo / Brent Foster]

These fiscal choices may reflect the trade-offs that Canadians want to make (they did elect the governments, after all), though polling data suggest that matters are a bit more clouded. In this respect, it is worth noting that, although it promised not to increase taxes, the Ontario Liberal government was elected campaigning against Conservative promises of more tax cuts. Canadians can have publicly financed health care if they want it. Economic history shows that societies can publicly finance social and health programs without sacrificing economic performance. 3 What we can't simultaneously have is public health care, tax cuts and balanced budgets.
Finally, although the health care premium has been promoted as an essential part of health system reform, it likely will not play an instrumental role in that reform. It may buy more services (depending on fee and wage settlements), but there is little evidence that the billions of additional public dollars pumped into health care since 1997/98 have bought any meaningful change. More reform was probably accomplished in the mid-1990s, during the short period of fiscal retrenchment, than at any time in recent history. In fact, the new money will make it easier to avoid the hard changes our system requires. Real reform needs political will and strategies to challenge the status quo.
A renewed focus on primary health care: revitalize or reframe?

The year 2008 celebrated 30 years of Primary Health Care (PHC) policy emerging from the Alma Ata Declaration with the publication of two key reports, the World Health Report 2008 and the Report of the Commission on the Social Determinants of Health. Both reports reaffirmed the relevance of PHC in terms of its vision and values in today's world. However, important challenges in terms of defining PHC, equity and empowerment need to be addressed. This article takes the form of a commentary reviewing developments in the last 30 years and discusses the future of this policy. Three challenges are put forward for discussion: (i) the challenge of moving away from a narrow technical bio-medical paradigm of health to a broader social determinants approach, and the need to differentiate primary care from primary health care; (ii) the challenge of tackling the equity implications of the market oriented reforms and ensuring that the role of the State in the provision of welfare services is not further weakened; and (iii) the challenge of finding ways to develop local community commitments, especially in terms of empowerment. These challenges need to be addressed if PHC is to remain relevant in today's context. The paper concludes that it is not sufficient to revitalize the PHC of the Alma Ata Declaration; it must be reframed in light of the above discussion.

Introduction

In 1978, member States of the World Health Organization (WHO) attending the meeting in Alma Ata supported the policy of Primary Health Care (PHC) [1]. This policy shifted the focus of health improvements from the mere provision of health services to the larger context of the relationship between health and social and economic development. Thirty years later, WHO and its regional affiliates called for PHC to be revitalised [2]. The purpose of this paper is to identify the major challenges to this call and investigate the commitments necessary to achieve this objective. It argues that today it is not sufficient to seek ways of revitalizing PHC. Rather, it is necessary to reframe this policy to incorporate present issues arising from the national and global context.

Background

PHC analyzed the reasons for health improvements beyond the technical biomedical intervention paradigm. It argued that other factors were equally important determinants. PHC in 1978 was underpinned by the concept of social justice and identified the main principles of equity and social justice as key to health improvements. It also highlighted the role of prevention, multisectoral collaboration, appropriate technology and sustainability. The need to improve the lot of those living in abject poverty was a major emphasis. PHC was a statement of values as much as a strategy for health care. The present call to "revitalize" PHC is to once again bring these values "to life; to animate" them [3]. It can be argued, however, that PHC in the global context of health care and health needs more than revitalization. It is necessary to re-"frame or shape" [3] PHC so that these principles can be translated from rhetoric into reality. The struggle to put PHC policy into practice can be seen in the debate between Comprehensive and Selective PHC. The former argued that health improvements, including those related to major diseases, needed to be addressed in a context where health care delivery takes account of the principles and approaches described above.
The latter argued that, to achieve PHC, it was necessary to focus on disease control, targeting diseases that were more prevalent in terms of morbidity and mortality and that could be addressed cost effectively [4]. The debate continues today.

Comprehensive PHC has shown some remarkable successes, although it has not been a history of smooth progression. Notable examples of good programs have been seen in the NGO (non-government organization) sector. These programs are often small-scale projects run by charismatic leaders. Illustrations include Jamkhed in India, which became a model for comprehensive PHC. It provided evidence of the value of Community Health Workers (CHWs) and a community development approach to health. Other examples can be found in the book by Taylor-Ide and Taylor [5]. On a national scale, evidence is more restricted. The world's two most populated countries returned to PHC principles to address the health needs of the poor. China was the country that inspired PHC thinking through its attention to rural health care and the use of local people called "barefoot doctors" (CHWs) to give first-line health care. After a period of market oriented reforms in health care and the resulting deterioration of the health of rural people, who make up the majority of the nation's population, China is now committing huge additional resources to revitalising rural networks based on PHC [6]. India, which was one of the first countries to create a national community health worker scheme after the Alma Ata conference and subsequently saw the scheme disappear within 10 years, has now begun to revive the program in the context of the National Rural Health Mission [7]. Thailand, having adopted a "Basic Needs" approach in the 1970s, established a health system based on an alliance between government and NGOs that integrated PHC programs into other development programs. This alliance has produced both better health and economic improvements [8].

The year 2008 celebrated 30 years of PHC policy. Two major reports, the World Health Report 2008 [9] and the Report of the Commission on the Social Determinants of Health from WHO [10], provided key contributions to this celebration. Both reaffirmed the relevance of PHC in terms of its vision and values in today's world. In addition, a number of articles in the special issue of The Lancet [11] and in the Global Social Health Policy Forum, written by public health researchers and activists, summarise the influence of PHC on health policy [12]. However, at the risk of an understatement, the world has changed radically since 1978. The world in 2008 can be broadly described as one characterised by globalisation, rapid communication and an increasing gap between rich and poor. In the context of health and health care, it can be described as one which has seen a shift from major concerns about communicable diseases to chronic diseases (from targeted single interventions to concerns about environment, lifestyle and behaviour); ideological changes (as dictated by neoliberal economics and new public management) that, along with the dominance of the Bretton Woods institutions over the UN organisations, resulted in developing countries embracing market oriented health sector reforms [13]; and a shift from a medical professional monopoly on decisions and resource allocation to a much wider role for lay people [14]. This situation presents large challenges and demands serious rethinking about the PHC vision.
Challenge One: Agreeing upon definitions

There is a challenge of getting a consensus among those involved in health care delivery and policy that health improvements must be seen in the context of linking health care and human development. In 1978, the idea of seeing health as a reflection of the wider socioeconomic determinants was questioned by many of those working in the field of health. In putting forward the PHC policy, WHO used the personal experiences of people to give evidence of the link between health and development. Drawn mainly from less developed country experiences, those who contributed to this analysis were often charismatic doctors whose leadership and life's work improved the health of the poor with whom they worked. Coinciding with the widely reported achievements in health improvements in the newly created Peoples' Republic of China, which viewed health as integral to development, the arguments that health was more than medicine and services gained credence among providers and policy makers. A book entitled "Health by the People", edited by Dr. Ken Newell, head of the division that crafted the PHC policy, published proof of these accomplishments [15].

However, these views and arguments were not shared by the majority of those involved in health care. Many believed that there was little hard evidence to support the view that the socio-economic environment was as critical to health improvements as medicine and service delivery. As a result, although the arguments of social justice and equity were received with sympathy, implementation of policy mostly focused on service provision. One example is the selective vs. comprehensive debate discussed above. Another was the confusion of the concepts "Primary Health Care" and "primary care". Those concerned with health care delivery could embrace the PHC vision in the context of the service delivery for which they were responsible. As a result, their focus became providing first-line health services, "primary care", for communities, without engaging in the wider analysis of the conditions in which poor health was created or seriously engaging in activities to promote equity and community participation. The concept of primary care for many of these people was interchangeable with Primary Health Care and has continued to be so. A recent special issue of The Lancet celebrating 30 years since Alma Ata is a good example of the confusion in the understanding of the differences between these two concepts [11]. A lack of a wider context for dialogue about the causes of and solutions to poor health creates considerable confusion for both policy planners and program implementers. In an attempt to clarify the relationship between Primary Health Care and primary care, the Report of the Commission on the Social Determinants of Health states: "The Alma Ata Declaration promoted Primary Health Care (PHC) as its central means toward good and fair global health-not simply health services at the primary care level (though that was important) but rather a health system model that acted also on the underlying social, economic and political causes of poor health" [10, pg. 33]. A commitment to reject the duality between Comprehensive PHC and Selective PHC and an agreement on a standard definition of PHC and the attributes it encompasses are necessary to create solid frameworks for policy analysis and health promotion.
Challenge Two: Ensuring equity

A second challenge is addressing the equity implications of the market oriented reforms introduced in a number of developing countries. The Report of the Commission on the Social Determinants of Health highlighted equity, both in terms of distribution and in terms of power and politics. Both PHC and the Report of the Commission call for universal coverage. They highlight the problems of market oriented approaches and give evidence of their failure to meet objectives for improving health for the poor. The reduced access to health care as a result of the Structural Adjustment Programs (SAP) of the 1980s provides the most graphic example, illustrated by the reduction of life expectancy in Africa [16]. In the field of health, these programs, combined with the neo-liberal emphasis on the role of the market economy in improving efficiency and effectiveness, have resulted in the promotion of health system reforms (HSR). These market oriented reforms include decentralization, public private partnerships, promotion of the private sector, and the introduction of user charges [17,18]. Although often couched in the PHC principles of equity and participation, they respond to the demands of efficiency at the cost of equity considerations [13]. As a result, short-term gains, in many cases, have overridden longer-term concerns that address the root causes of poverty and poor health [19].

The equity implications of the market oriented reforms are well documented. A classic example is the introduction of user charges. User charges for health care were introduced as part of the structural adjustment programs in a number of countries. However, the expected benefits in terms of efficiency and equity were not forthcoming. Negative consequences in terms of access and utilisation of health services were observed, especially among the lower socio-economic groups across countries. Given the highly regressive nature of user charges and the lack of effective exemption mechanisms, it is therefore not surprising to observe that a number of countries in sub-Saharan Africa have abolished user fees or are in the process of doing so [20]. At the global level, the global public private partnerships (e.g., the Global Alliance for Vaccines and Immunisation (GAVI) and the Global Fund to Fight AIDS, TB and Malaria (GFATM)) have funded technology for health focusing on profit rather than people and have reinforced the vertical disease program approach. This approach has been criticised for distorting national priorities, weakening the comprehensive integrated health systems approach and supporting the re-verticalization of planning [21], as energies are directed towards implementing specific vertical disease programs.

Challenge Three: Supporting community participation and empowerment

A final major challenge is to examine and seek ways forward to develop local community commitments. Community participation was identified as a key principle of PHC. There was little distinction between participation as community mobilisation (having community people accept professionals' assessment and activities for health improvements) and community empowerment (transforming attitudes and behaviours so that communities and individuals can take decisions about their own lives). In recent years, recognition of the differences has increased and the term participation has increasingly been replaced by empowerment, calling attention to the importance of power and control over decisions, especially resource allocation.
The direct link between community participation and empowerment has not been easy to establish [22]. However, the link has been strengthened by a recent systematic review undertaken by the Working Group on Community-Based Primary Health Care of the American Public Health Association [23]. In a paper published in Global Public Health, their findings show that community involvement, including house-to-house visits by health staff, group meetings for education and support on health issues, outreach workers providing health services in the community and a community-level health worker (CHW) to support community-based health management, has measurable effects on the improvement of child survival. They also highlight empowering communities (meaning community people gain the skills, information and confidence to make decisions about the choices affecting their own lives) as an overarching strategy that underpins these gains.

Issues around participation and empowerment have also been promoted in the context of the governance of health service provision. A growing literature argues that concerns about the accountability of public expenditures should be placed in the hands of the intended beneficiaries of those services. These issues centre on both the accountability of services to perform to the satisfaction of the users and the accountability of finances to be used in the way in which they have been allocated. Concerns are developed in discussions about "voice", whereby service users have the ability and capacity to demand that providers perform to user satisfaction. Evidence from the implementation of the Bamako Initiative shows how accountability can catalyze improvements in the efficiency and effectiveness of local service delivery [24]. Commitments to meet this challenge continually demand that professionals hold serious dialogues with those for whom they provide service and care. To date, this dialogue has often been delayed by several factors. Firstly, there are the attitudes of professionals who tend to disregard the opinions and views of those outside the profession. Secondly, there is the historic view that health interventions can only be verified by outcome measures. This view ignores the vital role of process in sustaining the improvements that bio-medicine and technology contribute. The World Health Report 2008 discusses in detail the role of service providers yet does not address the second issue of process. Commitment to addressing both issues is critical to PHC in the present context.

Conclusion

Above we have identified the challenges and necessary commitments that need to be addressed if PHC is to remain relevant. Revitalizing PHC principles without developing a framework to address concrete measures for health improvements is not sufficient. The challenges discussed above need to be examined in a systematic and integrated way to produce flexible policy options and solutions that can be implemented. To do this, particularly in a time of financial crisis, requires a willingness to engage in dialogue and to appreciate a range of different and often contradictory views while working toward consensus. It is clear that more of the same will not answer the increasingly risky situations emerging not only from lack of money but also from climate change, international political tensions and growing anxiety about resource availability due to rapidly expanding global populations. This does suggest that perhaps revitalizing PHC is not sufficient.
What is needed is a reframing of the concept in light of the above discussions around the issues identified.
Using Gaussian Processes for Rumour Stance Classification in Social Media

Social media tend to be rife with rumours while new reports are released piecemeal during breaking news. Interestingly, one can mine the multiple reactions expressed by social media users in those situations, exploring their stance towards rumours, ultimately enabling the flagging of highly disputed rumours as being potentially false. In this work, we set out to develop an automated, supervised classifier that uses multi-task learning to classify the stance expressed in each individual tweet in a rumourous conversation as either supporting, denying or questioning the rumour. Using a classifier based on Gaussian Processes, and exploring its effectiveness on two datasets with very different characteristics and varying distributions of stances, we show that our approach consistently outperforms competitive baseline classifiers. Our classifier is especially effective in estimating the distribution of the different types of stance associated with a given rumour, which we set forth as a desired characteristic for a rumour-tracking system that will warn both ordinary users of Twitter and professional news practitioners when a rumour is being rebutted.

INTRODUCTION

There is an increasing need to interpret and act upon rumours spreading quickly through social media during breaking news, where new reports are released piecemeal and often have an unverified status at the time of posting. Previous research has posited the damage that the diffusion of false rumours can cause in society, and that corrections issued by news organisations or state agencies such as the police may not necessarily achieve the desired effect sufficiently quickly [Lewandowsky et al. 2012; Procter et al. 2013a]. Being able to determine the accuracy of reports is therefore crucial in these scenarios. However, the veracity of rumours in circulation is usually hard to establish [Allport and Postman 1947], since as many views and testimonies as possible need to be assembled and examined in order to reach a final judgement. Examples of rumours that were later disproven, after being widely circulated, include those spawned on Twitter during a 2010 earthquake in Chile, which claimed a volcano eruption and a tsunami warning in Valparaiso [Mendoza et al. 2010]. Another example is the England riots in 2011, where false rumours claimed that rioters were going to attack Birmingham's Children's Hospital and that animals had escaped from London Zoo [Procter et al. 2013b].

Previous work by ourselves and others has argued that looking at how users in social media orient to rumours is a crucial first step towards making an informed judgement on the veracity of a rumourous report [Zubiaga et al. 2016; Tolmie et al. 2015; Mendoza et al. 2010]. For example, in the case of the riots in England in August 2011, Procter et al. manually analysed the stance expressed by users in social media towards rumours [Procter et al. 2013b]. Each tweet discussing a rumour was manually categorised as supporting, denying or questioning it. It is obvious that manual methods have their disadvantages in that they do not scale well; the ability to perform stance categorisation of tweets in an automated way would be of great use in tracking rumours, flagging those that are largely denied or questioned as being more likely to be false.
Determining the stance of social media posts automatically has been attracting increasing interest in the scientific community in recent years, as this is a useful first step towards more in-depth rumour analysis:

- It can help detect rumours and flag them as such more quickly [Zhao et al. 2015].
- It is useful for tracking public opinion about rumours and hence for monitoring their wider effect on society [Derczynski et al. 2015; Liu et al. 2015].

Work on automatic rumour stance classification, however, is still in its infancy, with some methods ignoring temporal ordering and rumour identities (e.g. [Qazvinian et al. 2011]), while others are rule-based and thus have unclear generalisability to new rumours [Zhao et al. 2015]. Our work advances the state of the art in tweet-level stance classification through multi-task learning and Gaussian Processes. This article substantially extends our earlier short paper [Lukasik et al. 2015a], firstly by using a second dataset, which enables us to test the generalisability of our results. Secondly, a comparison against additional baseline classifiers and recent state-of-the-art approaches has been added to the experimental section. Lastly, we carried out a more thorough analysis of the results, now including per-class performance scores, which furthers our understanding of rumour stance classification.

In comparison to the state of the art, our approach is novel in several crucial aspects:

(1) We perform stance classification on unseen rumours, given a training set of already annotated rumours on different topics and from different time periods.
(2) The temporal ordering of tweets on a given rumour is respected, both during training and stance classification.
(3) Generalisability to new datasets is a core aspect of our methodology, which is built on the premise that patterns of stance should exhibit similar characteristics across different rumours.

Based on the assumption of a common underlying linguistic signal in rumours on different topics, we build a transfer learning system based on Gaussian Processes that can classify stance in newly emerging rumours. The paper reports results on two different rumour datasets and explores two different experimental settings, one without any training data from the target rumour and one with very limited training data. We refer to these as:

- Leave One Out: all tweets pertaining to a target rumour are used only for testing, i.e. method performance on a completely unseen rumour is reported;
- Leave Part Out: the first few tweets of a target rumour (as annotated by journalists) are added to the training set of the Gaussian Process classifier, together with tweets pertaining to older rumours. The rest of the tweets on the target rumour are used for evaluation.

Our results demonstrate that Gaussian Process-based multi-task learning leads to significantly improved performance over state-of-the-art methods and competitive baselines, as demonstrated on two very different datasets. The classifier relying on Gaussian Processes performs particularly well relative to the baseline classifiers in the Leave Part Out setting, proving especially strong at estimating the distribution of supporting, denying and questioning tweets associated with a rumour, which is the key aspect where it outperforms the baseline classifiers.
RELATED WORK

This section provides a more in-depth motivation of the rumour stance detection task and an overview of the state-of-the-art methods and their limitations. First, however, let us start by introducing the formal definition of a rumour.

Rumour Definition

There have been multiple attempts at defining rumours in the literature. Most of them are complementary to one another, with slight variations depending on the context of their analyses. The core concept that most researchers agree on matches the definition that major dictionaries provide, such as the Oxford English Dictionary defining a rumour as "a currently circulating story or report of uncertain or doubtful truth". For instance, DiFonzo and Bordia [DiFonzo and Bordia 2007] defined rumours as "unverified and instrumentally relevant information statements in circulation."

Researchers have long looked at the properties of rumours to understand their diffusion patterns and to distinguish them from other kinds of information that people habitually share [Donovan 2007]. Allport and Postman [Allport and Postman 1947] claimed that rumours spread due to two factors: people want to find meaning in things and, when faced with ambiguity, people try to find meaning by telling stories. The latter factor also explains why rumours tend to change over time by becoming shorter, sharper and more coherent. This is the case, it is argued, because in this way rumours explain things more clearly. On the other hand, Rosnow [Rosnow 1991] claimed that there are four important factors for rumour transmission: rumours must be outcome-relevant to the listener, must increase personal anxiety, be somewhat credible and be uncertain. Furthermore, Shibutani [Shibutani 1969] defined rumours to be a recurrent form of communication through which men [sic] caught together in an ambiguous situation attempt to construct a meaningful interpretation of it by pooling their intellectual resources; it might be regarded as a form of collective problem-solving. In contrast with these three theories, Guerin and Miyazaki [Guerin and Miyazaki 2006] state that a rumour is a form of relationship-enhancing talk. Building on their previous work, they recall that many ways of talking serve the purpose of forming and maintaining social relationships, and rumours, they say, can be explained by such means.

In our work, we adhere to the widely accepted fact that rumours are unverified pieces of information. More specifically, following [Zubiaga et al. 2016], we regard a rumour in the context of breaking news as a "circulating story of questionable veracity, which is apparently credible but hard to verify, and produces sufficient skepticism and/or anxiety so as to motivate finding out the actual truth".

Descriptive Analysis of Rumours in Social Media

One particularly influential piece of work in the field of rumour analysis in social media is that by Mendoza et al. [Mendoza et al. 2010]. By manually analysing the data from the earthquake in Chile in 2010, the authors selected 7 confirmed truths and 7 false rumours, each consisting of close to 1000 tweets or more. The veracity value of the selected stories was corroborated by using reliable sources. Each tweet from each of the news items was manually classified into one of the following classes: affirmation, denial, questioning, unknown or unrelated. In this way, each tweet was classified according to the position it showed towards the topic it was about.
The study showed that a much higher percentage of tweets about false rumours deny the respective rumours (approximately 50%). This is in contrast to rumours later proven to be true, where only 0.3% of tweets were denials. Based on this, the authors claimed that rumours can be detected using aggregate analysis of the stance expressed in tweets.

Recent research put together in a special issue on rumours and social media [Papadopoulos et al. 2016] also shows the increasing interest of the scientific community in the topic. [Webb et al. 2016] proposed an agenda for research that establishes an interdisciplinary methodology to explore in full the propagation and regulation of unverified content on social media. [Middleton and Krivcovs 2016] described an approach for geoparsing social media posts in real time, which can help determine the veracity of rumours by tracking down the poster's location. The contribution of [Hamdi et al. 2016] to rumour resolution is an automated system that rates the level of trust of users in social media, hence making it possible to discount users with low reputation. Complementary to these approaches, our objective is to determine the stance of tweets towards a rumour, which can then be aggregated to establish an overall veracity score for the rumour.

Another study that shows insightful conclusions with respect to stance towards rumours is that by Procter et al. [Procter et al. 2013b]. The authors conducted an analysis of a large dataset of tweets related to the riots in the UK, which took place in August 2011. The dataset collected in the riots study is one of the two used in our experiments, and we describe it in more detail in section 3.3. After grouping the tweets into topics, where each represents a rumour, they were manually categorised into different classes, namely: (1) media reports, which are tweets sent by mainstream media accounts or journalists connected to media; (2) pictures, being tweets uploading a link to images; (3) rumours, being tweets claiming or counter-claiming something without giving any source; and (4) reactions, consisting of tweets responding to the riots phenomenon or to a specific event related to the riots. Besides the categorisation of tweets by type, Procter et al. also manually categorised the accounts posting tweets into different types, such as mainstream media, online-only media, activists, celebrities and bots, among others. What is interesting for the purposes of our work is that the authors observed the following four-step pattern recurrently occurring across the collected rumours: (1) a rumour is initiated by someone claiming it may be true; (2) the rumour spreads together with its reformulations; (3) counter claims appear; (4) a consensus emerges about the credibility of the rumour. This leads the authors to the conclusion that the process of 'inter-subjective sense making' by Twitter users plays a key role in exposing false rumours. This finding, together with subsequent work by Tolmie et al. into the conversational characteristics of microblogging [Tolmie et al. 2015], has motivated our research into automating stance classification as a methodology for accelerating this process.

Rumour Stance Classification

Qazvinian et al. [Qazvinian et al. 2011] conducted early work on rumour stance classification. They introduced a system that analyzes a set of tweets associated with a given topic predefined by the user. Their system would then classify each of the tweets as supporting, denying or questioning the rumour.
We have adopted this scheme in terms of the different types of stance in the work we report here. However, their work ended up merging denying and questioning tweets for each rumour into a single class, converting it into a 2-way classification problem of supporting vs denying-or-questioning. Instead, we keep those classes separate and, following Procter et al., we conduct a 3-way classification [Zubiaga et al. 2014]. Another important characteristic that differentiates Qazvinian et al.'s work from ours is that they looked at support and denial of long-standing rumours, such as the persistent conjecture over whether Barack Obama is a Muslim. By contrast, we look at rumours that emerge in the context of fast-paced, breaking news situations, where new information is released piecemeal, often with statements that employ hedging words such as "reportedly" or "according to sources" to make it clear that the information is not fully verified at the time of posting. This is a very different scenario from that in Qazvinian et al.'s work, as the emergence of rumourous reports can lead to sudden changes in vocabulary, leading to situations that might not have been observed in the training data.

Another aspect that we deal with differently in our work, aiming to make it more realistically applicable to a real-world scenario, is that we apply the method to each rumour separately. Ultimately, our goal is to classify new, emerging rumours, which can differ from what the classifier has observed in the training set. Previous work ignored this separation of rumours by pooling together tweets from all the rumours in their collections, both in training and test data. By contrast, we consider the rumour stance classification problem as a form of transfer learning and seek to classify unseen rumours by training the classifier on previously labelled rumours. We argue that this makes for a more realistic classification scenario towards implementing a real-world rumour-tracking system.

Following a short gap, there has been a burst of renewed interest in this task since 2015. For example, Liu et al. [Liu et al. 2015] introduce rule-based methods for stance classification, which were shown to outperform the approach by [Qazvinian et al. 2011]. Similarly, [Zhao et al. 2015] use regular expressions instead of an automated method for rumour stance classification. Hamidian and Diab [Hamidian and Diab 2016] use Tweet Latent Vectors to assess the ability to perform 2-way classification of the stance of tweets as either supporting or denying a rumour. They study the extent to which a model trained on historical tweets can be used for classifying new tweets on the same rumour. This, however, limits the method's applicability to long-running rumours only. The work closest to ours in terms of aims is Zeng et al. [Zeng et al. 2016], who explored the use of three different classifiers for automated rumour stance classification on unseen rumours. In their case, classifiers were set up on a 2-way classification problem dealing with tweets that support or deny rumours. In the present work, we extend this research by performing 3-way classification that also deals with tweets that question the rumours. Moreover, we adopt the three classifiers used in their work, namely Random Forest, Naive Bayes and Logistic Regression, as baselines in our work. Lastly, researchers [Zhao et al. 2015; Ma et al. 2015] have focused on the related task of detecting rumours in social media.
While a rumour detection system could well be the step that is applied prior to our stance classification system, here we assume that rumours have already been identified, and focus on the subsequent step of determining stances.

Definition of the Task

Individual tweets may discuss the same rumour in different ways, where each user expresses their own stance towards the rumour. Within this scenario, we define the tweet-level rumour stance classification task as that in which a classifier has to determine the stance of each tweet towards the rumour. More specifically, given the tweet $t_i$ as input, the classifier has to determine which of the set $Y = \{\text{supporting}, \text{denying}, \text{questioning}\}$ applies to the tweet, $y(t_i) \in Y$. Here we define the task as a supervised classification problem, where the classifier is trained on a labelled set of tweets and is applied to tweets on a new, unseen set of rumours.

Problem formulation

Let $R$ be a set of rumours, each of which consists of tweets discussing it: $\forall r \in R$, $T_r = \{t^r_1, \dots, t^r_{n_r}\}$. $T = \bigcup_{r \in R} T_r$ is the complete set of tweets from all rumours. Each tweet is classified as supporting, denying or questioning with respect to its rumour: $y(t_i) \in \{s, d, q\}$.

We formulate the problem in two different settings. First, we consider the Leave One Out (LOO) setting, which means that for each rumour $r \in R$, we construct the test set equal to $T_r$ and the training set equal to $T \setminus T_r$. This is the most challenging scenario, where the test set contains an entirely unseen rumour. The second setting is Leave Part Out (LPO). In this formulation, a very small number of initial tweets from the target rumour, $\{t^r_1, \dots, t^r_k\}$, is added to the training set. This scenario becomes applicable typically soon after a rumour breaks out and journalists have started monitoring and analysing the related tweet stream. The experimental section investigates how the number of initial training tweets influences classification performance on a fixed test set, namely $\{t^r_l, \dots, t^r_{n_r}\}$, $l > k$.

The tweet-level stance classification problem here assumes that tweets from the training set are already labelled with the rumour discussed and the stance expressed towards it. This information can be acquired either via manual annotation as part of expert analysis, as is the case with our dataset, or automatically, e.g. using pattern-based rumour detection [Zhao et al. 2015]. Our method is then used to classify the stance expressed in each new tweet from the test set.
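As an illustration of the two settings, the following is a minimal sketch of how the LOO and LPO train/test splits can be materialised, assuming tweets are stored per rumour in chronological order; the data structures and function names are our own, not the authors' released code.

```python
from typing import Dict, List, Tuple

# Each rumour maps to its tweets in temporal order; labels are 's', 'd' or 'q'.
Tweet = Tuple[str, str]  # (text, stance label)

def leave_one_out(rumours: Dict[str, List[Tweet]], target: str):
    """LOO: the target rumour is entirely unseen at training time."""
    train = [t for r, tweets in rumours.items() if r != target for t in tweets]
    test = list(rumours[target])
    return train, test

def leave_part_out(rumours: Dict[str, List[Tweet]], target: str, k: int):
    """LPO: the first k tweets of the target rumour join the training set;
    the remainder of the target rumour is held out for evaluation."""
    train, test = leave_one_out(rumours, target)
    train += test[:k]   # temporal ordering respected: the earliest k tweets
    return train, test[k:]
```

Iterating `target` over all rumours yields one fold per rumour, and varying `k` from 10 to 50 reproduces the LPO training sizes reported in the experimental section.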
Datasets

We evaluate our work on two recent datasets from previous work, both of which suit our needs. We do not use the dataset by [Qazvinian et al. 2011], given that it uses a different annotation scheme limited to two categories of stance. The reason why we use the two datasets separately instead of combining them is that they have very different characteristics; our experiments thus enable us to assess the ability of our classifier to deal with these different characteristics.

3.3.1. England riots dataset. The first dataset consists of several rumours circulating on Twitter during the England riots in 2011 (see Table II). The dataset was collected by tracking a long set of keywords associated with the event. The dataset was analysed and annotated manually as supporting, questioning, or denying a rumour by a team of social scientists studying the role of social media during the riots [Procter et al. 2013b]. As can be seen from the dataset overview in Table II, different rumours exhibit varying proportions of supporting, denying and questioning tweets, which was also observed in other studies of rumours [Mendoza et al. 2010; Qazvinian et al. 2011]. These variations in the number of instances for each class across rumours posit the challenge of properly modelling a rumour stance classifier: the classifier needs to be able to deal with a test set where the distribution of classes can be very different to that observed in the training set. Thus, we perform 7-fold cross-validation in the experiments, each fold having six rumours in the training set and the remaining rumour in the test set. The seven rumours were as follows [Procter et al. 2013b]:

- Rioters had attacked London Zoo and released the animals.
- Rioters were gathering to attack Birmingham's Children's Hospital.
- Rioters had set the London Eye on fire.
- Police had beaten a sixteen year old girl.
- The Army was being mobilised in London to deal with the rioters.
- Rioters had broken into a McDonalds and set about cooking their own food.
- A store belonging to the Miss Selfridge retail group had been set on fire in Manchester.

3.3.2. PHEME dataset. Additionally, we use another rumour dataset associated with five different events, which was collected as part of the PHEME FP7 research project and is described in detail in [Zubiaga et al. 2016, 2015]. Note that the authors released datasets for nine events, but here we remove the non-English datasets, as well as small English datasets each of which includes only 1 rumour, as opposed to the 40+ rumours in each of the datasets that we are using. We summarise the details of the five events we use from this dataset in Table III.

In contrast to the England riots dataset, the PHEME datasets were collected by tracking conversations initiated by rumourous tweets. This was done in two steps. First, we collected tweets that contained a set of keywords associated with a story unfolding in the news; we will be referring to the latter as an event. Next, we sampled the most retweeted tweets, on the basis that rumours by definition should be "a circulating story which produces sufficient skepticism or anxiety". This allows us to filter potentially rumourous tweets and collect the conversations initiated by those. Conversations were tracked by collecting replies to tweets and, therefore, unlike the England riots dataset, this dataset also comprises replying tweets by definition. This is an important characteristic of the dataset, as one would expect that replies are generally shorter and potentially less descriptive than the source tweets that initiated the conversation. We take this difference into consideration when performing the analysis of our results. This dataset includes tweets associated with the five events summarised in Table III. In this case, we perform 5-fold cross-validation, having four events in the training set and the remaining event in the test set for each fold.

EXPERIMENT SETTINGS

This section details the features and evaluation measures used in our experiments on tweet-level stance classification.

Classifiers

We begin by describing the classifiers we use for our experimentation, including Gaussian Processes, as well as a set of competitive baseline classifiers that we use for comparison. A Gaussian Process defines a prior over functions, which, combined with the likelihood of data points, gives rise to a posterior over functions explaining the data.
The key concept is a kernel function, which specifies how outputs correlate as a function of the input. Thus, from a practitioner's point of view, a key step is to choose an appropriate kernel function capturing the similarities between inputs. We use Gaussian Processes because this probabilistic kernelised framework avoids the need for expensive cross-validation for hyperparameter selection; instead, the marginal likelihood of the data can be used for hyperparameter selection.

The central concept of Gaussian Process Classification (GPC; [Rasmussen and Williams 2005]) is a latent function $f$ over inputs $\mathbf{x}$: $f(\mathbf{x}) \sim \mathcal{GP}(m(\mathbf{x}), k(\mathbf{x}, \mathbf{x}'))$, where $m$ is the mean function, assumed to be 0, and $k$ is the kernel function, specifying the degree to which the outputs covary as a function of the inputs. We use a linear kernel, $k(\mathbf{x}, \mathbf{x}') = \sigma^2 \mathbf{x}^\top \mathbf{x}'$. The latent function is then mapped by the probit function $\Phi(f)$ into the range $[0, 1]$, such that the resulting value can be interpreted as $p(y = 1 \mid \mathbf{x})$. The GPC posterior is calculated as

$$p(\mathbf{f} \mid X, \mathbf{y}) = \frac{p(\mathbf{y} \mid \mathbf{f})\, p(\mathbf{f} \mid X)}{p(\mathbf{y} \mid X)},$$

where $p(\mathbf{y} \mid \mathbf{f}) = \prod_j \Phi(f_j)^{y_j} (1 - \Phi(f_j))^{1 - y_j}$ is the Bernoulli likelihood of class $\mathbf{y}$. After calculating the above posterior from the training data, this is used in prediction, i.e.,

$$p(y_* = 1 \mid X, \mathbf{y}, \mathbf{x}_*) = \int \Phi(f_*)\, p(f_* \mid X, \mathbf{y}, \mathbf{x}_*)\, df_*.$$

The above integrals are intractable and approximation techniques are required to solve them. There exist various methods to deal with calculating the posterior; here we use Expectation Propagation (EP; [Minka and Lafferty 2002]). In EP, the posterior is approximated by a fully factorised distribution, where each component is assumed to be an unnormalised Gaussian.

In order to conduct multi-class classification, we perform a one-vs-all classification for each label and then assign the label with the highest likelihood amongst the three (supporting, denying, questioning). We choose this method due to the interpretability of the results, similar to recent work on occupational class classification [Preotiuc-Pietro et al. 2015].

Intrinsic Coregionalisation Model. In the Leave-Part-Out (LPO) setting, initial labelled tweets from the target rumour are observed as well, as opposed to the Leave-One-Out (LOO) setting. In the case of LPO, we propose to weigh the importance of tweets from the reference rumours depending on how similar their characteristics are to the tweets from the target rumour available for training. To handle this with GPC, we use a multiple-output model based on the Intrinsic Coregionalisation Model (ICM; [Álvarez et al. 2012]). This model has already been applied successfully to NLP regression problems [Beck et al. 2014] and it can also be applied to classification ones. ICM parametrizes the kernel by a matrix which represents the extent of covariance between pairs of tasks. The complete kernel takes the form

$$k\big((\mathbf{x}, d), (\mathbf{x}', d')\big) = B_{d,d'}\, k_{\text{data}}(\mathbf{x}, \mathbf{x}'),$$

where $B$ is a square coregionalisation matrix, $d$ and $d'$ denote the tasks of the two inputs and $k_{\text{data}}$ is a kernel for comparing the inputs $\mathbf{x}$ and $\mathbf{x}'$ (here, linear). We parametrize the coregionalisation matrix as $B = \kappa I + \mathbf{v}\mathbf{v}^\top$, where $\mathbf{v}$ specifies the correlation between tasks and the vector $\kappa$ controls the extent of task independence. Note that in the case of the LOO setting this model does not provide useful information, since no target rumour data is available to estimate similarity to other rumours.
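To make the ICM construction concrete, below is a minimal NumPy sketch of the coregionalised kernel defined above, using the linear data kernel; the variable names and toy hyperparameter values are our own, and a complete GPC would additionally need the probit likelihood and EP inference (available in libraries such as GPy), which the sketch omits.

```python
import numpy as np

def icm_kernel(X1, d1, X2, d2, B, sigma2=1.0):
    """ICM kernel: k((x, d), (x', d')) = B[d, d'] * k_data(x, x'),
    with a linear data kernel k_data(x, x') = sigma2 * x^T x'.

    X1: (n1, D) inputs, d1: (n1,) integer task ids (one task per rumour)
    X2: (n2, D) inputs, d2: (n2,) integer task ids
    B:  (T, T) coregionalisation matrix over T tasks
    """
    k_data = sigma2 * (X1 @ X2.T)        # (n1, n2) linear kernel values
    return B[np.ix_(d1, d2)] * k_data    # modulated by the task covariance

# Coregionalisation matrix B = kappa * I + v v^T over T tasks (rumours).
T = 3
kappa, v = 0.5, np.ones(T)               # illustrative hyperparameter values
B = kappa * np.eye(T) + np.outer(v, v)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))              # 6 tweets, 4 features
tasks = np.array([0, 0, 1, 1, 2, 2])     # which rumour each tweet belongs to
K = icm_kernel(X, tasks, X, tasks, B)
print(K.shape)                           # (6, 6) Gram matrix
```

Off-diagonal entries of $B$ control how much labelled tweets from reference rumours inform the target rumour, which is exactly the weighting described in the paragraph above.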
Hyperparameter selection. We tune the hyperparameters $\mathbf{v}$, $\kappa$ and $\sigma^2$ by maximizing the evidence of the model, $p(\mathbf{y} \mid X)$, thus having no need for a validation set.

Methods. We consider GPs in three different settings, varying in what data the model is trained on and what kernel it uses. The first setting (denoted GP) considers only target rumour data for training. The second (GPPooled) additionally considers tweets from reference rumours (i.e. rumours other than the target rumour). The third setting is GPICM, where an ICM kernel is used to weight the influence of tweets from reference rumours.

4.1.2. Baselines. To assess and compare the efficiency of Gaussian Processes for rumour stance classification, we also experimented with five more baseline classifiers, all of which were implemented using the scikit-learn Python package [Pedregosa et al. 2011]: (1) a majority classifier, which is a naive classifier that labels all the instances in the test set with the most common class in the training set, (2) logistic regression (MaxEnt), (3) support vector machines (SVM), (4) Naive Bayes (NB) and (5) Random Forest (RF). The selection of these baselines is in line with the classifiers used in recent research on stance classification [Zeng et al. 2016], who found that random forests, followed by logistic regression, performed best.

Features

We conducted a series of preprocessing steps in order to address data sparsity. All words were converted to lowercase; stopwords were removed; all emoticons were replaced by words; and stemming was performed. In addition, multiple occurrences of a character were replaced with a double occurrence [Agarwal et al. 2011], to correct for misspellings and lengthenings, e.g., looool. All punctuation was also removed, except for ., ! and ?, which we hypothesize to be important for expressing emotion. Lastly, usernames were removed as they tend to be rumour-specific, i.e., very few users comment on more than one rumour.

After preprocessing the text data, we either use the resulting bag of words (BOW) feature representation or replace all words with their Brown cluster ids (Brown). Brown clustering is a hard hierarchical clustering method [Liang 2005]. It clusters words by maximizing the probability of the words under the bigram language model, where words are generated based on their clusters. In previous work it has been shown that Brown clusters yield better performance than directly using the BOW features [Lukasik et al. 2015a]. In our experiments, we used 1000 clusters acquired from a large-scale Twitter corpus [Owoputi et al. 2013], from which we can learn Brown clusters aimed at representing a generalisable Twitter vocabulary. Retweets are removed from the training set to prevent bias [Llewellyn et al. 2014]. More details on the Brown clusters that we used, as well as the words that are part of each cluster, are available online.

During the experimentation process, we also tested additional features, including the use of the bag of words instead of the Brown clusters, as well as word embeddings trained from the training sets [Mikolov et al. 2013]. However, results turned out to be substantially poorer than those we obtained with the Brown clusters. We conjecture that this was due to the little data available to train the word embeddings; further exploring the use of word embeddings trained from larger training datasets is left for future work. In order to focus on our main objective of proving the effectiveness of a multi-task learning approach, as well as for clarity, since the number of approaches to show in the figures increases if we also consider the BOW features, we only show results for the classifiers relying on Brown clusters as features.
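A minimal sketch of the preprocessing pipeline described above follows; the stopword list and emoticon map are stand-ins (NLTK is one common source), and the stemming and Brown-cluster lookup steps are noted but not implemented, since the actual 1000-cluster Twitter lexicon is an external resource.

```python
import re

# Stand-in resources; in practice these would come from e.g. NLTK's stopword
# list, an emoticon lexicon, a stemmer and the 1000 Twitter Brown clusters.
STOPWORDS = {"the", "a", "is", "to"}
EMOTICONS = {":)": "smile", ":(": "sad"}

def preprocess(tweet: str) -> list:
    for emo, word in EMOTICONS.items():           # emoticons -> words
        tweet = tweet.replace(emo, f" {word} ")
    tweet = tweet.lower()                         # lowercase
    tweet = re.sub(r"@\w+", "", tweet)            # drop usernames (rumour-specific)
    tweet = re.sub(r"(.)\1{2,}", r"\1\1", tweet)  # looool -> lool (cap repeats at 2)
    tweet = re.sub(r"[^\w\s.!?]", "", tweet)      # keep only . ! ? punctuation
    tokens = [t for t in tweet.split() if t not in STOPWORDS]
    return tokens  # stemming and Brown-cluster lookup would follow here

print(preprocess("@user This is SOOOO true :) !!!"))
```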
Evaluation Measures

Accuracy is often deemed a suitable evaluation measure to assess the performance of a classifier on a multi-class classification task. However, the classes are clearly imbalanced in our case, with varying tendencies towards one of the classes in each of the rumours. We argue that in these scenarios evaluation based solely on accuracy is insufficient, and further measurement is needed to account for category imbalance. This is especially necessary in our case, as a classifier that always predicts the majority class in an imbalanced dataset will achieve high accuracy, even if the classifier is useless in practice. To tackle this, we use both micro-averaged and macro-averaged F1 scores. Note that the micro-averaged F1 score is equivalent to the well-known accuracy measure, while the macro-averaged F1 score complements it by measuring performance assigning the same weight to each category.

Both of the measures rely on precision (Equation 1) and recall (Equation 2) to compute the final F1 score:

$$\text{Precision}_k = \frac{tp_k}{tp_k + fp_k} \quad (1) \qquad \text{Recall}_k = \frac{tp_k}{tp_k + fn_k} \quad (2)$$

where $tp_k$ (true positives) refers to the number of instances correctly classified in class $k$, $fp_k$ is the number of instances incorrectly classified in class $k$, and $fn_k$ is the number of instances that actually belong to class $k$ but were not classified as such. The above equations can be used to compute precision and recall for a specific class. Precision and recall over all the classes in a problem with $c$ classes are computed differently if they are micro-averaged (see Equations 3 and 4) or macro-averaged (see Equations 5 and 6):

$$\text{Precision}_{micro} = \frac{\sum_{k=1}^{c} tp_k}{\sum_{k=1}^{c} (tp_k + fp_k)} \quad (3) \qquad \text{Recall}_{micro} = \frac{\sum_{k=1}^{c} tp_k}{\sum_{k=1}^{c} (tp_k + fn_k)} \quad (4)$$

$$\text{Precision}_{macro} = \frac{1}{c} \sum_{k=1}^{c} \text{Precision}_k \quad (5) \qquad \text{Recall}_{macro} = \frac{1}{c} \sum_{k=1}^{c} \text{Recall}_k \quad (6)$$

After computing micro-averaged and macro-averaged precision and recall, the final F1 score is computed in the same way, i.e., calculating the harmonic mean of the precision and recall in question (see Equation 7):

$$F1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \quad (7)$$

After computing the F1 score for each fold, we compute the micro-averaged score across folds.
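A minimal sketch of these measures in code, using scikit-learn's f1_score (the same package the baselines rely on); the toy label arrays are purely illustrative.

```python
from sklearn.metrics import f1_score

# Toy gold and predicted stance labels: s = supporting, d = denying, q = questioning.
y_true = ["s", "s", "s", "d", "d", "q"]
y_pred = ["s", "s", "d", "d", "s", "q"]

# Micro-averaged F1 pools tp/fp/fn over all classes (equivalent to accuracy in
# single-label classification); macro-averaged F1 weights each class equally,
# so it penalises classifiers that neglect the minority stance classes.
print("micro F1:", f1_score(y_true, y_pred, average="micro"))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
```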
RESULTS

First, we look at the results on each dataset separately. Then we complement the analysis by aggregating the results from both datasets, which leads to a further understanding of the performance of our classifiers on rumour stance classification.

Comparison of Classifiers

We show the results for the LOO and LPO settings in the same figure, distinguished by the training size displayed on the X axis. In all cases, labelled tweets from the remainder of the rumours (rumours other than the test/target rumour) are used for training, and hence the training size shown on the X axis is in addition to those. Note that the training size refers to the number of labelled instances that the classifier is making use of from the target rumour. Thus, a training size of 0 indicates the LOO setting, while training sizes from 10 to 50 pertain to the LPO setting.

Figure 1 and Table IV show how micro-averaged and macro-averaged F1 scores for the England riots dataset change as the number of tweets from the target rumour used for training increases. We observe that, as initially expected, the performance of most of the methods improves as the number of labelled training instances from the target rumour increases. This increase is especially remarkable with the GP-ICM method, which improves gradually after having as few as 10 training instances. GP-ICM's performance keeps improving as the number of training instances approaches 50 (note that 50 tweets represent, on average, less than 7% of the whole rumour, with the rest of the rumour yet to be observed). Two aspects stand out from analysing GP-ICM's performance:

- It performs poorly in terms of micro-averaged F1 when no labelled instances from the target rumour are used. However, it makes very effective use of the labelled training instances, overtaking the rest of the approaches and achieving the best results. This proves the ability of GP-ICM to make the most of the labelled instances from the target rumour, which the rest of the approaches struggle with.
- Irrespective of the number of labelled instances, GP-ICM is robust when evaluated in terms of macro-averaged F1. This means that GP-ICM manages to determine the distribution of classes effectively, assigning labels to instances in the test set in a way that is better distributed than the rest of the classifiers.

Despite the saliency of GP-ICM, we notice that two other baseline approaches, namely MaxEnt and RF, achieve competitive results that are above the rest of the baselines, but still perform worse than GP-ICM.

The results for the PHEME dataset are shown in Figure 2 and Table V. Overall, we can observe that results are lower in this case than they were for the riots dataset. The reason for this can be attributed to two observations: on the one hand, each fold pertaining to a different event in the PHEME dataset means that the classifier encounters a new event at classification time, where it will likely find new vocabulary that is more difficult to classify; on the other hand, the PHEME dataset is more prominently composed of tweets that reply to others, which are likely shorter and less descriptive on their own, and hence harder to extract meaningful features from. Despite the additional difficulty in this dataset, we are interested in exploring whether the same trend holds across classifiers, from which we can generalise the analysis to different types of classifiers.

One striking difference with respect to the results from the riots dataset is that, in this case, the classifiers, including GP-ICM, do not gain as much from the inclusion of labelled instances from the target rumour. This is likely due to the heterogeneity of each of the events in the PHEME dataset, where a diverse set of rumourous newsworthy pieces of information is discussed pertaining to the selected events as they unfold. By contrast, each rumour in the riots dataset is more homogeneous, as each rumour focuses on a specific story. Interestingly, when we compare the performance of different classifiers, we observe that GP-ICM again outperforms the rest of the approaches, both in terms of micro-averaged and macro-averaged F1 scores. While the micro-averaged F1 score does not increase as the number of training instances increases, we can see a slight improvement in terms of macro-averaged F1. This improvement suggests that GP-ICM does still take advantage of the labelled training instances to boost performance, in this case by better distributing the predicted labels.

Again, as we observed in the case of the riots dataset, two baselines stand out: MaxEnt and RF. They are very close to the performance of GP-ICM on the PHEME dataset, even outperforming it on a few occasions. In the following subsection we take a closer look at the differences among the three classifiers.

Analysing the Performance of the Best-Performing Classifiers

We delve into the results of the best-performing classifiers, namely GP-ICM, MaxEnt and RF, looking at their per-class performance.
This will help us understand when they perform well and where GP-ICM stands out to achieve the best results. Tables VI and VII show per-class F1 measures for the aforementioned three best-performing classifiers for the England riots dataset and the PHEME dataset, respectively. They also show statistics of the misclassifications that the classifiers made, in the form of the percentage of deviations towards the other classes.

Looking at the per-class performance analysis, we observe that the performance of GP-ICM varies when we look into Precision and Recall. Still, in all the dataset-class pairs, GP-ICM performs best in terms of either Precision or Recall, though never in both. Moreover, it is generally the best in terms of F1, achieving the best balance between Precision and Recall. The only exception is MaxEnt classifying questioning tweets more accurately in terms of F1 for the England riots. When we look at the deviations, we see that all the classifiers suffer from the datasets being imbalanced towards supporting tweets. This results in all classifiers classifying numerous instances as supporting when they are actually denying or questioning. This is a known problem in rumour diffusion, as previous studies have found that people rarely deny or question rumours and generally tend to support them irrespective of their actual veracity [Zubiaga et al. 2016]. While we have found that GP-ICM can tackle the imbalance issue quite effectively and better than the other classifiers, this caveat points to the need for further research on dealing with the striking majority of supporting tweets in the context of rumours in social media.

DISCUSSION

Experimentation with two different approaches based on Gaussian Processes (GP and GP-ICM), and comparison against a set of competitive baselines over two rumour datasets, enables us to gain generalisable insight into rumour stance classification on Twitter. This is reinforced by the fact that the two datasets are very different from each other. The first dataset, collected during the England riots in 2011, is a single event that we have split into folds, each fold belonging to a separate rumour within the event; hence, all the rumours are part of the same event. The second dataset, collected within the PHEME project, includes tweets for a set of five newsworthy events, where each event has been assigned a separate fold; therefore, the classifier needs to learn from four events and test on a new, unknown event, which has proven more challenging.

Results are generally consistent across datasets, which enables us to generalise conclusions well. We observe that while GP by itself does not suffice to achieve competitive results, GP-ICM does help boost the performance of the classifier substantially, even outperforming the rest of the baselines in the majority of the cases. GP-ICM has proven to perform consistently well on both datasets, despite their very different characteristics, being competitive not only in terms of micro-averaged F1, but also in terms of macro-averaged F1. GP-ICM manages to balance the varying class distributions effectively, performing above the rest of the baselines in accurately determining the distribution of classes. This is very important in the task of rumour stance classification: even if a 100% accurate classifier is unlikely, a classifier that accurately estimates the overall distribution of classes can be of great help.
If a classifier makes a good estimation of the number of denials in an aggregated set of tweets, it can be used to flag potentially false rumours with a high level of confidence. Another factor that stands out from GP-ICM is its capacity to perform well when a few labelled instances of the target rumour are leveraged in the training phase. GP-ICM effectively exploits the knowledge garnered from the few instances from the target rumour, outperforming the rest of the baselines even though its performance was modest when no labelled instances were used from the target rumour. In light of these results, we deem GP-ICM the most competitive approach to use when one can afford to get a few instances labelled from the target rumour. The labels from the target rumour can be obtained in practice in different ways: (1) having someone in-house (e.g. journalists monitoring breaking news stories) label a few instances prior to running the classifier, (2) making use of human computation resources such as crowdsourcing platforms to outsource the labelling work, or (3) developing techniques that attempt to classify the first few instances, incorporating into the training set those for which a classification with a high level of confidence has been produced. The latter presents an ambitious avenue for future work that could help alleviate the labelling task. On the other hand, in the absence of labelled data from the target rumour, which is the case in the LOO setting, the effectiveness of the GP-ICM classifier is not as prominent. For this scenario, other classifiers such as MaxEnt and Random Forests have proven more competitive and could be seen as better options. However, we do believe that the remarkable difference that reliance on the LPO setting produces is worth exploiting where possible.

CONCLUSIONS

Social media is becoming an increasingly important tool for maintaining social resilience: individuals use it to express opinions and follow events as they unfold; news media organisations use it as a source to inform their coverage of these events; and government agencies, such as the emergency services, use it to gather intelligence to help in decision-making and in advising the public about how they should respond [Procter et al. 2013a]. While previous research has suggested that mechanisms for exposing false rumours are implicit in the ways in which people use social media [Procter et al. 2013b], it is nevertheless critically important to explore whether there are ways in which computational tools can help to accelerate these mechanisms so that misinformation and disinformation can be targeted more rapidly, and the benefits of social media to society maintained [Derczynski et al. 2015]. As a first step towards achieving this aim, this paper has investigated the problem of classifying the different types of stance expressed by individuals in tweets about rumours. First, we considered a setting where no training data from the target rumours is available (LOO). Without access to annotated examples of the target rumour, the learning problem becomes very difficult. We showed that in the supervised domain adaptation setting (LPO), annotating even a small number of tweets helps to achieve better results. Moreover, we demonstrated the benefits of a multi-task learning approach, as well as that Brown cluster features are more useful for the task than a simple bag of words.
Findings from previous work, such as those of Castillo et al. and Procter et al., have suggested that the aggregate stance of individual users is correlated with actual rumour veracity. Hence, the next step in our own work will be to make use of the classifier for the stance expressed in the reactions of individual Twitter users in order to predict the actual veracity of the rumour in question. Another interesting direction for future work would be the addition of non-textual features to the classifier. For example, rumour diffusion patterns [Lukasik et al. 2015b] may be a useful cue for stance classification.
Unusual case of pelvic hydatid cyst of broad ligament mimicking an ovarian tumour

Introduction: The diagnosis of hydatid cyst in the female genital tract is rare and difficult. A high degree of clinical suspicion is needed for pre-operative investigations to exclude hydatid cyst of the female pelvis. The objective of this presentation is to highlight a pelvic hydatid cyst that presented as an ovarian tumour.

Case presentation: A 22-year-old female presented with constipation and haematuria with acute urinary retention. On examination, a mass measuring 15 × 13 cm was palpable in the left iliac region, reaching up to the umbilicus. It was smooth, movable and non-tender, and a provisional diagnosis of ovarian teratoma was made pre-operatively. At laparotomy, a cystic mass was found attached to the broad ligament; it was excised, and a frozen section was sent for histopathology. Gross features were consistent with hydatid cyst: the cystic wall was white and there were multiple small thin-walled daughter cysts. Microscopic diagnosis with paraffin sections showed cystic lesions with a laminated wall and scolices in the daughter cysts. The indirect haemagglutination test for specific antibodies was positive (128 IU). The patient responded well to surgical excision followed by albendazole administration.

Conclusion: This case highlights the fact that pelvic hydatid disease may resemble a neoplastic ovarian cyst, clinically and radiologically. The possibility of pelvic hydatid disease should be included in the differential diagnosis of cystic ovarian lesions in endemic areas, so that the patient can be managed accordingly.

Abbreviations: AE, alveolar echinococcosis; CE, cystic echinococcosis; CT, computerized tomography; FNAC, fine-needle aspiration cytology; IHA, indirect haemagglutination assay; MRI, magnetic resonance imaging; USS, ultrasound scan; WHO-IWGE, World Health Organization Informal Working Group on Echinococcosis.

Introduction

Hydatid disease is a parasitic infection caused by the larval stage of the cestodes (tapeworms) Echinococcus granulosus and Echinococcus multilocularis. The disease is endemic in sheep- and cattle-grazing regions such as India, Australia, the Middle East, Africa and South America (Pawlowski et al., 2001; Craig, 2003). It is transmitted by the ingestion of eggs and most commonly affects the liver and lungs. The pelvic organs in females are rarely the primary site of cyst formation (Mandell et al., 2014). Bickers (1970), after reviewing 532 cases of hydatid disease from an endemic area over a 20-year period, recorded 12 instances where hydatid cysts were present in the pelvis, only 2 of which were in the broad ligament. The aim of this report is to highlight a rare presentation of primary pelvic hydatid disease located in the broad ligament.

Case report

A 22-year-old Ethiopian lady was admitted to Mubarak Al-Kabeer Hospital complaining of urinary retention and haematuria. There was no associated fever or loss of appetite. The patient also had constipation with dull abdominal pain for the past 2 weeks. Her menstrual cycle was regular and her last menstrual period was 16 days earlier. The patient is single and has been working in Kuwait as a housemaid for 2 years. There was no history of recent travel. There was no history of tuberculosis, nor were there dogs or any pets where the patient lived. The patient denied having been on a farm. On general physical examination, the patient was thinly built, well nourished and pale. Temperature was 36.8 °C, blood pressure was 125/65 mm Hg and pulse was 72 beats min⁻¹.
The patient's respiratory, cardiovascular and neurological systems were normal. The abdomen was soft and lax with mild diffuse tenderness but no rigidity. There was no hepatosplenomegaly and no ascites. A mass measuring 15 × 13 cm was palpable in the left iliac region, reaching up to the umbilicus, that had a smooth surface and was movable and non-tender. There was no lymphadenopathy.

Investigations

Routine haematological parameters revealed the following: white blood cells 33.2 × 10⁹ l⁻¹, eosinophil count 0 × 10⁹ l⁻¹, neutrophil count 28.9 × 10⁹ l⁻¹, Hb 99 g l⁻¹ and platelets 394 × 10⁹ l⁻¹. Beta human chorionic gonadotrophin was negative. Biochemical parameters, including the liver and kidney function tests, were normal. Ultrasonography of the abdomen was ordered and revealed a large pelvic and lower abdominal multi-septated cystic mass measuring 17 × 12 cm that was thought to arise most likely from the left ovary. There was also bilateral hydronephrosis and hydroureter, most likely due to pressure on the distal ureters from the mass lesion. The other abdominal organs were normal and there was no free fluid. In order to assess the mass properly, magnetic resonance imaging (MRI) of the pelvis was done. It revealed a large multiloculated cystic mass, 11 × 13.3 × 14 cm, occupying the whole pelvis and extending into the left lower abdominal quadrant, which grossly compressed and deviated the uterus anteriorly and to the right of the pelvis. The mass had compressed the mid-portion of the rectum, causing prominence of its proximal part, in keeping with the patient's history of constipation. The ovaries could not be seen properly, but there was no evidence of lymphadenopathy, free fluid or localized collection in the pelvis. The radiological report suggested a cystic mass that might be a neoplastic ovarian cyst, e.g. cystic teratoma. Thereafter, the patient was promptly transferred to the maternity hospital for further gynaecological consultation and laparoscopic exploration.

The patient underwent exploratory laparotomy through an infra-umbilical midline incision down to the symphysis pubis. A cystic mass was found occupying the left broad ligament and displacing the uterus to the right, with the left fallopian tube stretched over the mass. Both ovaries were normal. Only partial removal of the cyst was possible, as a small part of the cyst wall was adherent to the rectum and uterus. It was not possible to dissect the cyst wall completely, so deroofing and marsupialisation were performed on the remaining cyst wall to avoid injury to the rectum and uterus. No other intra-abdominal pathology was found. The liver was explored and no lesions were found. Peritoneal toilet and irrigation of the cavity with hypertonic saline were performed many times, and a suction drain was left in the pouch of Douglas. The mass was a cyst filled with clear fluid and multiple daughter cysts. It was sent for histopathology and serology, and fluid aspirated from the left broad ligament cyst was sent for culture.

Diagnosis

The histopathology report of the frozen section was as follows: macroscopic examination (Fig. 1a) revealed a left broad ligament cystectomy specimen. The gross features were consistent with hydatid cyst. The cystic wall was white and measured 12 × 10 × 0.5 cm. There were multiple small thin-walled daughter cysts. The microscopic diagnosis with paraffin sections is shown in Fig. 1(b, c). There were cystic lesions with a laminated wall, and scolices were noted in the daughter cyst, as demonstrated in Fig. 1c.
A serology test was performed on the patient's serum using the indirect haemagglutination assay (IHA) for the quantitative detection of E. granulosus antibodies as an adjunct to the diagnosis of hydatid cysts. It was positive, with a reading of 128 IU (diagnostic range ≥128 IU). Bacteriological culture of the fluid revealed no growth after 48 h. Blood sent for malarial parasites was also negative.

Treatment

Surgical removal of the cyst was carried out during the exploratory laparotomy. After the diagnosis of pelvic hydatid disease was confirmed, the patient was managed with albendazole 400 mg twice a day orally for 28 days, followed by a period of rest and then a repeat cycle.

Outcome and follow-up

The infectious disease hospital was informed of the case. The post-operative period was uneventful. The patient was stable and well. The drain was removed after 24 h. The liver function and renal function tests were monitored twice weekly, together with the complete blood count and coagulation profile. The patient was discharged on the 7th post-operative day to be followed up in the outpatient clinic.

Discussion

Human echinococcosis (hydatidosis or hydatid disease) is caused by E. granulosus, the agent of cystic echinococcosis (CE), and E. multilocularis, the causative agent of alveolar echinococcosis (AE). CE and AE are the two forms most frequently encountered. Our patient presented with CE. Hydatid disease is usually acquired in childhood (Mandell et al., 2014). Symptoms present several years after exposure, and it may take 5-20 years before a diagnosis is made, which was probably the case in this patient. Humans are accidental hosts in the life cycle of E. granulosus (Mandell et al., 2014). She most likely acquired the infection by ingesting the ova, either by consuming contaminated unwashed vegetables or as a result of close association with pet dogs, although she could not recall any pet in her house when she was young. However, this should not be surprising, as the ova are partially resistant to desiccation and remain viable for many weeks, allowing delayed transmission to individuals with no direct contact with vector animals. Once in the intestinal tract, the ova hatch to form oncospheres, which then encyst in host viscera, developing over time to form mature larval cysts (Mandell et al., 2014).

Infection with E. granulosus is estimated to occur in up to 2-6 % of endemic populations (Fuller & Fuller, 1981). A hyperendemic focus of hydatid disease was found in southwestern Ethiopia. Two tribes, the Dassanetch and Nyangatom, in the lower Omo River Valley, were found to have a particularly high prevalence of the disease (Fuller & Fuller, 1981). The factors felt to contribute to this high incidence were the use of nurse dogs to clean up children; the close, familiar relationships between dogs and humans; and a clustered village settlement pattern with its increased number of sheep-dog-man contacts (Fuller & Fuller, 1981). Since our patient is Ethiopian, she was probably at high risk of being infected during her childhood in the endemic area and might not recall dogs from her childhood years.

The hydatid cyst tends to form in the liver in 50-70 % of cases, or in the lungs in 20-30 %, but may be found in any organ (Mandell et al., 2014). A primary hydatid cyst in the pelvis, as in this patient, is rare and usually presents with pressure symptoms affecting the adjacent abdominal organs. This was the case in our patient, with pressure on the rectum leading to constipation and on the bladder with consequent urinary retention.
In most cases, symptoms are absent, and infections are detected only incidentally by imaging studies. When symptoms do occur, they are usually due to the space-occupying effect of the enlarging cyst. In our patient, the cyst obstructed the ureters, causing bilateral hydronephrosis. Of the 51 cases of hydatid disease reported in Kuwait between 1956 and 1960, only one was located in the pelvis (El Gazzar & McCreadie, 1962). In that study, the majority of patients were immigrants from other countries such as Iraq, Iran, Saudi Arabia and Jordan; only 5 (2.6 %) patients were Kuwaitis (El Gazzar & McCreadie, 1962).

The mode of transmission of infection to the pelvic area is not clear. The genital organs are considered to be the most affected areas in the female pelvis. This can be attributed to the fact that the genital organs are relatively highly vascularized. Other routes could involve invasion from the connective tissue of the peritoneum of the pouch of Douglas and the suspensory ligaments (Terek et al., 2000). Dissemination via lymphatics has also been implicated as a possible route in primary pelvic hydatid disease (Luliano et al., 2000).

It is very important that a correct pre-operative diagnosis is made, since all precautions must be taken to prevent dissemination and seeding of the surgical field. Unfortunately, the presentation in this case was atypical and, as such, the recommendation of an image-based, stage-specific approach for CE made by Brunetti et al. for the World Health Organization Informal Working Group on Echinococcosis (WHO-IWGE) (Brunetti et al., 2010) was not followed. Deaths have been reported due to anaphylactic shock resulting from spillage during excision or biopsy after a mistaken diagnosis of a retroperitoneal tumour. Infection that is suspected on the basis of imaging studies may be confirmed by a specific enzyme-linked immunosorbent assay and western blot serology (Terek et al., 2000). Serology is 80-100 % sensitive and 88-96 % specific for liver cysts, but less sensitive for lung (50-56 %) or other organ (25-56 %) involvement. In this case, the IHA was positive. Eosinophilia is not a consistent or reliable finding. Imaging with ultrasound scan (USS), and to a greater degree with computerized tomography (CT) and MRI, remains more sensitive than serodiagnostic techniques (Mandell et al., 2014). USS and CT scans may demonstrate features such as a multi-locular appearance, a fluid level from hydatid sand and the ultrasonic 'water lily sign'. In this patient, abdominal USS and MRI gave the impression that the mass was a neoplastic ovarian cyst. Fine-needle aspiration cytology (FNAC) may help in establishing the diagnosis of a uni-locular pelvic cystic mass, but care must be taken to avoid anaphylactic reactions. FNAC may also show hooklets, scolices and a laminated cyst wall. However, FNAC was not done in this case because neoplastic ovarian cyst was the provisional diagnosis.

According to the WHO-IWGE report (Brunetti et al., 2010), there is no 'best' treatment for CE, as no clinical trial has compared all the different modalities. The optimal treatment of symptomatic CE is total surgical resection. However, the consensus of the WHO-IWGE experts is that adequate therapy should be based on image-based staging.
Traditionally, because of the risk of spreading infection due to cyst rupture, the recommended approach has been to visualize the cyst, remove a fraction of the fluid and instill a cysticidal agent such as hypertonic saline, cetrimide or 70-95 % ethanol to kill the germinal layer and daughter cysts before resection (El Gazzar & McCreadie, 1962); the cyst is then totally removed after 30 min of instillation. In our case, the cyst could not be removed completely, as a small part of the cyst wall was adherent to the rectum and uterus. The cyst wall was marsupialized, followed by peritoneal toilet and irrigation of the abdominal cavity with hypertonic saline, and a drain was left in the pouch of Douglas. Laparoscopic surgery for cyst removal has been performed in the past in less advanced cases, in which spillage of contents is less likely to occur (Luliano et al., 2000).

Pre-operative treatment with albendazole for 1-3 months has been shown to significantly reduce the number of viable cysts found during surgery. Medical therapy for inoperable cysts with albendazole or mebendazole has provided improvement in most patients (55-79 %) but total cure in a smaller number (29 %). The preferred agent is albendazole because of its greater absorption from the gastrointestinal tract and higher plasma levels. It is given for 3 or more cycles at a dose of 400 mg twice a day for 4 weeks. However, for those weighing less than 60 kg, 15 mg kg⁻¹ day⁻¹ in 2 divided doses should be given, followed by 2 weeks of rest without therapy. The alternative agent, mebendazole, is poorly absorbed and must be taken at higher doses of 50-70 mg kg⁻¹ day⁻¹ for several months to achieve a therapeutic effect.

In conclusion, this case highlights the fact that pelvic hydatid disease may resemble a neoplastic ovarian cyst, clinically and radiologically. The possibility of pelvic hydatid disease should be included in the differential diagnosis of cystic ovarian lesions in endemic areas, so that the patient can be managed accordingly.
The contribution of raised intraneuronal chloride to epileptic network activity

Altered inhibitory function is an important facet of epileptic pathology. A key concept is that GABAergic activity can become excitatory if intraneuronal chloride rises. However, it has proved difficult to separate the role of raised chloride from other contributory factors in complex network phenomena, such as epileptic pathology. Therefore, we asked what patterns of activity are associated with chloride dysregulation by making novel use of Halorhodopsin to load clusters of mouse pyramidal cells artificially with Cl⁻. Brief (1-10 s) activation of Halorhodopsin caused substantial positive shifts in the GABAergic reversal potential that were proportional to the charge transfer during the illumination and that, in adult neocortical pyramidal neurons, decayed with a time constant of τ = 8.0 ± 2.8 s. At the network level, these positive shifts in E_GABA produced a transient rise in network excitability, with many distinctive features of epileptic foci, including high-frequency oscillations with evidence of out-of-phase firing (Ibarz et al., 2010). We show how such firing patterns can arise from quite small shifts in the mean intracellular Cl⁻ level within heterogeneous neuronal populations. Notably, however, chloride loading by itself did not trigger full ictal events, even with additional electrical stimulation to the underlying white matter. In contrast, when performed in combination with low, subepileptic levels of 4-aminopyridine, Halorhodopsin activation rapidly induced full ictal activity. These results suggest that chloride loading has at most an adjunctive role in ictogenesis. Our simulations also show how chloride loading can affect the jitter of action potential timing associated with imminent recruitment to an ictal event (Netoff and Schiff, 2002).

Introduction

Inhibitory dysfunction has long been considered to be a major factor in triggering epileptic seizures (Miles and Wong, 1983; Sloviter, 1987; Traub and Miles, 1991; Cossart et al., 2005; Pinto et al., 2005; Huberfeld et al., 2007; Zsiros and Maccaferri, 2008; Kaila et al., 2014; Pallud et al., 2014). In recent years, particular attention has focused on how positive shifts in the GABAergic reversal potential, caused by raised intracellular Cl⁻ levels in neurons, may contribute to this process. Recordings from human brain slices, resected to treat epilepsy, show evidence of excitatory GABAergic activity (Cohen et al., 2002; Pallud et al., 2014) associated with reduced expression of the potassium chloride cotransporter KCC2 (Huberfeld et al., 2007). Even without deficits in KCC2 expression, Cl⁻ levels may rise acutely during intense GABAergic activation (Alger and Nicoll, 1979; Fujiwara-Tsukamoto et al., 2003; Isomura et al., 2003; Kaila, 1994; Kaila et al., 1997; Staley et al., 1995; Thompson and Gähwiler, 1989a,b,c) or be influenced by the distribution of other permeant and impermeant anions.
Collectively, these findings suggest an important role for chloride dysregulation in epileptogenesis, but it has not actually been shown whether raising intracellular Cl⁻ generally within the neuronal population is sufficient by itself to trigger epileptiform events, or what patterns of activity are associated with such Cl⁻ loading. The largest group of cortical neurons is the pyramidal population, representing ~80% of all cortical neurons, and the timing of their firing is influenced strongly by the output of basket cells (Cobb et al., 1995). In epileptic cortical networks, there are particularly intense bursts of interneuronal activity that appear to provide a restraint on the propagation of epileptiform discharges (Prince and Wilder, 1967; Trevelyan et al., 2006; Schevon et al., 2012). We reasoned that chloride loading in the pyramidal population would strongly influence the pattern of any breakthrough firing during the high-frequency inhibitory volleys. In particular, we asked whether it might explain certain distinctive high-frequency features (Foffani et al., 2007; Ibarz et al., 2010) and firing patterns (Netoff and Schiff, 2002) associated with epileptic foci (Bragin et al., 2002a,b; Staba et al., 2002; Jiruska et al., 2010) and the onset of ictal activity. To test this, we made novel use of the enhanced optogenetic chloride pump derived from Natronomonas, Halorhodopsin (eNpHR; Gradinaru et al., 2008), to load pyramidal cells artificially with chloride (Zhang et al., 2007; Raimondo et al., 2012) and so examine the specific contribution of chloride dysregulation to epileptiform activity.

Materials and Methods

Dissociated neuronal culture recordings. All procedures were performed according to the requirements of the United Kingdom Animals (Scientific Procedures) Act 1986. Assessment of the chloride-loading effect of eNpHR activation was performed using dissociated neuronal cultures. Primary neuronal cultures were prepared from rat pups at embryonic days 18-20 in the following way. A pregnant rat was killed by cervical dislocation, and a sagittal incision was made in the abdominal area to remove the pups. The neocortical and hippocampal tissue was isolated from the pups and digested using papain enzyme (Sigma-Aldrich) for 40 min. Cells were then dissociated within the growing medium (Neurobasal A, 2% B-27 supplement, 1% FBS, 0.5% glutamate, and 0.5% antibiotic-antimycotic; Invitrogen) by pipetting up and down. Cells were plated (10⁵ cells/ml) on coverslips coated with poly-D-lysine and laminin (Sigma-Aldrich). Cells were allowed to attach to the coverslips by incubating at 37°C for 2.5 h. Cells were transfected using virus [human synapsin-eNpHR3.0-enhanced yellow fluorescent protein (EYFP)] at 7 d in vitro, and experiments were conducted at 14-28 d in vitro.

Optogenetic expression. All animal handling and experimentation were done according to United Kingdom Home Office guidelines. Optogenetic proteins were expressed in pyramidal cells by injection of viral vectors or by breeding. To achieve pyramidal expression of eNpHR, we injected adeno-associated virus (AAV)-CaMKII-cre-GFP (University of North Carolina Viral Vector Core) into homozygous floxed-eNpHR animals [Ai39; B6;129S-Gt(ROSA)26Sor tm39(CAG-HOP/EYFP)Hze/J; stock #014539; The Jackson Laboratory]. For pyramidal expression of Archaerhodopsin (Arch), we injected AAV-CaMKIIa-eArchT3.0-EYFP (University of North Carolina Viral Vector Core) into wild-type C57BL/6J mice. Injections were made into either postnatal day 0-1 pups or young adult mice.
The pups had EMLA cream (2.5% lidocaine and 2.5% prilocaine) applied to the top left of the head and were subsequently anesthetized using isoflurane inhalation. A single injection of virus was made using a 10 μl Hamilton syringe with a beveled 36 gauge needle (World Precision Instruments), ~1 mm anterior to lambda and 1 mm lateral to the midline into the left hemisphere, at 1.7-0.8 mm deep to the skin (four separate 50 nl injections, deepest first). Approximately 0.2 μl (~10¹¹-10¹² viral particles) was injected over a 10 min period. This reliably labeled pyramidal cells throughout the hemisphere, with minimal or no cortical scarring apparent from the injection tract at the time of recordings, at age 5 weeks to 3 months. This became the preferred method for labeling, but for the early experiments, we also performed injections in adult animals. For these adult injections, done at age 5-8 weeks, animals were anesthetized by ketamine-methoxamine intraperitoneal injection and placed in a stereotaxic head holder (David Kopf Instruments). Injections were made at three to four locations in an anterior-posterior row in one hemisphere, 1.5-2 mm lateral to the midline and 1-0.4 mm deep to the pia (0.6 μl, injected over 15 min). This gave far more restricted labeling, extending 0.2-0.5 mm in coronal slices, and the illumination and recordings were then targeted to the center of this area. We also used brain slices prepared from first-generation cross-breeding of floxed-eNpHR animals with CaMKIIa-cre mice [B6.Cg-Tg(CaMK2a-cre)T29-1Stl/J; stock #5359; The Jackson Laboratory]. The method of introducing eNpHR into pyramidal cells did not affect the pattern of activity induced by eNpHR priming.

Fluorescence immunocytochemistry. Adult male HaloGFPf/f;CaMK-cre mice were terminally anesthetized by brief inhalation of isoflurane (0.05% in air), followed by an intramuscular injection of ketamine (≥100 mg/kg) and xylazine (≥10 mg/kg). Once anesthetized, mice were perfused through the heart with 4% (w/v) paraformaldehyde (PFA) in PBS (0.1 M), pH 7.2. After perfusion, brains were removed and postfixed in 4% PFA in PBS overnight at 4°C. The brain was cut coronally (50-μm-thick sections) using a Leica VT1000S vibratome. Sections were collected in PBS and then incubated in 50 mM ammonium chloride in PBS for 20 min at room temperature. After washing in PBS, sections were incubated in 0.1% (w/v) gelatin and 0.1% (v/v) Triton X-100 in PBS. Sections were double labeled with antibodies to eNpHR (rabbit anti-Halorhodopsin at 1:200; catalog #AS12 1851; Agrisera) and interneuronal markers (sheep anti-neuropeptide Y at 1:1000, catalog #AB1583, Millipore; goat anti-parvalbumin at 1:1000, catalog #PVG214, Swant; goat anti-somatostatin at 1:250, catalog #sc7819, Santa Cruz Biotechnology). The antibodies were diluted in 0.1% (w/v) gelatin and 0.1% (v/v) Triton X-100 in PBS and incubated at 4°C for 48-72 h. After extensive washing with PBS, sections were incubated with fluorescein isothiocyanate donkey anti-rabbit and Cy3 donkey anti-sheep and anti-goat secondary antibodies (Jackson ImmunoResearch) at a dilution of 1:500 in PBS for 2 h at room temperature. Finally, sections were washed extensively in PBS and mounted in VECTASHIELD HardSet mounting medium (Vector Laboratories). Control experiments, in which the primary antibodies were omitted, resulted in no immunoreactivity. Double-labeling images were obtained using a Leica TCS SP2 UV confocal microscope. For analysis, optical slices were reconstructed using NIH ImageJ.
The photomicrographs used in the figures were produced by first generating an extended depth-of-field projection of the z-stack using NIH ImageJ. The brightness and contrast of each image were then optimized, and multipanel figures were composed and labeled using Adobe Photoshop and Adobe Illustrator (Adobe Systems).

Brain slice experiments. Coronal brain slices (250-μm-thick slices for perforated patch recordings and analysis of cellular excitability; 400 μm for field recordings) were prepared from the injected animals (3-10 months) in ice-cold oxygenated (95% O₂/5% CO₂) artificial CSF (ACSF; in mM: 125 NaCl, 26 NaHCO₃, 10 glucose, 3.5 KCl, 1.26 NaH₂PO₄, 3 MgCl₂, 1 Na-kynurenate, and 0.3 Na-ascorbate). For the perforated patch recordings, slices were simply transferred to a submerged incubation chamber for at least 1 h before recording. For the field recordings, after cutting, the slices were washed two times for 10 min with oxygenated ACSF (in mM: 125 NaCl, 26 NaHCO₃, 10 glucose, 3.5 KCl, 1.26 NaH₂PO₄, 1.2 CaCl₂, and 1 MgCl₂) and transferred to an incubation interface chamber (room temperature) perfused with the same ACSF for at least 1 h, before being transferred to a recording interface chamber (33-36°C) perfused with this same ACSF. This concentration of divalent cations was used following the study by Sanchez-Vives and McCormick (1999), because it allowed the reliable triggering of sustained bursts of activity, with a prominent high-frequency discharge of fast-spiking interneurons, by white matter stimulation (putative thalamic inputs).

Perforated patch-clamp recordings. Recordings were made using a laser spinning-disc confocal microscope (Visitech) fitted with Patchstar micromanipulators (Scientifica) mounted on a Scientifica movable top plate. Electrophysiological data were collected using Multiclamp 700B (Molecular Devices) and Digidata acquisition boards connected to desktop computers (Dell Computer Company) running pClamp software (Molecular Devices). During the entire recording, cells were bathed in circulating oxygenated ACSF (perfusion at 1-3 ml/min), heated to 33-37°C by a sleeve heater element (Warner Instruments) around the inflow tube. Gramicidin perforated patch recordings were made using 3-7 MΩ pipettes (borosilicate glass; Harvard Apparatus), pulled on a P87 micropipette puller (Sutter Instruments). Electrodes were filled with a high-chloride internal solution [in mM: 135 KCl, 4 Na₂ATP, 0.3 Na₃GTP, 2 MgCl₂, and 10 HEPES (290 mOsm), pH 7.35], so that the integrity of the cell membrane could be monitored. Fresh gramicidin stock was prepared daily at 5 mg/ml in DMSO (catalog #G5002; Sigma-Aldrich). Gramicidin stock was added to the electrode filling solution to achieve a final concentration of 100 μg/ml, mixed thoroughly (40 s vortexing and sonication), then filtered (Millex syringe filter, 0.45 μm pore size) and used for patching immediately. Recordings were made once the series resistance had stabilized below 200 MΩ (~30 min after patching). Recordings were made in voltage-clamp mode, with a baseline holding potential of −70 mV. GABA_A currents were induced by application of 100 μM muscimol delivered close to the recorded neuron through a patch pipette coupled to a Picospritzer II (Parker Instrumentation) delivering 10 ms pressure pulses (10-20 psi); timing was coordinated with the illumination (optogenetic activation) using pClamp software coupled to a Digitizer box.
Test voltage ramps (200 ms duration; a saw-tooth, up-down function; peak, −50 mV; trough, −90 mV; slope, ±400 mV/s) were applied at baseline and also close to the peak of the GABAergic current (I_GABA). I_GABA was derived from the difference between the ramp currents, allowing a GABAergic current-voltage (I-V) curve to be plotted. I-V plots were derived with and without previous eNpHR activation, and the effect of eNpHR activation on E_GABA was estimated from the shift in the x-intercept of the I-V plot. The time course of the chloride loading was measured by applying a 2.8 s eNpHR activation, followed by muscimol puffs at increasing delays. The change in E_GABA was calculated for each time delay and normalized to the value for the shortest delay, and a single-exponential curve was fitted to the normalized data to give the time constant for chloride clearance.
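As a rough illustration of this analysis, the sketch below estimates E_GABA from the x-intercept of a linear fit to the ramp-derived I_GABA-V relation, and then fits a single exponential to the recovery of the E_GABA shift. The array names and values are hypothetical, not taken from the recordings:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical ramp data: command voltages (mV) and GABAergic difference currents (pA)
v = np.linspace(-90, -50, 50)
i_gaba = 2.0 * (v + 62.0) + np.random.normal(0, 1.0, v.size)

# E_GABA is the x-intercept of a straight-line fit to the I-V relation
slope, intercept = np.polyfit(v, i_gaba, 1)
e_gaba = -intercept / slope
print("E_GABA ~ %.1f mV" % e_gaba)

# Recovery of the E_GABA shift after illumination: single-exponential fit
delays = np.array([0.5, 2.0, 4.0, 8.0, 16.0, 32.0])  # s after eNpHR offset
shifts = 6.0 * np.exp(-delays / 8.0)                  # mV, toy data
model = lambda t, a, tau: a * np.exp(-t / tau)
(a, tau), _ = curve_fit(model, delays, shifts, p0=[5.0, 5.0])
print("chloride clearance time constant ~ %.1f s" % tau)
```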
Recovery of cellular properties after optogenetic activation. Whole-cell patch-clamp recordings were made by direct visualization of neocortical pyramidal neurons (differential interference contrast microscopy). Cells were recorded in current-clamp mode (Multiclamp 700B and 1440 Digidata analog-to-digital board), and brief hyperpolarizing and depolarizing current pulses were applied before, during, and immediately after prolonged eNpHR or Arch activation to allow assessment of the input resistance and excitability parameters. The action potential (AP) threshold was estimated by fitting a straight line to the steepest part of the dV/dt (first derivative of the voltage with respect to time) versus voltage phase plot and then extrapolating back to 0 dV/dt (Fig. 1C).

Field potential recordings. Early experiments were performed with sharp glass microelectrodes, but to facilitate spike sorting we then turned to tetrodes with four electrode heads in a diagonal arrangement separated by 50-100 μm (NeuroNexus). Single-electrode extracellular recordings were made through broken-tip borosilicate patch pipettes (Harvard Apparatus), using either Multiclamp 700B (Molecular Devices) or Axoclamp 1D (Molecular Devices) amplifiers, a 1401-3 analog-to-digital converter (Cambridge Electronic Design), and Spike2 software (Cambridge Electronic Design), with a sampling rate of 5-10 kHz. Multichannel extracellular recordings were collected at 10 kHz, using a single four-channel-probe configuration (Q1x1-tet "tetrode"; NeuroNexus) connected to an ME16-FAI-PA system and MC_Rack software (Multichannel Systems). Bipolar electrical stimulation (0.2 ms duration) was delivered via tungsten electrodes onto the white matter. eNpHR and Arch were activated by a 561 nm, 50 mW solid-state laser (Cobolt) connected to a fiber optic with a fine cannula (400 μm core, 0.20 numerical aperture; Thorlabs). After optimizing the optic fiber coupling, the total light power at the cannula tip was measured at 15-30 mW. To achieve an approximately equivalent inhibitory effect with the two different optogenetic probes, we scaled down the illumination intensity until both gave an approximately equivalent suppression of the network response to white matter electrical stimulation. This was measured at the same location as the subsequent post-illumination field effects. Typically, matching required slightly less light for the Arch than for the eNpHR experiments, consistent with our observations from single-cell recordings that Arch-transfected neurons often showed larger light-activated currents.

The illumination used in the recordings was constant in any given slice but ranged from 3 to 20 mW at the cannula tip to give equivalent suppression of the network event, as assessed by the line length of the unfiltered signal in the 0.5 s after the electrical artifact (Guo et al., 2010; eNpHR: mean reduction in line length of the evoked event, 82.9 ± 2.8%; Arch: mean reduction, 88.5 ± 4.4%; not significant). To achieve comparable hyperpolarizing currents across illuminations (estimated from the suppression of the response to electrical stimulation and the amplitude of the field recording artifact at the onset and offset of illumination), the light intensity was further reduced using neutral density filters (ND filters 0.2-0.6; Thorlabs) placed in the light path between the laser and the optic fiber.

[Figure 1 legend: eNpHR chloride-loading effect. Ai, E_GABA was measured by gramicidin perforated patch recording, applying muscimol with or without previous eNpHR activation. Aii, Sample traces from a single neuron, showing progressively longer eNpHR activation associated with a progressively larger effect on E_GABA, as measured by a voltage ramp during a muscimol-triggered postsynaptic current. Aiii, Scatter plot showing the shift in E_GABA versus the total loading charge, calculated as the integral of the current over the illumination period. The plot includes multiple data points from single dissociated neurons, derived from eNpHR activations of different durations (n cells), and five data points from five neurons (red) in adult brain slices, all from 2 s illumination. B, The recovery of E_GABA after eNpHR activation. Bi, Sample traces from one of the neurons recorded in a brain slice, showing the response to muscimol puffs at progressively longer latencies after a 2 s eNpHR activation. A single-exponential curve (black line) is fitted to the minima of the voltage-ramp responses. Bii, Pooled data showing the recovery of E_GABA after eNpHR activation in dissociated neuron cultures (black) and in neurons recorded in adult brain slices (red). Single-exponential fits of the mean data are shown, but note that the time constants reported in Results are the averages of fits made to each individual cell. C, Example trace showing the excitability, in response to somatic charge injection, of a layer 5 pyramidal cell recorded in an adult brain slice, before, during, and immediately after activation of the eNpHR current (orange bar). D, Phase plots of the rate of voltage change (dV/dt) versus voltage for AP doublets immediately before (red) and within 1 s after a 5 s eNpHR activation (black), illustrating that eNpHR activation had no lasting effect on the AP threshold or shape. E, Measures of input resistance (R_N) normalized to the pre-optogenetic-activation value, showing a pronounced drop during the optogenetic activation but a rapid recovery. The post-optogenetic measures were taken within 1 s of the end of the illumination period.]
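The line-length measure used above to match suppression across probes is straightforward to compute; a minimal sketch (the sampling rate matches the recording setup, but the traces are invented):

```python
import numpy as np

def line_length(x):
    """Line length of a signal segment: the sum of absolute sample-to-sample steps."""
    return np.abs(np.diff(x)).sum()

fs = 10_000  # Hz, matching the multichannel sampling rate

def post_stim(trace, t_stim_s):
    """The 0.5 s window of signal following the stimulation artifact."""
    i0 = int(t_stim_s * fs)
    return trace[i0:i0 + int(0.5 * fs)]

rng = np.random.default_rng(1)
evoked = rng.normal(0, 1.0, fs)        # hypothetical unfiltered evoked response
suppressed = rng.normal(0, 0.2, fs)    # hypothetical response during illumination
reduction = 100 * (1 - line_length(post_stim(suppressed, 0.0)) /
                   line_length(post_stim(evoked, 0.0)))
print("%.1f%% reduction in line length" % reduction)
```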
Our patch-clamp studies showed that a more stable optogenetic current could be obtained by co-illumination with both 561 and 491 nm light, thought to be attributable to improved cis-to-trans recovery of the activated retinal by blue light (Han and Boyden, 2007). Therefore, we provided co-illumination of the brain slices, using the 561 nm laser light with blue epifluorescence light (460 ± 20 nm excitation filter) through a 4× air objective (0.28 numerical aperture; Nikon). We confirmed that this co-illumination strategy provided an enhanced and sustained eNpHR current by assessing the suppressive effect on electrical stimulation at different times during a long illumination period.

Spike-timing analyses. Frequency band analysis and all spectrograms were computed on the raw data, whereas for single-unit and multiunit analyses a bandpass filter of 300-5000 Hz was applied and spikes were detected by a simple thresholding algorithm. Spike analyses were only performed for periods when the tissue was not illuminated (that is to say, not during the periods of optogenetic activation). To analyze spike timing with respect to the dominant oscillation, detected spikes were plotted on the Hilbert transform of the corresponding dominant oscillation in the 75-300 Hz band. Angle and linear histograms were derived from the Hilbert plot. Baseline data were centered and fitted to a Gaussian curve to obtain the baseline half-width. To assess the AP jitter and out-of-phase firing across multiple experiments, we derived a "half-width index", calculated using the following formula:

Half-width index = (spikes outside baseline HW) / (spikes within baseline HW),

where HW is the half-width measured in the baseline period. Note that the baseline HW is used for all analyses of a single experiment, including for the optogenetically primed datasets, which have a modal histogram peak of different width.
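A compact sketch of the spike-phase analysis and half-width index just described (the filter order and the mode-finding step are simplifications of ours, not the authors' exact pipeline):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 10_000  # Hz

def spike_phases(lfp, spike_times_s):
    """Phase of the 75-300 Hz band at each spike time, via the Hilbert transform."""
    b, a = butter(3, [75 / (fs / 2), 300 / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, lfp)))      # radians, -pi..pi
    idx = (np.asarray(spike_times_s) * fs).astype(int)
    return phase[idx]

def half_width_index(phases, mode, baseline_hw):
    """Ratio of spikes outside the baseline half-width window to spikes within it."""
    d = np.abs(np.angle(np.exp(1j * (phases - mode))))  # circular distance to the mode
    inside = int((d <= baseline_hw).sum())
    return (len(phases) - inside) / inside
```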
Computer simulations of firing patterns. Modeling was performed to provide a simple, intuitive illustration of the phenomenon we describe in our experimental studies, showing the consequence of changing E_GABA in a conductance-based model neuron. All the simulations presented here were run on personal computers (Dell Computer Company) using the NEURON simulation program (Hines and Carnevale, 2001). We will provide the main model files on request but describe their features below. All simulations were run on simple four-compartment models possessing a soma, an axon, and a single dendrite comprising a short proximal compartment and a long distal one. The passive properties were homogeneous throughout the cells [membrane capacitance, 1 μF/cm²; axial resistivity, 160 Ω·cm; leak conductance (g_leak), 0.66 pS/μm²; g_leak reversal potential, −80.6 mV; resting membrane potential, −76 mV]. The input resistance of the passive structure (with zero synaptic conductance) was relatively high (~1 GΩ), primarily because of the relatively small structure of the cell, but the membrane time constant (passive τ₀ ≈ 27 ms) was consistent with that measured in layer 2/3 pyramidal cells at physiological temperature (Trevelyan and Jack, 2002). All model cells had the same somatic and axonal active conductances: a Hodgkin-Huxley-type Na⁺ conductance (peak conductance, 2000 pS/μm²) and two non-inactivating K⁺ conductances (peak conductances, 5 and 3.5 pS/μm², respectively). Ten excitatory synaptic conductances (E_rev = 0 mV) were located on the distal compartment, spaced equally along its length, providing a persistent, noisy excitatory drive of mean 6.4 pS/μm² (calculated over the entire distal dendritic compartment) with a variance of 10%. This noisy excitatory drive was implemented as described by Destexhe et al. (2001) and reduced the input resistance to ~170 MΩ. In the absence of inhibition, this generated a high rate of spiking, with no periodicity (see Fig. 4A). The threshold for AP generation was −57.7 mV.

Inhibitory synapses were located only on the proximal dendritic tree, to simulate the powerful basket cell inputs that are known to dictate pyramidal spike timing (Cobb et al., 1995). Because our intention was only to explore the constraint of pyramidal firing by basket cells, we did not include other inhibitory synapses in our models. Basket cells were set to fire at 100 Hz and deliver a train of postsynaptic conductance events of steady amplitude. Each synaptic event was modeled as a transient conductance, described by a rising time constant (τ_rise = 0.2 ms) and a decay time constant (τ_decay = 2.5 ms) as follows:

g(t) = W · (e^(−t/τ_decay) − e^(−t/τ_rise)),

where g is the conductance and W is the synaptic weight, giving a peak conductance of 15.9 pS/μm² over the whole proximal dendritic compartment. We ran separate simulations for a range of E_GABA values from −45 to −85 mV. At 100 Hz, the mean inhibitory conductance was 4.86 pS/μm² across the whole compartment. Consistent with experimental models, our simulations showed that this high-frequency proximal dendritic inhibition imposed a very clear phase relation on the times of pyramidal firing, with respect to the rhythm of the basket cell firing, for all values of E_GABA. We will make the NEURON code freely available on request.

[Figure 2 legend (fragment): ..., with electrical stimulation during the dark periods (5 s). Electrical stimulation was applied 0.5 s after and 4.5 s before the illumination. The interval between stimuli was always 30 s. Baseline periods had the same electrical stimulation frequency (period, t = 30 s) but no eNpHR illumination. Bii, Representative traces and spectrograms taken during two periods of electrical stimulation without eNpHR illumination (Base1 and Base2) and two periods with illumination (HP1 and HP2). Note the large increase in power at 300-600 Hz during the eNpHR-priming periods, which reversed rapidly without illumination (Base2 and Ci). C, Composites of sequential spectrograms showing the changes during the entire experiment. Note the prominent band at ~200-500 Hz, indicative of a rise in high-frequency power induced by eNpHR priming, and the reversion to baseline without illumination. Similar experiments with Arch induced a small increase in the amplitude of the network event but without any change in the high-frequency activity.]

The simulations of the population firing were made by convolving the spike-phase plots with histograms of E_GABA values for two distributions, one taken from Huberfeld's measures (mean E_GABA = −60.8 mV; Huberfeld et al., 2007) and a second, "normal" distribution that was shifted in a hyperpolarizing direction (mean E_GABA = −68.5 mV). We made eight simulations of the spike-phase plots for values of E_GABA between −80 and −45 mV (5 mV steps) and multiplied each distribution by the number of cells in the corresponding bin, before normalizing the plots to give an ordinate axis in terms of the probability of a spike per bin per cycle.

Offline analysis was done using Igor (WaveMetrics), MATLAB (MathWorks), and Microsoft Excel software. Box plots were generated using online software (http://boxplot.tyerslab.com). Statistics are given as mean ± SEM unless otherwise stated, and significance was tested using paired Student's t tests.
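The dual-exponential conductance above corresponds to NEURON's built-in Exp2Syn mechanism, so one such basket-cell-like input can be sketched as follows. The geometry, weight and drive parameters are illustrative, not taken from the authors' model files:

```python
from neuron import h
h.load_file("stdrun.hoc")

proximal = h.Section(name="proximal_dend")
proximal.L, proximal.diam = 20, 2        # illustrative geometry (um)
proximal.insert("pas")

syn = h.Exp2Syn(proximal(0.5))
syn.tau1 = 0.2                           # rise time constant (ms)
syn.tau2 = 2.5                           # decay time constant (ms)
syn.e = -70                              # E_GABA (mV); varied from -45 to -85 in the paper

stim = h.NetStim()                       # 100 Hz basket-cell-like drive
stim.interval, stim.number, stim.start = 10, 100, 5
nc = h.NetCon(stim, syn)
nc.weight[0] = 0.001                     # uS; illustrative synaptic weight

h.finitialize(-76)                       # the model's resting potential (mV)
h.continuerun(200)                       # run 200 ms of simulated time
```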
Transient chloride loading of neurons using optogenetics

Perforated patch recordings of dissociated pyramidal cells in culture indicated that activation of eNpHR for just a few seconds could induce positive shifts in E_GABA of up to 25 mV (Fig. 1A), confirming previous work by Raimondo et al. (2012). The shift in E_GABA decayed back to baseline levels with a time constant, τ_decay, of 28.12 ± 9.26 s (n = 14; Fig. 1B). We repeated these experiments on pyramidal cells in brain slices prepared from adult mice (5 months) and showed a qualitatively similar effect (τ_decay = 8.0 ± 2.8 s; n = 5). Notably, both groups showed large variance in the calculated time constants, indicating some degree of heterogeneity with respect to this cellular behavior; consequently, although the averages suggested that the decay was faster in brain slices, the difference from the cultured neurons was not significant in our samples (t_s = 1.36; 0.2 < p < 0.3).

Importantly, other cellular parameters affecting neuronal excitability returned rapidly to normal after the end of eNpHR activation. In whole-cell current-clamp recordings of pyramidal cells in brain slices prepared from adult mice, eNpHR activation caused an average hyperpolarizing shift of 23.0 ± 1.5 mV, strongly suppressing firing driven by current injection and also inducing a 27 ± 8% drop in input resistance. Measurements taken within the first second after the end of eNpHR activation showed no difference from the pre-illumination measures for resting membrane potential (pre-eNpHR E_m = −74.8 ± 2.5 mV; post-eNpHR E_m = −75.0 ± 2.9 mV; n = 4 cells), input resistance (Fig. 1C; change from baseline, 4.4 ± 1.4%), AP threshold [pre-eNpHR threshold (above E_m), 33.0 ± 1.2 mV; post-eNpHR threshold, 33.3 ± 1.2 mV; n = 4], AP height (peak − threshold: pre, 53.9 ± 4.4 mV; post, 52.9 ± 4.5 mV; n = 4) or AP shape (Fig. 1D). At this same time point, E_GABA was shifted positively by 5.9 ± 1.0 mV in cultures (n = 12 cells) and by 3.9 ± 0.6 mV in adult pyramidal cells (n = 4 cells) in acutely prepared brain slices (culture vs slice comparison: t_s = 1.09; 0.2 < p < 0.4, nonsignificant). In short, the only apparent cellular change persisting beyond 1 s was the change in E_GABA.

We made similar measurements after Arch activation. This optogenetic protein also hyperpolarizes neurons, but by pumping protons out of cells. As with eNpHR activation, Arch activation was associated with a suppression of firing and a drop in input resistance (normalized to pre-illumination, R_n = 61 ± 20%, n = 4), but both corrected rapidly within a second of ending illumination (post-illumination, normalized R_n = 106 ± 4%), as did the baseline E_m (pre, −71.1 mV; post, −70.6 mV; n = 4; t_s = 0.03, n.s.), the AP threshold relative to baseline E_m (pre, 32.1 ± 4.2 mV; post, 32.5 ± 4.3 mV; n = 4; t_s = 0.02, n.s.), and the AP height (peak − threshold: pre, 56.2 ± 6.7 mV; post, 54.9 ± 7.0 mV; n = 4; t_s = 0.13, n.s.). However, unlike after eNpHR activation, Arch activation produced no change in E_GABA (measured at ~1 s after illumination; pre, −64.5 ± 5.1 mV; post, −64.7 ± 5.5 mV; n = 3).

[Figure 3 legend (fragment): ..., showing the changes in spectral power for different frequency bands, normalized to the baseline for each recording, for epochs of repeated eNpHR-primed (red) and Arch-primed (black) illumination (ON) and intermediary periods without illumination (OFF). B, Box plot of the averaged ON-period power during eNpHR priming and Arch priming for different frequency bands. eNpHR activation positively modulates activity across all frequency bands but only differs significantly from Arch activation in the 300-600 Hz frequency band (n = 11 for eNpHR and Arch, p < 0.01, t test). Halo, Halorhodopsin.]
Thus, eNpHR activation can non-invasively induce a transient loading of Cl⁻ into many neurons simultaneously, through broad illumination of networks expressing the protein. We therefore used this technique to modulate intraneuronal Cl⁻ levels in pyramidal cells in brain slices, to investigate what patterns of activity ensue when neurons become loaded with Cl⁻, as is thought to happen in epileptic pathology. We recorded neocortical activity patterns in brain slices taken from animals expressing either eNpHR or Arch (Chow et al., 2010), both under the CaMKIIα promoter (Fig. 2). We used a methodology developed by Sanchez-Vives and McCormick (2000), in which the extracellular divalent cation concentration is lowered slightly (1.2 mM Ca²⁺, 1 mM Mg²⁺), because this allowed the reliable triggering of sustained bursts of activity, with a prominent discharge of fast-spiking interneurons (Shu et al., 2003), by white matter stimulation (putative thalamic inputs). We first confirmed that activation of the optogenetic probe, by illumination through an optic fiber placed adjacent to the recording electrode, could substantially suppress the electrically evoked network response (Fig. 2Aiii). We then changed the timing of electrical stimulation so that it was delivered after the optogenetic activation, to investigate the chloride-loaded state induced by eNpHR priming. Prolonged eNpHR activation might also induce rebound firing simply as a reaction to the hyperpolarization. Therefore, we repeated these experiments in brain slices expressing instead Arch, another hyperpolarizing optogenetic tool, which pumps protons out and thus allows us to separate the chloride-loading effects from rebound activation.

Distinctive network activity induced by chloride loading

Chloride loading of the pyramidal population, by previous eNpHR activation ("eNpHR priming"), reliably induced a large increase in evoked network activity, with a prominent signal in the high-frequency bandwidth between 150 and 600 Hz. This effect reversed rapidly, within five trials (150 s), once eNpHR priming was stopped (Figs. 2B,C, 3A). In contrast, after Arch priming, there was no change in the high-frequency power from baseline trials, confirming that the eNpHR-priming-induced high-frequency activity is Cl⁻ dependent and not merely attributable to rebound firing caused by synchronized hyperpolarization of pyramidal cells (Figs. 2C, 3A,B). We performed more detailed analysis of the high-frequency components of the extracellular signal between 75 and 600 Hz. The eNpHR-primed tissue showed increases above those shown by Arch-primed tissue at all frequencies, but the difference was only significant for the 300–600 Hz bin [Fig. 3A; first priming, eNpHR (4.13 ± 0.87, n = 11) vs Arch (1.44 ± 0.26, n = 11), p < 0.01; second priming, eNpHR (3.67 ± 1.09, n = 7) vs Arch (1.56 ± 0.31, n = 10), p < 0.05, Student's t test]. The 600–1200 Hz signal showed virtually no change for either eNpHR- or Arch-primed tissue, indicating that the eNpHR-priming effect at 300–600 Hz is unlikely to arise simply from a signal harmonic process, because that would also have produced increases at higher-order harmonics. In both baseline and eNpHR-primed periods, there were episodes of bursting activity that showed up as prominent bands in the spectrograms between 150 and 300 Hz (Fig. 4A), with an additional band at approximately double this frequency after eNpHR priming (Fig. 4A, right).
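A minimal sketch of the band-power analysis described above (assuming scipy's spectrogram with an arbitrary window length and sampling rate; the trace here is a random placeholder, not recorded data):

```python
import numpy as np
from scipy.signal import spectrogram

def band_power(trace, fs, f_lo, f_hi):
    """Mean spectral power of an extracellular trace within one frequency band."""
    f, t, sxx = spectrogram(trace, fs=fs, nperseg=int(fs * 0.25))
    mask = (f >= f_lo) & (f < f_hi)
    return sxx[mask].mean()

fs = 10_000.0                                  # assumed sampling rate, Hz
rng = np.random.default_rng(0)
trace = rng.standard_normal(int(fs * 30))      # placeholder 30 s recording

bands = [(75, 150), (150, 300), (300, 600), (600, 1200)]
baseline = {b: band_power(trace, fs, *b) for b in bands}
# Primed-epoch powers would then be normalized to these baseline values,
# e.g. band_power(primed, fs, 300, 600) / baseline[(300, 600)].
```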
We speculated that the lower-frequency bandwidth (<300 Hz) may be dictated by bursts of fast-spiking interneurons, which are known to fire in this range; because the output of these neurons can entrain pyramidal firing, we analyzed the timing of the multiunit activity with respect to the dominant frequency band between 75 and 300 Hz. We did this by performing a Hilbert transform on the 75–300 Hz band, which allows the oscillating signal to be plotted as a continuous circular trajectory, and then plotting histograms of the spike times with respect to this cycle (Fig. 4A). Spiking during the baseline period was virtually entirely confined to the lower left quadrant (eNpHR-expressing tissue: mean phase angle, 3.58 ± 0.11 rad, n = 7; Arch-expressing tissue: mean phase angle, 3.24 ± 0.14 rad, n = 6), which represented spikes occurring close to the trough of the local field potential. In contrast, after eNpHR priming, the spiking was far more intense, spike timing showed a marked increase in jitter with respect to the dominant high gamma oscillations in all slices, and, in three of eight slices, there was a biphasic distribution, with a prominent peak in the histogram exactly out-of-phase with the main peak. This feature was unique to the eNpHR-priming experiments and did not occur with Arch priming (Fig. 4B). To pool data from different experiments, we derived an out-of-phase index by fitting a Gaussian curve centered on the circular mean to define the half-width of the modal peak, and then calculating the ratio of the number of spikes outside this half-width limit to the number within it (Fig. 4C). This showed a highly significant increase in out-of-phase spiking in the eNpHR-primed tissue (half-width index, 1.35 ± 0.15, n = 8) compared with both baseline periods (0.56 ± 0.04, n = 8, p < 0.001) and the control Arch-primed tissue (0.60 ± 0.20, n = 6, p < 0.02; Fig. 4D).

[Figure 4 legend fragment: spike times (early to late) were plotted on the Hilbert transform of the dominant oscillation (75–300 Hz; blue circular trace), with rose plots giving the numbers of APs at different phases of the oscillation; note the out-of-phase spiking in the eNpHR-primed dataset, also apparent as a second minor peak in the conventional histogram (data duplicated beyond −π and π). B, Arch priming causes a rebound increase in spiking (contrast the calibration bars of the histograms) but no change in the phase distribution of spikes. C, the half-width index measured the ratio of out-of-phase to in-phase spikes, the latter being spikes within bounds set by the half-width of a Gaussian fit to the main spike peak in the baseline histograms; the left column shows raster plots for baseline and Cl⁻-loaded tissue, the right column the same data as histograms of spike times. D, pooled half-width indices, significantly higher for Cl⁻-loaded (eNpHR-primed) slices than for baseline (n = 8, p < 0.001, t test) and Arch-primed tissue (n = 6, p < 0.02, t test). Halo, halorhodopsin.]
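A minimal sketch of the spike-phase analysis described above (assuming scipy's filtering and Hilbert transform; the filter order and the half-width argument are illustrative choices, not necessarily the paper's exact settings):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def spike_phases(trace, spike_idx, fs, f_lo=75.0, f_hi=300.0):
    """Phase of the 75-300 Hz field oscillation at each spike sample index."""
    b, a = butter(3, [f_lo, f_hi], btype="bandpass", fs=fs)
    narrow = filtfilt(b, a, trace)
    phase = np.angle(hilbert(narrow))              # continuous circular trajectory
    return phase[np.asarray(spike_idx)]

def half_width_index(phases, half_width):
    """Ratio of out-of-phase to in-phase spikes, with 'in phase' defined by the
    half-width (rad) of a Gaussian fit to the main peak of the baseline histogram."""
    mu = np.angle(np.mean(np.exp(1j * phases)))    # circular mean phase
    dev = np.angle(np.exp(1j * (phases - mu)))     # wrapped deviation from the mean
    inside = np.abs(dev) <= half_width
    return (~inside).sum() / max(inside.sum(), 1)
```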
An important consideration when analyzing such spike-phase relationships, especially when relating units and field recordings made with the same electrode, is spectral leak: a very high-frequency event creates a small oscillation also in lower bandpass-filtered traces. To separate the spectral leak from the spike-independent field oscillation, we performed two control analyses to examine its potential confounding effects. First, we amputated the unit spikes in the raw data before performing the 75–300 Hz filtering (Fig. 5). This reduced the amplitudes of the peaks (Fig. 5A, contrast the blue and black lines), and the spike-phase distributions showed slightly broader peaks than for the normal signal, both indicating that there was indeed a small spectral leak effect; importantly, however, the distribution remained strongly skewed. We further tested the confounding influence of spectral leak with a second, more extreme test, by examining the phase relationship at a time ahead of the spikes. Our reasoning was that, if the spikes were indeed embedded within genuine oscillations, then these oscillations would extend sufficiently far in front of and after the spike that the relationship would still exist for time-shifted points. We chose a forward time shift to avoid any postsynaptic influences. We tested increasing Δt until the amputated and non-amputated spike-phase plots were identical, indicating a time point beyond the effect of the spikes. The Δt was specific for each trace and ranged between 1.3 and 1.8 ms ahead of the spike. Importantly, for all three analyses [(1) amputated; (2) time-shifted; and (3) both amputated and shifted data], the baseline and Arch-primed datasets produced single-peak phase distributions, whereas the eNpHR-primed datasets produced double peaks (Fig. 6).

[Figure 5 legend: control analyses for spectral leak, showing a raw trace with spikes (blue) and its spike-amputated version (black), the corresponding 75–300 Hz bandpass-filtered traces with spike times marked, and the shifted AP locations used for the time-shifted analysis; amputated and time-shifted variants give smaller or phase-shifted peaks, but the spike-phase plots remain heavily skewed for both raw and amputated data.]

[Figure 6 legend: spike-phase relationships were preserved for all three spectral-leak control analyses; baseline spike-phase plots keep single peaks and eNpHR-primed plots keep double peaks, with highly significant differences between eNpHR-primed and Arch-primed matched analyses for all control paradigms (*p < 0.05; **p < 0.01; ***p < 0.001). Halo, halorhodopsin.]

[Figure 7 legend: heterogeneity in levels of intracellular Cl⁻ can explain the appearance of out-of-phase population firing. A, NEURON simulation of how a train of high-frequency IPSCs from a fast-spiking interneuron, superimposed on a noisy desynchronized glutamatergic drive, creates patterned firing in the pyramidal cell (AP threshold approximately −48 mV). B, simulations in the same model at four different GABAergic reversal potentials; the firing probability is plotted with respect to the field oscillation, which is approximately π/4 phase shifted from the start of the IPSC, as judged by the timing of fast-spiking interneuron APs (Hasenstaub et al., 2005). These probability histograms are convolved with estimates of the distribution of E_GABA in the pyramidal population [one taken from Huberfeld et al. (2007), mean E_GABA = −64 mV; the other a simulated normal distribution shifted to a slightly more hyperpolarized mean E_GABA = −68 mV] to yield estimates of the population firing for the "normal" and "Huberfeld" populations.]
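A minimal sketch of the two control analyses (the interpolation window and the sign convention for the forward shift are assumptions for illustration; spike_phases is the hypothetical helper from the previous sketch):

```python
import numpy as np

def amputate_spikes(trace, spike_idx, fs, window_ms=2.0):
    """Replace a short window around each spike with a linear interpolation,
    so spike waveforms cannot leak into the 75-300 Hz filtered trace."""
    out = trace.copy()
    half = int(window_ms * 1e-3 * fs / 2)
    for i in spike_idx:
        lo, hi = max(i - half, 0), min(i + half, len(out) - 1)
        out[lo:hi + 1] = np.linspace(out[lo], out[hi], hi - lo + 1)
    return out

def time_shifted_idx(spike_idx, fs, dt_ms):
    """Sample indices dt_ms ahead of each spike for the time-shifted control."""
    shift = int(dt_ms * 1e-3 * fs)
    return np.clip(np.asarray(spike_idx) - shift, 0, None)
```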
We derived half-width indices for all three analyses, which all showed highly significant differences between eNpHR-primed and baseline periods, and also between the eNpHR-primed and Arch-primed data (Fig. 6). Therefore, we concluded that, although there is indeed a small but demonstrable spectral leak effect, it cannot explain the distinctive spike-phase plots found in the different experiments.

Activity patterns explained by heterogeneity within the Cl⁻-loaded pyramidal population

We hypothesized that a mechanism involving fast-spiking interneurons, which are known to influence the timing of pyramidal neurons and are also thought to discharge at high rates in response to surges in network activity (Cammarota et al., 2013), might explain these spiking patterns. In particular, we investigated how the influence of a high-frequency inhibitory barrage might be distorted by changing the intracellular Cl⁻ levels in the postsynaptic population, using compartmental neuronal modeling (Hines and Carnevale, 2001). We simulated an intense, desynchronized glutamatergic drive onto a pyramidal cell, and then further delivered a high-frequency barrage of IPSCs onto the soma and proximal dendrites; Figure 7A shows repeated simulations of postsynaptic spiking for a cell with a relatively hyperpolarizing E_GABA (−75 mV), which are collated into spiking phase histograms based on the cycle of IPSCs. We next simulated how the effect of this same inhibitory barrage changed as E_GABA shifted toward more positive levels (Fig. 7B), showing that the window of opportunity for pyramidal spiking broadens as the effective inhibition diminishes (E_GABA shifting from −75 to −65 to −55 mV). Eventually, when E_GABA exceeds AP threshold (more than −48 mV), there is a sudden 180° (π) phase shift in the spiking. Previous studies have shown that the output of fast-spiking interneurons is synchronized precisely onto the multiple postsynaptic pyramidal cells (Miles et al., 1996; Trevelyan, 2009). If we consider that this postsynaptic population has a distribution of E_GABA values, as measured by Huberfeld et al. (2007) and also shown using Cl⁻ imaging (Dzhala et al., 2010), we can use the simulations of firing patterns for the single cell at different E_GABA levels to simulate the multiunit spiking patterns in this heterogeneous population. We derived the population response using two different distributions of E_GABA values, one as described in resected human epileptic hippocampi (Huberfeld et al., 2007) and a negatively shifted one as the physiological distribution.
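A minimal sketch of this population convolution (the phase histograms and cell counts below are placeholders standing in for the simulated single-cell histograms and the measured E_GABA distributions):

```python
import numpy as np

# Rows: simulated spike-phase histograms for E_GABA = -80 ... -45 mV (5 mV steps);
# columns: phase bins across one IPSC cycle. Placeholder values only.
e_gaba_bins = np.arange(-80, -44, 5)                 # eight bins, as in the text
phase_hist = np.random.rand(len(e_gaba_bins), 36)

def population_firing(cell_counts, phase_hist):
    """Weight each single-cell phase histogram by the number of cells in its
    E_GABA bin, then normalize to probability of a spike per bin per cycle."""
    pooled = (cell_counts[:, None] * phase_hist).sum(axis=0)
    return pooled / pooled.sum()

# Illustrative cell counts for a 'normal' distribution (mean near -68.5 mV)
# and a depolarized 'Huberfeld'-like distribution (mean near -60.8 mV).
normal_counts = np.array([1, 4, 9, 12, 8, 3, 1, 0], dtype=float)
huberfeld_counts = np.array([0, 1, 3, 7, 10, 8, 5, 2], dtype=float)

p_normal = population_firing(normal_counts, phase_hist)
p_huberfeld = population_firing(huberfeld_counts, phase_hist)
```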
The physiologically shifted distribution (mean E_GABA = −68.5 mV; Fig. 7B, top right) showed a unimodal spiking distribution with respect to the rhythm imposed by the basket cells. In contrast, only a small positive shift in E_GABA (mean E_GABA = −60.8 mV; Fig. 7B, bottom right) allowed a marked increase in spiking, because of the broadening of the main peak, but also the appearance of a prominent out-of-phase peak, reflecting the activity of the subpopulation of neurons with pathologically high levels of Cl⁻ and a correspondingly high E_GABA, in excess of AP threshold. These distributions reproduced very well the histograms drawn from different eNpHR-primed brain slices (Fig. 4A).

Cl⁻ loading only triggered full ictal events in conjunction with other pathological activity

This model thus provides a coherent explanation of how our eNpHR-priming experiments can give rise to activity patterns that have also been described in epileptic animals (Foffani et al., 2007; Ibarz et al., 2010). It was therefore a surprise that, in none of these experiments, with either eNpHR or Arch priming, did repeated electrical stimulation of the network actually trigger ictal-like events with hypersynchronous, rhythmic discharges. We next asked whether Cl⁻ loading altered the seizure threshold for other treatments. We examined the 4-aminopyridine (4-AP) model, because this model is known to trigger intense bursts of firing in the fast-spiking interneuronal population (Cammarota et al., 2013), reasoning that such activity may escalate toward ictal activity if its postsynaptic output were imposed on a population of pyramidal cells with raised intracellular Cl⁻ levels. Epileptiform activity can be readily induced by bath application of 50–100 µM 4-AP in brain slices. When we used only 20 µM 4-AP, 4 of 20 slices showed epileptiform activity very quickly (<10 min), and this activity was not in any way modulated by subsequent eNpHR activation (Fig. 8). The majority of slices (80%) were quiescent, even when bathed in 20 µM 4-AP for >1 h. However, when these quiescent slices were then eNpHR primed, full ictal activity was induced very rapidly, within a few minutes, in all but a single slice [15 of 16 slices; latency to the first full ictal event after the start of eNpHR priming, 8.9 ± 2.7 cycles (30 s cycles of 25 s illumination/5 s dark); latency from the start of the first cycle, 268 ± 80 s, n = 15; Fig. 8B].

[Figure 8 legend fragment: an epileptiform discharge starting immediately after the end of a period of eNpHR activation (orange bar); note how the event persists even when the illumination (eNpHR activation) is resumed. B, example traces showing three initial patterns of activity in 20 µM 4-AP before eNpHR activation: Type 1, no evidence of any epileptiform activity (12 of 20; black trace); Type 2, occasional brief, small-amplitude interictal events (middle trace, dark gray; n = 4); and Type 3, continual frequent discharges starting within minutes of applying 4-AP ("status epilepticus"; bottom trace, light gray; n = 4). C, eNpHR activation caused a rapid escalation of epileptiform activity, with sustained epileptiform bursts (full ictal events) appearing in 15 of 16 (94%) of the recordings that initially showed Type 1 (non-epileptic) or Type 2 (interictal events only) activity; Type 3 activity was not obviously modulated by eNpHR activation, persisting through periods of illumination without changing frequency or amplitude. Halo, halorhodopsin.]
The ictal activity generally started immediately after the light was turned off, without the need for electrical stimulation, and often persisted into the next illumination cycle, resisting the inhibitory action of the eNpHR (see the expanded example traces in Fig. 8A).

Discussion

We explored a key hypothesis in epilepsy: that chloride dysregulation in neurons is a major factor in triggering seizures. Surprisingly, Cl⁻ loading by itself did not trigger full ictal activity, even when electrical stimulation was delivered to the network. However, when Cl⁻ loading was associated with other pathological activation, by bathing in low levels of 4-AP, it did rapidly induce ictal activity. Thus, we make a clear distinction between how Cl⁻ loading creates a primed brain state and the requirement for some adjunct pathology to actually trigger a seizure. Previous animal work has suggested that intense bursts of GABAergic activity can themselves be a direct trigger of seizures, with the proposed mechanism being a positive shift in E_GABA (Bernard et al., 2000; Gnatkovsky et al., 2008). An important component of the pathology may be that this pattern of inhibitory discharge can synchronize the postsynaptic population of pyramidal cells (Klaassen et al., 2006). These issues are explored further in several review articles (Menendez de la Prida and Trevelyan, 2011; Jiruska et al., 2013). Our results suggest that the combination of such intense interneuronal discharges together with a progressively shifting E_GABA may be particularly ictogenic, and may also give rise to certain previously unexplained features of electrophysiological recordings immediately before seizure onset. Our findings have a clear parallel with recent studies of human brain tissue resected during epilepsy surgery (Cohen et al., 2002; Huberfeld et al., 2007; Pallud et al., 2014), which show spontaneously occurring interictal events when bathed in conventional ACSF. These interictal events are sensitive to GABAergic blockade, suggestive of a Cl⁻-loaded, excitatory GABAergic state, but triggering full ictal events in these slices required excitability to be further enhanced by bathing in raised K⁺. Importantly, the subsequent ictal activity appeared to arise out of a fundamentally different type of transient discharge that was not sensitive to GABAergic blockade (Pallud et al., 2014). In other words, as with our data, we can distinguish between interictal activity associated with Cl⁻ dysregulation and a second pattern of pathological activity that is independent of Cl⁻ dysregulation; the combination of these is associated with full ictal activation, but the first alone does not predispose to full ictal events. We contrasted the immediate changes in network excitability after eNpHR priming versus Arch priming. We attempted to achieve an approximately equivalent suppressive effect on network activation by Arch and eNpHR by adjusting the illumination intensity. Of course this is rather inexact, but the key issue is that, in both optogenetic paradigms, we clearly achieved some measurable network suppression, and our data collection then focused on the residual, post-illumination effects, of which the change in E_GABA appeared to be the most persistent. After activation of both eNpHR and Arch, there was a rise in excitability, suggesting that rebound activation may contribute, but the effect was far larger for eNpHR priming.
Importantly, there were other, highly distinctive changes in firing patterns induced by eNpHR priming that are well captured by a model of heterogeneous Cl⁻ loading in the population of pyramidal cells, causing them to react differently to the same high-frequency GABAergic synaptic barrage. The eNpHR-priming network changes also reversed with a timeframe similar to the recovery of E_GABA measured in single cells. Collectively, this strongly suggests that the eNpHR-priming effect identifies unique features of network excitability attributable to Cl⁻ loading. These changes in activity in the Cl⁻-loaded tissue correspond well with activity patterns recorded in epileptic animals (Foffani et al., 2007; Ibarz et al., 2010), in that both show particular high-frequency field oscillations with an apparent harmonic feature. The activity in the epileptic animals has been explained in terms of individual neurons firing at lower rates but with different subpopulations of neurons firing out-of-phase with each other. Our Cl⁻-loading experiments support this view, and our model explains how this binary segregation could occur, depending on whether E_GABA is below or above the AP threshold in different cells. The important feature of this model is that there is a distribution of E_GABA values, and, in this situation, the eNpHR-priming effect can arise with relatively small (single-figure millivolt) shifts in the mean E_GABA. There is an increasing body of evidence that links high-frequency oscillations to the focus of epileptic pathology in humans, too (Bragin et al., 2002b; Staba et al., 2002). There are, of course, other suggested mechanisms for the origin of high-frequency oscillations, albeit without this "harmonic" feature. Any intense, focal activation of large numbers of neurons will generate a high-frequency signal, and there are several paradigms of epileptiform activity in which this occurs independent of any fast-spiking interneuron involvement. For instance, epileptic activity may arise from local loss of inhibition, or even in preparations without synaptic function (Draguhn et al., 1998), in which ephaptic (Jiruska et al., 2010) or gap junction-mediated (Traub et al., 1999) spread has been implicated. However, our data provide the first evidence that there may be characteristic features of high-frequency activity that are pathognomonic for Cl⁻ dysregulation. It should further prompt us to look for other features that may also be used to subclassify pathological activity patterns, particularly if they also offer insights into the underlying pathology. Our model also pertains to another long-standing puzzle about epileptic spiking patterns, which is that, as the cortical network is recruited into a full ictal event, there is an increase in spiking jitter between neurons (Netoff and Schiff, 2002). This result had seemed to contradict the traditional concept of a progression toward hypersynchrony, but our data now offer an explanation, suggesting instead that the progressive trend is the shift in E_GABA, and a consequence of this is that the initial apparent effect is a broadening of the spiking window, before the critical stage is reached when E_GABA surpasses AP threshold.
2016-10-08T01:47:31.943Z
0001-01-01T00:00:00.000
{ "year": 2015, "sha1": "8a4ae2d0e961833102e94c8b07d37f5e5e3369b6", "oa_license": "CCBY", "oa_url": "https://www.jneurosci.org/content/jneuro/35/20/7715.full.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "33d92b9815d0d28efba34ac9fb566664fdd08e82", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
119093839
pes2o/s2orc
v3-fos-license
One-loop quantization of rigid spinning strings in $AdS_3 \times S^3 \times T^4$ with mixed flux

We compute the one-loop correction to the classical dispersion relation of rigid closed spinning strings with two equal angular momenta in the $AdS_3 \times S^3 \times T^4$ background supported by a mixture of R-R and NS-NS three-form fluxes. This analysis is extended to the case of two arbitrary angular momenta in the pure NS-NS limit. We perform this computation by means of two different methods. The first method relies on the Euler-Lagrange equations for the quadratic fluctuations around the classical solution, while the second one exploits the underlying integrability of the problem through the finite-gap equations. We find that the one-loop correction vanishes in the pure NS-NS limit.

Besides, integrability has been shown to be more fruitful in the M_4 = T^4 than in the M_4 = S^3 × S^1 case, where it has been shown to emerge also on its CFT_2 side [31]. In spite of its similarities with the AdS_5 × S^5 scenario, these two backgrounds exhibit some features that obstruct a straightforward integrability approach. The presence of massless excitations plays a prominent role among them, as it seems to be responsible for the mismatch between the computations of the dressing factor for massive excitations performed via perturbative world-sheet calculations [32,33] and crossing equations [9,15]. This link was proposed for the AdS_3 × S^3 × T^4 space in [34], where it is argued that the lack of suppression of wrapping corrections involving massless modes associated to the T^4 factor could explain the discrepancy. Unlike AdS_5 × S^5, which is supported just by a Ramond-Ramond (R-R) five-form flux, both AdS_3 × S^3 × T^4 and AdS_3 × S^3 × S^3 × S^1 can be deformed through the addition of a Neveu-Schwarz-Neveu-Schwarz (NS-NS) three-form flux, which mixes with the R-R one. The aim of this paper is to compute the one-loop correction to the dispersion relation of rigid spinning strings on AdS_3 × S^3 × T^4 in the presence of mixed flux. To this end, we use two different methods. The first one extracts the characteristic frequencies from the Lagrangian of quadratic fluctuations, whose signed sum leads to the one-loop correction. This procedure has already been applied successfully to different string configurations in AdS_5 × S^5 [42-50]. It was also applied to rigid spinning strings on AdS_3 × S^3 × T^4 with pure R-R flux [32]. The second method starts from the construction of the algebraic curve associated to the Lax connection of the PSU(1,1|2)^2/(SU(1,1) × SU(2)) supercoset sigma model. The quantization of the algebraic curve is well understood for the AdS_5 × S^5 case; see [51] for an in-depth explanation. The extension of this procedure to the AdS_3 × S^3 × T^4 space is straightforward for the massive excitations with pure R-R flux. However, the construction of the finite-gap equations requires some extensions when dealing with the full spectrum supported by a general mixed flux. First of all, the introduction of an NS-NS flux shifts the poles of the Lax connection away from ±1. This issue has been solved in [52], where the finite-gap equations proposed for AdS_3 × S^3 × T^4 in the pure R-R regime [25] are generalized. A second problem arises from the massless excitations, which require loosening the usual implementation of the Virasoro constraints [53].
The one-loop correction to the BMN string for AdS_3 × S^3 × T^4 and AdS_3 × S^3 × S^3 × S^1 with pure R-R flux has been obtained within this framework in [53], while the respective correction to the short folded string in such backgrounds has been derived in [33]. The one-loop correction to the BMN string for AdS_3 × S^3 × T^4 with mixed flux has been computed in [52]. The outline is as follows. In section 2 we summarize some features of the rigid classical spinning string solution on R × S^3 ⊂ AdS_3 × S^3 × T^4. The pure NS-NS limit and the restriction to the su(2) sector are then discussed. In section 3 we compute the Lagrangian of the quadratic fluctuations around the rigid spinning solution and solve its equations of motion, hence obtaining their characteristic frequencies. In section 4 we check the results of the previous section by rederiving the characteristic frequencies from the flux-deformed finite-gap equations. In doing so, we neglect contributions from the massless excitations, as we will have proven in section 3 that their net contribution ultimately vanishes. In section 5 we make use of the characteristic frequencies previously obtained to compute the one-loop correction to the dispersion relation. In section 6 we present an argument regarding the extension to non-rigid strings in the NS-NS limit. We close the article with a summary and conclusions. In the appendices we have collected conventions and detailed computations that supplement the main text.

2 The R × S^3 string with mixed flux

In this section we review the dynamics of rigid closed spinning strings on an R × S^3 ⊂ AdS_3 × S^3 × T^4 background in the presence of both R-R and NS-NS three-form fluxes. This kind of solution has been studied, for example, in references [54-57].¹ After doing so, we restrict ourselves to two regimes of interest, the su(2) sector and the pure NS-NS limit, which are the focus of the remainder of the article. The AdS_3 × S^3 × T^4 background metric can be parameterized as in (2.1), with the constraints (2.2). Even though the background is supported by R-R and NS-NS fluxes, only the latter contributes to the bosonic string Lagrangian, through the B-field, where the parameter q ∈ [0, 1] measures the mixing of the two fluxes. In the case of pure R-R flux, q = 0, the theory can be formulated in terms of a pure Green-Schwarz action akin to the AdS_5 × S^5 background. The Lagrangian associated to the bosonic spinning string ansatz in AdS_3 × S^3 reduces in this case to the Neumann-Rosochatius integrable system [62]. Turning on the NS-NS flux introduces an additional term in the latter which does not spoil integrability [55,56]. In fact, the complete Lagrangian remains integrable under this deformation [35]. The pure NS-NS flux limit, q = 1, is of particular interest because the action can be re-expressed as a supersymmetric WZW model [63,64], which leads to several simplifications. Instead of considering the most general setup, the remainder of this paper is devoted to bosonic classical spinning strings rotating in S^3 at the center of AdS_3 with no dynamics along T^4. Thus we fix z_0 = 1 and γ_i = 0, and impose the ansatz (2.4), describing a closed string rotating with two angular momenta in S^3. Dealing with closed string solutions requires periodic boundary conditions of (2.4) on σ,

    r_i(σ + 2π) = r_i(σ) ,        α_i(σ + 2π) = α_i(σ) + 2π m_i ,

where m_i are integer winding numbers.

¹The general spinning string solution encompasses other well-known solutions as particular limits, which are interesting enough to be studied separately. Examples are the giant magnon [58,59], spiky strings [58] and multi-spike strings [60]. Spinning D1-strings have also been studied [61].
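The display form of the ansatz (2.4) did not survive extraction; for orientation, the conventional closed spinning-string ansatz consistent with the quantities r_i(σ), ω_i, α_i(σ) and the periodicity conditions above would read (a sketch of the standard Neumann-Rosochatius form; the original normalizations and coordinate names may differ):

    t = κτ ,    Y_1 = r_1(σ) e^{i(ω_1 τ + α_1(σ))} ,    Y_2 = r_2(σ) e^{i(ω_2 τ + α_2(σ))} ,    r_1² + r_2² = 1 ,

with Y_1, Y_2 complex embedding coordinates of S^3 and t the global time of AdS_3.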
After substituting the ansatz (2.4) into the Polyakov action with the B-field in the conformal gauge, we obtain the Lagrangian (2.6), where the prime stands for derivatives with respect to σ and h = h(√λ) is the coupling constant.² In addition, we can construct three non-vanishing conserved charges from the isometries of the metric: the energy and the two angular momenta. The Euler-Lagrange equations for the radial coordinates are (2.10) and (2.11). On the other hand, the equations of motion for the angular coordinates can be easily integrated, as the Lagrangian only depends on their derivatives; the integration constants v_i can be understood as the momenta associated to α_i. Replacing α'_i by these momenta in the Lagrangian (2.6) leads to the aforementioned deformation of the Neumann-Rosochatius system. The equations of motion must be supplemented with the Virasoro constraints (2.13) and (2.14).

²The relationship between the coupling constant and the string tension is the same as the one for AdS_5 × S^5 at first order, i.e. h = √λ/(4π) + ..., although it might receive corrections (both perturbative and non-perturbative) in the 't Hooft coupling. However, it is known that the first correction O(1) should vanish for the pure R-R case [23,26,32] and it might vanish also for the mixed flux case.

The most straightforward solutions to the equations of motion are those of constant radii, r_i(σ) = a_i (2.15). On these solutions α'_i = m_i, and (2.10) and (2.11) simplify considerably. Furthermore, the restriction (2.2) and the Virasoro constraint (2.14) determine (2.17), or, using the definitions of the angular momenta, (2.18). The dispersion relation for general values of q can only be written down as a series in negative powers of the total angular momentum. Even so, there exist two regimes where we can find a closed analytic expression for the dispersion relation. The first one is the pure NS-NS limit we commented on above, while the second one corresponds to a restriction to an su(2) subsector of the theory, which amounts to setting J_1 = J_2. Note that these two regimes are not mutually exclusive. Let us examine both cases in more depth. In the first limit the equations of motion can be solved explicitly, with J = J_1 + J_2. Using the first Virasoro constraint (2.13) and (2.17), we can then obtain the dispersion relation, which we record for later convenience.

3 Quadratic fluctuations around the classical solution

In this section we derive the Lagrangian for the quadratic fluctuations around rigid spinning string-type solutions on R × S^3. We obtain the equation for their characteristic frequencies, which greatly simplifies both in the case of two equal angular momenta and in the case of pure NS-NS flux. The signed sum of the characteristic frequencies provides the one-loop correction to the classical energy. We derive this effective Lagrangian by splitting the target-space fields into the classical background and fluctuation fields and truncating the action at second order in the latter. We treat the bosonic fluctuations on the sphere, on the anti-de Sitter space, on the torus, and the fermionic fluctuations separately in order to make the section more readable.³

Bosonic fluctuations on S^3

Regarding the fluctuations on the sphere, we take advantage of the spherical symmetry and perform the substitution (3.1), where r̃_i and ρ_i denote the perturbation fields.⁴
Introducing (3.1) into the Polyakov action with the B-field in the conformal gauge, the Lagrangian for the quadratic fluctuations of the fields follows directly, where the dot stands for derivatives with respect to τ.

³In this section we do not label the origin of the frequencies (S, AdS, T or F), as no ambiguity arises.

⁴It is also possible to incorporate the fluctuations as r_i → a_i + r̃_i and ϕ_i → α_i + ω_i τ + φ̃_i instead. Both choices are equivalent, since ρ_i ≈ a_i φ̃_i at first order.

Besides, an orthogonality requirement has to be satisfied between the classical solution and the fluctuation fields. This condition can be seen as a consequence of the perturbation of the Lagrange multiplier in (2.6), promoted as Λ → Λ + Λ̃; the constraint itself is (3.3). Imposing it, the Euler-Lagrange equations for r̃_2, ρ_1 and ρ_2 become (3.4). We are allowed to decompose r̃_2, ρ_1 and ρ_2 in a basis of exponential functions due to the periodic boundary conditions in σ. Supplementing the decomposition with an expansion in Fourier modes in τ, we write the ansatz (3.5). The sum over k follows from the existence of six different frequencies associated to the same mode number n [65]. Employing this ansatz, the Euler-Lagrange equations (3.4) become the matrix equation (3.6). The existence of non-trivial solutions to (3.6) requires det M = 0, which provides the characteristic equation (3.8) for the frequencies. This equation has six solutions, as we commented above, and reduces to the one obtained in [43] in the limit of pure R-R flux. We should note that two frequencies corresponding to decoupled massless modes arise as solutions to the characteristic equation. It is possible to prove that their contribution to the one-loop correction to the energy is cancelled by the contribution of the ghost fields, which emerge as a consequence of the conformal gauge-fixing condition [44]. Accordingly, we can safely ignore them. In principle, we can find the remaining solutions to equation (3.8) as a series in inverse powers of the total angular momentum J. Since the expressions thus obtained are not very enlightening, we focus exclusively on the two regimes presented at the end of the previous section. In the su(2) sector (m_1 = −m_2 = m) the characteristic frequencies, written as a series in Υ, are given in (3.9); in the pure NS-NS limit they take a simpler form, and in the overlap of both regimes the frequencies can be written in closed form.

Bosonic fluctuations on AdS_3

We proceed analogously for the AdS_3 fluctuations, with an analogous parameterization leading to the Lagrangian density for the fluctuation fields. This Lagrangian has to be supplemented with an orthogonality constraint between the background and the fluctuation fields similar to (3.3), which in this case reads z̃_0 = 0. As a consequence, the field χ_0 decouples, leading to two massless excitations. Again, the massless contributions cancel against the conformal ghost contributions, and consequently χ_0 can be ignored in view of the discussion above. The relevant Euler-Lagrange equations then follow. An expansion analogous to (3.5) allows us to derive the characteristic equation for the frequencies, in this case (3.15), whose solutions are (3.16). All of them reduce to already known results in the non-deformed case [32,43]. Furthermore, the pure NS-NS limit allows us to complete squares, with uncorrelated signs.

Bosonic fluctuations on T^4

Since we consider no classical dynamics on the torus, we are led to a free Lagrangian for the fluctuations.
Therefore, the characteristic frequencies are ω_n = ±n,⁵ with one pair of solutions for each of the four coordinates.

⁵In the T^4 space one could also consider fluctuations with non-trivial windings. However, after averaging over these windings, one is left only with the contribution from the zero-winding sectors. We want to thank Tristan McLoughlin for pointing out this issue to us.

Fermionic fluctuations

As our background solution is purely bosonic, the Lagrangian for the fermionic fluctuations reduces to the usual fermionic Lagrangian computed up to quadratic order in the fermionic fields. For a type IIB theory, the latter is given by the standard quadratic action, with the covariant derivative of [12,14]. We refer to appendix A for definitions and conventions. We get the characteristic equation for the frequencies by substituting the classical solution, fixing the kappa symmetry and expanding the fermions in Fourier modes. In particular, the kappa gauge condition we choose is θ_1 = θ_2 ≡ θ, as in [25,32]. In the Fourier expansion we sum over up to eight frequencies for each mode, instead of the sixteen frequencies that would be expected from the number of degrees of freedom [66]. We are allowed to do so because only six of the ten target-space coordinates are non-trivially involved in the equations of motion, and hence we can restrict ourselves to six-dimensional gamma matrices. To recover the full set of frequencies we have to double the multiplicity of each frequency ω_{k,n}. We should remark that we have imposed periodic boundary conditions on the fermionic fields, as in [43,67,68], relying on the discussion in appendix E of [69]. Imposing the vanishing of the determinant of the differential operator in (3.19), the resulting characteristic equation, with κ² = 1 − q², is a polynomial of eighth degree, which agrees with the discussion above. Solving this equation, we find the frequencies (3.23) for the su(2) sector. We stress that the massless frequencies remain massless independently of the mixing parameter, cancelling the contributions from the T^4 modes for all values of q. Performing the q → 1 limit manifestly simplifies the expressions in (3.23),⁶ where we have used that lim_{q→1} w_0 = Υ.

4 Frequencies from the algebraic curve

In order to check the computations performed in the previous section, we rederive the frequencies associated to the fluctuations using a different method that relies on the integrability of our problem: the semi-classical quantization of the classical algebraic curve. We start by constructing the eigenvalues of the monodromy matrix for the classical solution, whose associated quasi-momenta define a Riemann surface. This classical setting is quantized by adding infinitesimal cuts to this surface, thus modifying the analytical properties of the quasi-momenta, which contain the one-loop correction to the energy. The Riemann surface we are interested in presents only one cut, analogously to the one studied for the AdS_5 × S^5 scenario in [69]. Although the presence of the NS-NS flux deforms the construction of the algebraic curve, hence rendering the results from AdS_5 × S^5 inapplicable, the procedure remains mostly the same and can be used as a guideline. For the full detailed derivation of the finite-gap equations for general values of q we refer to [52], where the authors apply them to the particular case of the BMN string.
Although this procedure neglects the contribution from the massless excitations, we already know from the previous section that their contributions cancel each other. We start the computation of the classical algebraic curve by choosing the gauge g_L ⊕ g_R for the coset representative; in this limit we can find and solve the equation beyond the su(2) sector for the remainder.⁷ Our quasi-momenta for the rigid spinning string are the same as those obtained there after replacing their Ω by our Υ. Using the normalization from [52],⁸ the associated Lax connection can be written down explicitly. Notice that the usual ±1 poles of the Lax connection shift due to the presence of both fluxes, appearing now at ±s and ±1/s, where s = s(q) is a flux-dependent constant. The quasi-momenta associated to this Lax connection, obtained from the logarithm of the eigenvalues of the monodromy matrix, are given in (4.4)-(4.6), where K(x) = m²x²κ² + 2qκm²x + Υ².

⁷Notice that the choice of representative in [52] was not correct. Even though a_i² = J_i/w_i holds for the undeformed case, this relation gets modified due to the flux, becoming the one shown in equation (2.18). In any case, this problem does not affect the computation of the correction to the BMN spectrum therein.

In order to find the one-loop correction to the energy we add extra cuts to the Riemann surface. These cuts are infinitesimally small and appear as poles in the quasi-momenta. Depending on which sheets of the Riemann surface the infinitesimal cuts connect, they correspond to different kinds of excitations; the precise relations between them are given in (4.7). The explicit residues of these poles are written in terms of the functions α̂(x) and α̌(x), where X is either A or S depending on the sheet on which the cut ends. To fix the positions where we have to add these poles we have to use the relation (4.10) between the quasi-momenta above and below the branch cuts of the Riemann surface C_ij, where i and j (comprising both A or S and 1 or 2) label the sheets the cut connects. This equation not only constrains the positions of the poles x^{ij}_n, but also the behaviour of the corrections to the quasi-momenta on them. On top of that, the quasi-momenta also present poles at the same points as the Lax connection; see (4.11). These restrictions provide us with enough information to completely fix the corrections to the quasi-momenta. The details of the construction are lengthy, so we have relegated them to appendix B. We eventually obtain an expression for the correction to the dispersion relation in which we still have to implement the expressions for each pole. We can check that in the q → 0 limit we recover the undeformed AdS_3 × S^3 ⊂ AdS_5 × S^5 expressions [25]. Note that all −2qn terms in the expressions from appendix B have been replaced by −qn in (4.14). The values of the poles are determined by equation (4.10), which can be solved as a series in Υ⁻¹ for general values of q. Nevertheless, when taking the q → 1 limit, those equations simplify considerably, and we can find exact solutions after an appropriate regularization of the poles with κ factors. The solutions for general q are collected in appendix C. Here we just write down the solutions for q = 1, (4.15); the limit is well behaved despite the apparent singularities that appear. Plugging back into the aforementioned equation and using the definitions (4.14), we get (4.16). We end this section by comparing the expressions of the characteristic frequencies obtained through both methods and discussing their differences. Here we focus on the su(2) sector in the pure NS-NS limit.
The comparison for general values of q is relegated to appendix C, but the arguments presented in the discussion below remain valid. When we collate equations (3.9), (3.16) and (3.23) with (4.14) and (4.16), we observe that they are equal up to some shifts. Such shifts fall into two categories, shifts of the mode number and shifts of the frequencies, and can be understood as a change of reference frame [69]. As our frequencies present the same shift structure, and the shifts at q = 1 cancel each other when summed, we confirm that both computations are in agreement. Therefore, we can extract the one-loop correction to the dispersion relation using the frequencies from either of the methods.

5 Computation of the one-loop correction

In this section we put together the characteristic frequencies from the previous sections to compute the one-loop shift to the dispersion relation in the su(2) sector. This correction is given by the signed sum (5.1) of the fluctuation frequencies, where ω^B_n and ω^F_n are the bosonic and fermionic contributions respectively. Firstly, we consider the pure NS-NS limit in the su(2) sector. Using the frequencies obtained from the quadratic fluctuations, these contributions are

    ω^B_n = 2n + (n + w_0) + (n − w_0) + 4n = 8n ,
    ω^F_n = 2 [2(n + w_0/2)] + 2 [2(n − w_0/2)] = 8n .

Let us focus now on the case of general mixed flux. Here the AdS_3 (5.4), torus, and fermionic (5.6) contributions take a relatively simple form, whereas the S^3 contribution (5.7) is more conveniently written after summing part of the series into a square root, by analogy with the results we have obtained from the algebraic curve. In order to perform the infinite sum over the mode number we use the method presented in [47], which consists in replacing the sum by an integral weighted with a cotangent function,

    2πi Σ_{n∈Z} ω_n = ∮_C dz π cot(πz) ω_z ,

where the contour C encircles the real axis. The frequencies related to the AdS space (5.4) and fermions (5.6) contain square roots with complex branch points. The same applies to the leading order of the frequencies related to S^3 (5.7). Choosing our branch cuts from each branch point to infinity allows us to deform the contour of the integral so that it encircles them. As the branch points are of order iΥ, we can consistently approximate the cotangent by one in the semiclassical limit, reducing the contour integral to a regular integral of a square root. In our case, (5.8) involves integrals of the form (5.9), where Λ is a sharp cut-off which regularizes them. The sum of these contributions gives

    Σ_{n∈Z} (ω^S_n + ω^{AdS}_n + ω^T_n − ω^F_n) = I(qΥ, κΥ) + I(−qΥ, κΥ) + I(qw_0, κw_0) + I(−qw_0, κw_0) .    (5.10)

A direct inspection shows that the quadratic and linear contributions of the regulator cancel, but the logarithmic contribution does not. This fact reflects that higher orders in the expansion in the mode number of the S^3 frequencies contribute to the cancellation. In order to check this statement we expand the characteristic frequencies of the S^3 modes in the regime of large mode number n, as in (5.11), where we have written down just the solutions with +n as the leading contribution. Since matching with the frequencies expanded in Υ is not direct, the labelling here is arbitrary. Taking this relation into account, we can check that, after replacing the first two contributions in (5.10) by the sum over n of the frequencies (5.11), the logarithmic divergence cancels. This proves that the one-loop correction to the dispersion relation is finite for all values of the mixing parameter.
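Since the bosonic and fermionic contributions quoted above cancel mode by mode in the pure NS-NS limit, the vanishing of the one-loop shift there follows in one line, independently of the overall normalization of the signed frequency sum:

    E_1 ∝ Σ_{n∈Z} (ω^B_n − ω^F_n) = Σ_{n∈Z} (8n − 8n) = 0 .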
6 A comment about non-rigid strings

In this section we provide a plausibility argument for the vanishing of the one-loop correction in the non-rigid case in the pure NS-NS limit. The general non-rigid solution (6.1) is written in terms of elliptic functions, where c_i and the elliptic parameter ν are constants that depend on ω_i, v_i and q, whose explicit expressions can be found in [56]. For our purposes it is enough to know that ν vanishes when q → 1. Another important feature of the solution (6.1) is that its functional form is the same as the one for non-rigid spinning strings in AdS_5 × S^5 [62]. The one-loop corrections for spinning folded and pulsating strings in AdS_5 × S^5 were computed in [49,50], where it was shown that the Euler-Lagrange equations of all the fluctuations can be rewritten as the eigenvalue problem of single-gap Lamé operators.

It is important to remark that our finite-gap computation does not take into account the massless fields, because the background field expansion showed that their net contribution vanishes. Including the massless contributions in the finite-gap equations would require a further extension of the procedure followed here. It would be desirable to generalize the method presented in [53] to deal with these massless excitations in the case of mixed flux. A natural generalization of our analysis would be the precise computation of the one-loop correction for non-rigid strings. Even though the vanishing of this correction in the q → 1 limit seems plausible, an explicit check is needed. In principle, the procedure used in [49,50] for AdS_5 × S^5 string theory could be generalized to this end. Semiclassical giant magnon solutions support this statement, as they can be obtained as a particular regime of general spinning strings. These solutions were studied in [37,59], where it was shown that they display a linear dispersion relation.⁹ Furthermore, in [37] it is argued that such a dispersion relation holds at each perturbative order for giant magnons understood as magnonic bound states, which in particular implies the vanishing of their one-loop correction up to corrections in the coupling constant h. The construction of the S-matrix for elementary magnonic excitations from symmetry considerations strongly supports this fact [14]. Equation (5.3) and the discussion of section 6 suggest that the vanishing of the one-loop correction in the pure NS-NS limit is not a characteristic of rigid spinning string solutions but might be a feature of general spinning strings. Proving this statement would shed light on the role of spinning semiclassical solutions in the mixed flux scenario and its q → 1 limit. Besides, it would be desirable to compare our results with the prediction from the string Bethe ansatz for the dressing phase. From this perspective there are two possible sources for the correction we have computed: quantum corrections to the classical integrable structure, and wrapping corrections coming from finite-size effects. The comparison was first performed in [67] for the AdS_5 × S^5 background and allowed the extraction of strong-coupling corrections to the dressing phase that were later proven to be in agreement with the predictions derived from crossing relations. The same comparison was performed for AdS_3 × S^3 × T^4 with pure R-R flux, but there the two computations did not match. The presence of massless modes, with no analogue in AdS_5 × S^5, is believed to underlie this disagreement, as wrapping contributions are exponentially suppressed by the mass of the excitations involved.¹⁰
On the other hand, wrapping corrections are absent in the pure NS-NS limit [40]. Therefore, in the limit of large angular momentum, we expect that some control over the massless wrapping corrections can be gained by means of the parameter q, helping to elucidate the disagreement. Finally, we want to point out that the spectrum of closed strings with zero winding and zero momentum on the torus in an F1/NS5-brane background supported with R-R moduli has been studied in [41]. It is also shown there that the mixed flux background studied here can be retrieved from the near-horizon geometry of such a scenario. Thus, our computation may also be relevant to the pure NS-NS theory at a generic point in its moduli space.

⁹Note that in [59] a more general magnonic dispersion relation has been derived. Nonetheless, semiclassical solutions can typically be mapped via the AdS/CFT duality in the J_i → ∞ limit, according to which the dispersion relation therein indeed becomes linear.

¹⁰Perturbative analyses around the BMN vacuum of the dressing phase in the R-R [34] and mixed flux [39] regimes have also led to discrepancies with the all-loop S-matrix predictions and the dispersion relation, both obtained from the underlying symmetries of the system, when dealing with massless excitations.

A Conventions

In this appendix we collect the conventions used throughout the article, in particular those concerning the fermionic Lagrangian (3.19). Firstly, we fix our index notation. We use Greek indices for the worldsheet coordinates, lower-case Latin indices for the ten-dimensional flat Minkowski spacetime, upper-case undotted indices for the target spacetime, and upper-case dotted indices to separate the 32-component Majorana-Weyl spinors in 10 dimensions into two 16-component spinors. The worldsheet coordinates, denoted τ and σ, are raised and lowered with the flat metric η_{αβ} = diag(−1, 1), and the associated Levi-Civita symbol is defined so that ε^{τσ} = 1. Flat Minkowskian indices take values in {0, ..., 9}, being raised and lowered with the flat metric η_{ab} = diag(−1, 1, ..., 1). In order to relate target-space coordinates and flat Minkowskian coordinates we construct the vielbeins E^a = E^a_A dX^A and the spin connection one-forms Ω^{ab} = Ω^{c,ab} E_c, the latter obtained from the former. For the AdS_3 × S^3 space the vielbeins and the spin connection one-forms take their standard form; the remaining components of the spin connection either can be obtained through Ω^{ab} = −Ω^{ba} or vanish. Both the vielbeins and the spin connection can be pulled back to the worldsheet using a solution of the equations of motion. When the constant-radii classical solution presented in section 2 is plugged in, the non-trivial pulled-back vielbeins are

    e^0 = κ dτ ,    e^4 = a_1 (ω_1 dτ + α'_1 dσ) ,    e^5 = a_2 (ω_2 dτ + α'_2 dσ) ,    (A.4)

together with the corresponding non-trivial pulled-back spin connection one-forms. H_{abc} and F_{abc} refer to the Minkowskian components of the Neveu-Schwarz-Neveu-Schwarz and Ramond-Ramond three-form fluxes respectively, which enter through [14]

    /H_a = 2q [ /E_a (Γ^{012} + Γ^{345}) + (Γ^{012} + Γ^{345}) /E_a ] ,

where the slash denotes contraction with the gamma matrices. Integrability and conformal symmetry fix q² + κ² = 1 [35].

B Computation of the corrections to the quasi-momenta of the one-cut solution

We treat the AdS_3, the S^3 and the fermionic contributions separately to simplify the computation. This separation allows us to alleviate notation by dropping the sheet labels both on the poles x and on the numbers of excitations/cuts N_n.
B.1 Contribution from AdS_3 excitations

The classical algebraic curve presents a cut only on the sheets related to the sphere, making the computation of the AdS_3 modes equivalent to the computation of fluctuations around the BMN string solution. This computation has already been performed in [52], so we just quote their result here (without imposing the level-matching condition). Substituting the quasi-momenta (4.4) into the cut condition (4.10) relates the functions α̂ and α̌ to the mode number and the positions of the poles x̂ and x̌. Plugging them into the previous expression, we obtain the AdS_3 contribution.

B.2 Contribution from S^3 excitations

Adding infinitesimal cuts on the sheets related to S^3 has to take into account the existence of a branch cut in the classical algebraic curve. The presence of both cuts generates two kinds of corrections to the quasi-momenta, one coming from the poles associated to the infinitesimal cuts and another coming from shifts of the branch points of the cut due to the addition of these poles. Thus we have to split the ansatz for the corrections to the quasi-momenta into two contributions, the second of which is divided by the factor K(x). Using both the analytic properties of the corrections to the quasi-momenta on the cut (4.10) and the inversion symmetry of the algebraic curve, we can fix the rest of the quasi-momenta related to the sphere. The explicit form of these functions is obtained from the known pole structure of the corrections and their asymptotic properties. Equation (4.7) entails that the combinations δp̂^S_1 + δp̂^S_2 and δp̌^S_1 + δp̌^S_2 have no poles, hence we are free to choose f(x) = 0. On the other hand, δp̌^S_2 − δp̌^S_1 has a simple pole with residue 2α̌(x̌)Ň_n at x̌, and δp̂^S_2 − δp̂^S_1 has a simple pole with residue 2α̂(x̂)N̂_n at x̂. Furthermore, δp̂^S_i (respectively δp̌^S_i) have poles at −s and 1/s (respectively s and −1/s), whose residues are correlated with those of the δp̂^A_i (respectively δp̌^A_i) quasi-momenta at the same points; see (4.11). As a consequence, we can write down an ansatz for the function g(x), where a and δa_i are unknown constants. As we mentioned in section 4, our conventions for the definition of the Lax connection allow us to relate it to the Noether currents of the system for large values of the spectral parameter in a simple way. Therefore, the asymptotic behaviour of the quasi-momenta can be related to conserved global charges; this fixes part of the unknown constants for the excitations we are considering. Besides, the behaviour around zero provides two conditions that fix the remaining two unknown constants. To simplify our results we write the residues α̂ and α̌ in terms of the poles, using equation (4.10) and the classical values of the quasi-momenta (4.5) and (4.6), as in (B.13) and (B.14). The O(1) term of (B.12) and its O(x) term then give two further relations. Now that we know the residues at −s and 1/s of the quasi-momenta associated to the sphere, we can compute the correction to, for example, the p̂^A_2 quasi-momentum, as δΔ is computed from its asymptotic behaviour. Using that K(−s) = K(1/s) = w_0, we can write down the correction and thus obtain the S^3 contribution.

B.3 Contribution from fermionic excitations

As opposed to the previous cases, we have to distinguish between the fermionic contributions N^{AS} and N^{SA}, both hatted and checked. Nevertheless, equations (4.4), (4.5) and (4.6) imply that the differences of classical quasi-momenta are equal two by two; hence both kinds of poles are equal in pairs, x̂^{AS} = x̂^{SA} and x̌^{AS} = x̌^{SA}.
As we also have to deal with the cut of the classical solution, we can use an ansatz analogous to the one we used for the S$^3$ modes (B.5), but with different functions $f(x)$ and $g(x)$ that reflect the pole structure of the fermionic fluctuations (4.7). The ansätze for such functions are written in terms of $2\check{n} \equiv \check{N}_{AS} - \check{N}_{SA}$ and $2\check{N} \equiv \check{N}_{AS} + \check{N}_{SA}$, with similar expressions for the hatted quantities. The comparison between fermionic frequencies is not so immediate, since it requires a shift of the mode number $n$. In fact, the combination $\hat{\Omega}^F_{n - q w_0} + \frac{w_0}{2}$ is identical to $\omega_{5,n}$ and $\omega_{7,n}$ from equation (3.23). A similar relation relates $\check{\Omega}^F_n$ with $\omega_{6,n}$ and $\omega_{8,n}$, with an extra contribution from the winding. The comparison of the sphere frequencies is more involved, since we do not have a closed expression for general values of $q$. Instead we can compare them at the level of the characteristic equation. From equation (3.8) and equations (B.13) and (B.14) we infer that $\hat{\Omega}^S_n = -\omega^S_n$ and $\check{\Omega}^S_{n-2m} = -\omega^S_n$. Using the reality condition for the fluctuation fields, $\omega^S_n = -\omega^S_{-n}$, these relations can be rewritten as $\hat{\Omega}^S_{-n} = \omega^S_n$ and $\check{\Omega}^S_{-n-2m} = \omega^S_n$.
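The last rewriting is a one-line consequence of the reality condition; spelled out for the hatted frequencies,
$$\hat{\Omega}^S_{-n} = -\omega^S_{-n} = \omega^S_n\,,$$
and replacing $n \to -n$ in $\check{\Omega}^S_{n-2m} = -\omega^S_n$ gives $\check{\Omega}^S_{-n-2m} = -\omega^S_{-n} = \omega^S_n$ in the same way.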
2018-07-27T11:06:49.000Z
2018-04-27T00:00:00.000
{ "year": 2018, "sha1": "5aa38bb7439370bb91f05f28185f8d86bf88330f", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP07(2018)141.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "5aa38bb7439370bb91f05f28185f8d86bf88330f", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
225321500
pes2o/s2orc
v3-fos-license
Structural and Magnetic Properties of NiZn Ferrite Nanoparticles Synthesized by a Thermal Decomposition Method: Ni1−xZnxFe2O4 (x = 0.5, 0.6, 0.7) nanoparticles were synthesized by a thermal decomposition method. The synthesized particles were identified as pure spinel ferrite structures by X-ray diffraction analysis, and they were calculated to be 46–51 nm in diameter by the Scherrer equation, depending on the composition. In the FE-SEM image, the ferrite nanoparticles have spherical shapes with slight agglomeration, and the particle size is about 50 nm, which is consistent with the value obtained by the Scherrer equation. The lattice parameter of the ferrite nanoparticles monotonically increased from 8.340 to 8.358 Å as the Zn concentration increased from 0.5 to 0.7. The saturation magnetization value initially decreases slowly from 83.97 to 81.44 emu/g, then quickly decreases to 71.84 emu/g, as the zinc content increases from x = 0.5, through 0.6, to 0.7. Ni1−xZnxFe2O4 toroidal samples were prepared by sintering the ferrite nanoparticles at 1250 °C and exhibited faceted grain morphologies in the FE-SEM images, with grain sizes around 5 µm regardless of the zinc content. The real magnetic permeability (µ′) of the toroidal samples measured at 5 MHz increased monotonically from 106, through 150, to 217 with increasing zinc content from x = 0.5, through 0.6, to 0.7. The cutoff frequency of the ferrite toroidal samples was estimated to be about 20 MHz from the broad maximum in the plot of the imaginary magnetic permeability (µ″) vs. frequency, which seemed to be associated with domain wall resonance.

Introduction

Spinel ferrite nanoparticles such as NiZn and MnZn have gained much attention for several applications [1][2][3][4][5], including high-frequency circuits, the cores of radiofrequency (RF) transformers, inductors, antennas and radar-absorbing materials, based on their high resistivity and low loss at high frequency. They also have great potential as efficient catalysts and/or catalyst supports for decomposing organic or inorganic pollutants [6,7]. NiZn ferrite nanoparticles are becoming more and more important in the field of biomedical applications, for instance magnetic resonance imaging (MRI), drug delivery systems and hyperthermia treatment of cancer, owing to their appropriate magnetic properties, antimicrobial activity and biological compatibility [8,9].

Figure 1 shows the spinel structure (the lattice parameter is about 0.84 nm), which is formed by 24 cations (Fe2+, Zn2+, Co2+, Mn2+, Ni2+, Mg2+, Gd2+) and 32 O2− anions and generally has the chemical form AB2O4, designated as a cubic close-packing of O2− ions [10][11][12][13]. The round and the square brackets represent the tetrahedral interstitial A site and the larger octahedral interstitial B site, respectively. Zinc ferrite is called a normal spinel structure of the form ZnFe2O4, where Zn2+ ions occupy the tetrahedral sites and Fe3+ ions occupy the octahedral sites [13]. Nickel ferrite is called an inverse spinel structure of the form FeNiFeO4, where Ni2+ ions occupy the octahedral sites, half the Fe3+ ions occupy the tetrahedral sites and the other half occupy the octahedral sites [10]. NiZn ferrite is a kind of solid solution composed of Ni ferrite and Zn ferrite, which can be expressed in the form ZnxFe(1−x)Ni(1−x)Fe(1+x)O4.
The magnetic properties of the NiZn ferrite are strongly dependent on the number of 3d unpaired electrons of the transition metals. The Zn2+ ions do not have any unpaired electrons (or Bohr magnetons) in the 3d orbital, while Ni2+ ions have two unpaired electrons and Fe3+ ions have five unpaired electrons in the 3d orbital. Therefore, the appropriate loading of Ni2+ ions, which have unpaired electrons, into the crystal structure of ZnFe2O4 provides much better magnetic properties, e.g., higher saturation magnetization (Ms) and lower coercivity (Hc) [14,15], because the addition of transition metal ions leads to the movement of some Fe3+ ions from the octahedral sites to the tetrahedral sites, and the unequal distribution of Fe3+ ions between the two lattice sites effectively produces a surplus of unpaired electrons, which generates and increases the magnetism [16].

In general, with a reduction in particle size, the coercive force gradually increases until it reaches a maximum value, but if the size of the particle is reduced to a critical size, the thermal effect becomes even more intense and produces superparamagnetic characteristics, in which each particle becomes a single domain with not only very low magnetic loss but also very high magnetic permeability [17,18]. Moreover, superparamagnetic particles have zero coercive force and high magnetization, so the control of the particle size is very important because the properties of the nanocrystals strongly depend upon it.

The properties of NiZn ferrite nanoparticles are greatly sensitive to the synthesis method and its preparation conditions. There are many methods for the synthesis of ferrite nanoparticles, including sol-gel, co-precipitation, high-energy ball milling and thermal decomposition [19][20][21][22][23].
The thermal decomposition method has advantages over other methods in synthesizing uniform, fine nanoparticles with high crystallinity. It is also easy to control the particle size and particle size distribution with this method [24,25]. The morphology and composition of magnetic nanoparticles have a great influence on their magnetic properties, not only in the particles themselves but also in bulk samples after sintering. When the saturation magnetization of magnetic nanoparticles increases, the initial permeability generally also increases according to the relationship µi′ = MsD/K1, where K1 = MsHc/0.96 is the crystalline anisotropy constant, Ms is the saturation magnetization, Hc is the coercivity and D is the average grain size [26,27]. However, there are few studies available in the literature showing the relationship that the permeabilities of sintered magnetic samples are inversely proportional to their magnetization value, and explaining the relationship in detail. Therefore, it seems beneficial to study NiZn ferrite systems showing these magnetic behavioral relationships in order to understand and improve magnetic properties.

In this paper, we synthesized Ni1−xZnxFe2O4 (x = 0.5, 0.6, 0.7) ferrite nanoparticles by the thermal decomposition method and examined the influence of the Ni/Zn ratio on the crystal structures, microstructures and magnetic properties of the prepared NiZn ferrites using an X-ray diffractometer (XRD), a field emission scanning electron microscope (FE-SEM), an impedance analyzer and a vibrating sample magnetometer (VSM).
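As a quick numerical illustration of the µi′ = MsD/K1 relationship quoted above: since K1 = MsHc/0.96, the saturation magnetization cancels and µi′ = 0.96D/Hc, so initial permeability rises with grain size and falls with coercivity. The sketch below uses the coercivities reported later in this paper; the grain size and the proportionality reading are illustrative, not a reproduction of the authors' calculation.

```python
# Minimal sketch of the initial-permeability relation quoted in the text:
#   K1   = Ms * Hc / 0.96                (crystalline anisotropy constant)
#   mu_i = Ms * D / K1 = 0.96 * D / Hc   (Ms cancels)
# Units are mixed, so read the result as a proportionality, not an absolute value.

def initial_permeability(D, Hc):
    """mu_i' = 0.96 * D / Hc (arbitrary units)."""
    return 0.96 * D / Hc

D = 5.0                                  # average grain size (um) from the FE-SEM images
coercivities = {0.5: 18.54, 0.7: 16.48}  # Hc (Oe) reported for x = 0.5 and x = 0.7

for x, Hc in coercivities.items():
    print(f"x = {x}: mu_i' ~ {initial_permeability(D, Hc):.3f} (arb. units)")
```

As expected from the formula, the lower coercivity at higher Zn content goes together with the higher measured permeability.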
Materials and Methods

Ni1−xZnxFe2O4 (x = 0.5, 0.6, 0.7) nanoparticles were synthesized by a thermal decomposition method, as shown in Figure 2. The raw materials were nickel(II) acetylacetonate (97%), zinc acetylacetonate hydrate (95%) and iron(III) acetylacetonate (97%). Oleic acid (90%) and oleylamine (70%) were used as surfactants, 1,2-hexadecanediol (90%) as a reducing agent, and benzyl ether (98%) as a solvent. The Ni, Zn and Fe acetylacetonate precursors were weighed according to the chemical formula Ni1−xZnxFe2O4 (x = 0.5, 0.6, 0.7) and poured into a 500 mL three-neck flask filled with a mixed solution of 6 mmol oleic acid, 6 mmol oleylamine, 1,2-hexadecanediol (HDD) and 20 mL benzyl ether. The precursor solution was heated to 200 °C for 1 h, then further heated to 300 °C and kept there for 1 h; the process was carried out in a N2 atmosphere with refluxing using cooling water. The solution that finished the reaction was cooled to room temperature, centrifuged for 30 min at a speed of 4000 rpm and washed repeatedly with hexane and ethanol in order to separate organic remains from the ferrite particles. Hexane was used to dissolve fatty acids such as oleic acid and oleylamine. Finally, the Ni1−xZnxFe2O4 (x = 0.5, 0.6, 0.7) nanoparticles were collected and dried at 100 °C for 24 h. Toroidal samples of the NiZn ferrites were fabricated by uniaxial pressing.
The ferrite powder was mixed with a PVA (polyvinyl alcohol, 5 wt %) binder solution, which was poured into a toroid-shaped mold and pressed at 1 ton/cm² to make green toroidal samples. The green toroidal samples were burned out at 650 °C for 30 min in air and then sintered at 1250 °C for 2 h. The phase purity of the NiZn ferrite nanoparticles was investigated with an X-ray diffractometer (XRD: D/max 2200 V/PC, Rigaku Co., Akishima, Japan). The microstructures of the sintered toroidal ferrite samples were observed with a scanning electron microscope (SEM, JSM 6700F, JEOL, Japan). The magnetic permeability (µ′, µ″) was measured from 1 MHz to 1 GHz with an impedance analyzer (E4991A). The saturation magnetization of the ferrite particles was measured with a vibrating sample magnetometer (VSM).

Results

Figure 3a-c shows the X-ray diffraction patterns of the Ni1−xZnxFe2O4 (x = 0.5, 0.6, 0.7) nanoparticles, which indicate a pure spinel structure for all three ferrites without any second phases, regardless of the composition. Figure 4 shows the dependence of the lattice parameter of the ferrite nanoparticles on the composition, calculated with the standard relation for a cubic lattice [28]. The lattice parameter increases monotonically from 8.340 Å to 8.358 Å with increasing Zn concentration from x = 0.5 to 0.7, which is due to the fact that the ionic radius of the Zn2+ ion (0.74 Å) is larger than that of the Ni2+ ion (0.69 Å). This result means that some Zn ions appear to be incorporated into the Ni sites, considering that the larger Zn2+ ions prefer the tetrahedral A-sites while the smaller Ni2+ ions prefer the octahedral B-sites in the crystalline spinel structure. The lattice volume of Ni1−xZnxFe2O4 also increased with increasing Zn concentration, which is reasonable considering the difference in the ionic radii. Figure 5 shows the dependence of the crystallite size of the ferrite nanoparticles on the Zn concentration, in which the average size (D) of the crystallites was calculated from the (311) peak using Scherrer's equation, D = kλ/(β cos θ), where the constant k = 0.89, λ = 1.5406 Å, and β is the FWHM of the (311) diffraction peak. Regardless of the Zn concentration, the crystallite size was in the range of 46 to 51 nm, with no significant differences. Figure 6 shows the hysteresis curves of the NiZn ferrite nanoparticles at room temperature. As the Zn ion concentration increases from x = 0.5 to 0.7 in Ni1−xZnxFe2O4, the saturation magnetization and the coercivity decrease from 83 to 71 emu/g and from 18.54 to 16.48 Oe, respectively, which is considered high compared to some other results. The magnetic properties of Ni1−xZnxFe2O4 depend on its chemical composition, grain size, preparation method, and the arrangement of cations between the two interstitial sites [29]. A ferrite under an applied field has two magnetization processes: domain wall motion and magnetization rotation within domains. In general, domain wall motion is sensitive to extrinsic material features such as grain size and grain-boundary structure, the presence of inclusions or pores within the grains, impurity levels and stresses. As shown in Figure 7, the Ni1−xZnxFe2O4 particles are considered single-domain particles, taking into account the fact that the crystallite size obtained from XRD is almost the same as the particle size obtained from the FE-SEM image. Therefore, it appears that the high saturation magnetization is mainly due to the arrangement of cations on the sub-lattices A and B.
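The two XRD relations used above are easy to script. The following sketch implements the cubic lattice parameter a = d√(h² + k² + l²) (with d from Bragg's law) and Scherrer's equation with the constants quoted in the text (k = 0.89, λ = 1.5406 Å); the peak position and FWHM below are hypothetical inputs for illustration, not the paper's measured values.

```python
# Sketch of the XRD analysis described above: cubic lattice parameter from the
# (311) reflection and crystallite size from Scherrer's equation.
import math

K, WAVELENGTH = 0.89, 1.5406  # Scherrer constant and Cu K-alpha wavelength (Angstrom)

def lattice_parameter(two_theta_deg, h, k, l):
    """a = d * sqrt(h^2 + k^2 + l^2), with d from Bragg's law (first order)."""
    theta = math.radians(two_theta_deg / 2)
    d = WAVELENGTH / (2 * math.sin(theta))
    return d * math.sqrt(h**2 + k**2 + l**2)

def scherrer_size_nm(two_theta_deg, fwhm_deg):
    """D = K * lambda / (beta * cos(theta)), returned in nm."""
    theta = math.radians(two_theta_deg / 2)
    beta = math.radians(fwhm_deg)  # FWHM converted to radians
    return K * WAVELENGTH / (beta * math.cos(theta)) / 10.0

two_theta, fwhm = 35.5, 0.18  # hypothetical (311) peak position and FWHM (degrees)
print(f"a = {lattice_parameter(two_theta, 3, 1, 1):.3f} Angstrom")
print(f"D = {scherrer_size_nm(two_theta, fwhm):.1f} nm")
```

With these placeholder inputs the script returns a ≈ 8.38 Å and D ≈ 46 nm, i.e. the same order as the 8.34–8.358 Å and 46–51 nm ranges reported above.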
The decrease in the saturation magnetization with Zn ion content is consistent with the report that the Curie temperature, i.e., the transition temperature between the paramagnetic and ferromagnetic phases, decreases rapidly with increasing nonmagnetic Zn content in NiZn ferrite [29].

The saturation magnetization of a ferrite is proportional to the difference in sub-lattice magnetization associated with the octahedral and tetrahedral sites. In a normal spinel, the tetrahedral sites are occupied by divalent cations, while in an inverse spinel these sites are filled by trivalent cations. ZnFe2O4 is reported as a normal spinel, but NiFe2O4 is an inverse spinel. The combination of ZnFe2O4 and NiFe2O4 forms the solid solution Ni1−xZnxFe2O4. With increasing Zn content, Fe3+ ions in the tetrahedral sites are replaced by Zn2+ cations, and Fe3+ ions fill the octahedral sites emptied by Ni2+. The Zn2+ ion does not contain any unpaired electrons; however, the Ni2+ ion has two and the Fe3+ ion has five unpaired electrons. The following convention has been adopted to define the location of cations in the octahedral and tetrahedral sites of the ferrite: $\mathrm{Zn^{2+}(Fe^{3+}_2)O_4}$ for the normal spinel and $\mathrm{Fe^{3+}(Ni^{2+}Fe^{3+})O_4}$ for the inverse spinel, where the ions on the octahedral sites are enclosed in brackets. Ferric ions may occupy tetrahedral or octahedral sites depending on the other cations present. When non-magnetic Zn ions are incorporated into the NiFe2O4 lattice, they have a stronger affinity for the tetrahedral site than the ferric ions and thus reduce the amount of Fe3+ ions on the tetrahedral A-site. The net magnetic moment due to the number of unpaired electrons in nickel zinc ferrite is proportional to Equation (2). The Zn2+ ions have no unpaired electrons, while the Ni2+ ions have two unpaired electrons and the Fe3+ ions have five unpaired electrons [30]. Saturation magnetization is thus considered to increase with increasing Zn content according to Equations (1) and (2), sketched below, where y is the mole fraction of Zn2+ ions.
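Equations (1) and (2) themselves did not survive extraction. Under the cation distribution ZnyFe(1−y)[Ni(1−y)Fe(1+y)]O4 quoted in the introduction and the unpaired-electron counts just given (Zn2+: 0, Ni2+: 2, Fe3+: 5), the two-sublattice (Néel) moment per formula unit presumably takes the form sketched here; this is a reconstruction, not a verbatim quote:
$$M_A = 5(1-y)\,\mu_B\,, \qquad M_B = \big[\,2(1-y) + 5(1+y)\,\big]\mu_B\,, \qquad n_B = M_B - M_A = (2 + 8y)\,\mu_B\,,$$
which indeed increases with the Zn fraction y, in line with the expectation stated above.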
The theoretical value of this magnetization is, however, inconsistent with the magnetization data, which indicate a decreasing trend with increasing Zn content. Indeed, it has been reported that the magnetization of NiZn ferrite decreases as the Zn content increases above a 0.5 mole fraction [31]. At high levels of zinc substitution, the Fe3+ ions in the tetrahedral sites are so diluted that the super-exchange interaction between the tetrahedral and octahedral sites is lost, and the saturation magnetization drops. The Curie temperature declines abruptly and eventually drops below room temperature with increasing Zn content, so that the magnetic properties disappear at this temperature.

Figure 8a-c shows the X-ray diffraction patterns of the Ni1−xZnxFe2O4 (x = 0.5, 0.6, 0.7) samples sintered at 1250 °C. There are no second phases in these X-ray diffraction patterns, which means that the sintered samples have the same pure spinel structure regardless of their composition. Figure 9a-c displays FE-SEM images of the Ni1−xZnxFe2O4 (x = 0.5, 0.6, 0.7) samples sintered at 1250 °C. Note that the microstructures become denser with increasing Zn concentration in the Ni1−xZnxFe2O4 ferrite. This behavior is consistent with the finding that Zn ions not only lower the sintering temperature, but also play a crucial role in increasing the sintering density. Meanwhile, it is interesting to note that although the grain size is fairly uniform at about 5 µm, the grain shape is faceted. Figure 10a,b shows the real and imaginary parts of the magnetic permeability of the Ni1−xZnxFe2O4 (x = 0.5, 0.6, 0.7) toroidal samples in the frequency range from 1 MHz to 1 GHz. It is clear from Figure 10 that the real permeability (µ′) at 5 MHz increases monotonically from µ′ = 106 to 217 with increasing Zn content from x = 0.5 to 0.7 in Ni1−xZnxFe2O4.
The value of µ′ for all three toroidal samples remains constant up to 10 MHz and increases slightly with a further increase in frequency. The imaginary part of the permeability (µ″) increases with increasing frequency, and a broad maximum is observed at 20 MHz, where µ′ decreases rapidly. This characteristic is a consequence of domain wall resonance, which consists of relaxation- and resonance-type dispersions. Hence, in the low-frequency domain, the complex permeability spectrum of domain wall motion can be treated as the superposition of resonance and relaxation dispersions [30]. The broad maximum in µ″ is the result of the overlapping of the domain wall motion (DWM) resonance and the spin resonance [32][33][34][35]. The contribution of domain wall motion to the complex permeability spectrum decreases gradually with increasing frequency, and only the spin rotational component remains dominant at higher frequencies [36]. At frequencies below 500 kHz, the chief magnetizing mechanism of a ferrite is DWM [37], while domain wall resonance happens in the range from 1 to 100 MHz and rotational resonance occurs above 1 GHz [32,33].
The frequency dependence of the permeability of the ferrite samples is connected to two types of magnetization mechanisms, domain wall motion and spin rotation, as sketched below; in that dispersion relation, χdw is the domain wall susceptibility, χsp is the intrinsic rotational susceptibility, Ku is the anisotropy constant, γ is the gyromagnetic ratio, Ms is the saturation magnetization, and D is the grain diameter.
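The display equation itself was lost in extraction. A commonly used two-term dispersion model that is consistent with the variables listed above (a standard form from the ferrite literature, not necessarily the exact expression of the source) is
$$\mu(\omega) = 1 + \chi_{dw}(\omega) + \chi_{sp}(\omega)\,, \qquad \chi_{dw}(\omega) = \frac{\chi_{dw}^{0}\,\omega_{dw}^{2}}{\omega_{dw}^{2} - \omega^{2} + i\beta\omega}\,, \qquad \chi_{sp}(\omega) = \frac{\chi_{sp}^{0}\,\omega_{sp}}{\omega_{sp} + i\omega}\,,$$
where the domain-wall term is of resonance type (with damping β) and the spin term of relaxation type, matching the superposition of resonance and relaxation dispersions described above.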
With increasing Zn content, the µ′ of Ni1−xZnxFe2O4 (x = 0.5, 0.6, 0.7) increases, while the cut-off frequency decreases. The real part of the permeability (µ′) depends on various parameters such as stoichiometry, grain size, composition, porosity, impurity levels, saturation magnetization, magnetostriction, and crystal anisotropy [37]. Higher permeabilities are favored by a large grain size, low porosity, high saturation magnetization, low crystalline anisotropy, low magnetostriction and high purity. In particular, the real part of the permeability strongly depends on the grain size. As the Zn content increases, the crystalline anisotropy decreases, but the permeability increases, as shown in Table 1. Here, the crystalline anisotropy constant K1 was calculated using the relation [38] K1 = MsHc/0.96, where Ms is the saturation magnetization and Hc is the coercivity. The initial permeability varies inversely with the magneto-crystalline anisotropy constant, but proportionally to the grain size, according to the relationship [39] µi′ = MsD/K1, where D is the average grain size. Thus, the formula µi′ = 0.96D/Hc is obtained, which means that as the Zn concentration increases, the increase in permeability (µ′) is related to the increased grain size and reduced coercivity. This corresponds to the results shown in Table 1. In other words, an increase in µ′ as the Zn concentration increases can be attributed to an increase in the domain wall mobility promoted by the larger grains, as well as to the lower crystalline anisotropy, according to Equations (1) and (2).

Conclusions

Ni1−xZnxFe2O4 (x = 0.5, 0.6, 0.7) nanoparticles were successfully synthesized by a thermal decomposition method, and they were identified as pure spinel ferrite structures by X-ray diffraction analysis. The synthesized ferrite nanoparticles were calculated to be 46-51 nm in diameter by the Scherrer equation, which was consistent with the particle size of about 50 nm observed in the FE-SEM images. The lattice parameter of the ferrite nanoparticles monotonically increased as the Zn content increased. The synthesized ferrite nanoparticles showed relatively high saturation magnetization values of 71-83 emu/g, depending on their composition. Toroidal samples were prepared by sintering the ferrite nanoparticles at 1250 °C, and they exhibited a faceted grain morphology in the FE-SEM images, with a grain size of about 5 µm. The real magnetic permeability (µ′) of the toroidal samples measured at 5 MHz increased with increasing Zn content, reaching µ′ = 217 for x = 0.7. The cutoff frequency of the ferrite toroidal samples was about 20 MHz, which seemed to be associated with domain wall resonance.

Conflicts of Interest: The authors declare no conflict of interest.
2020-10-28T18:27:22.326Z
2020-09-09T00:00:00.000
{ "year": 2020, "sha1": "55286ad4b862994de933e2c5038cecd689cb0ca0", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/app10186279", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "5c152086a7d19bc8ed92e203518ea70c760d128f", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
233611168
pes2o/s2orc
v3-fos-license
Thermoluminescent dosimetry of panoramic radiography

This study aims to calibrate a thermoluminescence dosimeter (TLD) using a diagnostic radiation device and evaluate the dose of panoramic radiography. TLD-100s were calibrated using a solid-state dosimeter (Unfors Mult-O-Meter 512L; Unfors Instruments, Billdal, Sweden) and a diagnostic radiation device (HDT-500R; Hyun Dai Medical X-ray Co., Paju, Korea). Forty-eight TLDs were placed in 24 sites of a male head and neck phantom (ART-210; Radiology Support Devices Inc., Long Beach, CA, USA), and panoramic radiography was performed under exposure parameters of 70 kVp and 10 mA using a ProMax (Planmeca, Helsinki, Finland). Using the International Commission on Radiological Protection (ICRP) 2007 recommendation, the effective dose of panoramic radiography was calculated from the absorbed doses of the tissues at the 24 TLD sites in the head and neck phantom. The absorbed dose was highest in the parotid gland (right: 1854.4 µGy, left: 1788.9 µGy) and lowest in the anterior calvarium (3.8 µGy). The effective dose was calculated to be 28.4 µSv. The cancer and heritable risks were 1.56×10⁻⁶ and 5.67×10⁻⁸, respectively. The TLD was calibrated using a diagnostic radiographic device, and the panoramic radiographic dose was evaluated. The findings of this study could be helpful in future dose studies.

Introduction

Radiation is commonly known to be harmful. The effects of radiation are classified as deterministic and stochastic; deterministic effects are evaluated as equivalent doses (H_T) and stochastic effects as effective doses (E) [1,2]. In 1990, 2005, and 2007, the International Commission on Radiological Protection (ICRP) published the tissue weighting factors (W_T) required to calculate the effective dose [3][4][5]; currently, the ICRP 2007 recommendations are applied [6]. Since the effective dose represents the radiation hazard to the entire body, the risks of different radiographs can be directly compared. However, the effective dose does not reflect individual characteristics such as age, sex, and genetic radiosensitivity [7]. The effective dose of panoramic radiography ranges from 3.85 to 39 µSv, which indicates a significant variation [8,9]. Ludlow and Ivanovic [6] reported that the value of the dosimeter could vary by approximately 23%, depending on the collimator adjustment, unit calibration, and phantom position in the unit. Lee et al. [8] reported that the effective dose could vary depending on the gender of the phantom, the number and location of the thermoluminescent dosimeters (TLDs) used, and the exposure conditions, even for a similar panoramic device. Various methods using TLDs, optically stimulated luminescence dosimeters, solid-state dosimeters, dose area product, and ionization chambers can be used for dose studies [1,7,10]. The most widely used method, measuring doses by placing TLDs into anthropomorphic phantoms, requires significant time and effort, but it has been used as a basis for comparison in dose studies using other methods [6,7]. In dosimetry using TLDs, calibration of the TLD is crucial for obtaining a conversion factor between the response of the dosimeter and the absorbed dose (D_T), as well as for improving the homogeneity of the TLD response. Radiation sources using radioactive isotopes, such as Co-60 and Cs-137, have been used to irradiate TLDs.
Among X-ray sources, the medical linear accelerator has been used, while the use of diagnostic radiation devices has rarely been reported [11][12][13]. This study aims to calibrate the TLD using a diagnostic radiation device and evaluate the dose of panoramic radiography.

Materials and Methods

Thermoluminescence dosimeter calibration. A diagnostic radiation device (HDT-500R; Hyun Dai Medical X-ray Co., Paju, Korea) was used to expose the TLDs to radiation. The focus of the HDT-500R was adjusted, and one of the quartiles based on the central guide line within the field of view of the HDT-500R was selected (Fig. 1A). The irradiation area was 7 mm from the centerline of the horizontal and vertical lines and was selected for the TLD phantom, 104 mm wide and 88 mm long. The TLD phantom, with 108 holes for TLDs, was made of polymethyl methacrylate, which shows no significant difference in dose calculation [10]. The TLD phantom with TLDs in the irradiation area was exposed to radiation in the normal position and by flipping it horizontally, vertically, and both horizontally and vertically (Fig. 1C). Nine sites in the irradiation area were divided into four groups. As shown in Fig. 1, areas 1, 3, 7 and 9 (group A) are exposed in the four corner positions successively; areas 2 and 8 (group B) are exposed successively; areas 4 and 6 (group C) are exposed in the same position successively; and area 5 (group D) is exposed in the same position four times. The sum dose was obtained for each group, and its mean was the reference dose for calibrating the TLDs. TLDs with an error correction coefficient between 0.77 and 1.43 were selected and used.

Dosimetry of panoramic radiography. Panoramic radiography was performed 10 times under exposure parameters of 70 kVp and 10 mA, which are the normal settings for adult patients. Additionally, the background radiation was measured using two TLDs. After reading the TLDs and subtracting the average of the background radiation from the average absorbed dose of each site, the absorbed dose of each site was obtained by converting it to a single exposure. The absorbed dose of each tissue/organ for calculating the effective dose was calculated by averaging over the selected sites among the 24 TLD sites (Table 2). The bone surface dose is derived from the bone dose [14]. The equivalent dose is the absorbed dose multiplied by the radiation weighting factor, and X-rays have a radiation weighting factor of 1; therefore, the equivalent dose was equal to the absorbed dose. Moreover, the effective dose was calculated as the sum of the equivalent doses multiplied by the fraction of each tissue/organ exposed to radiation (fraction irradiated) and the tissue weighting factors.

Results

In the irradiation area, the dose measured by the solid-state dosimeter served as the reference (Table 4, Table 1). The equivalent doses of the tissues/organs required to calculate the effective dose are shown in Table 2.

Discussion

TLD-100 is widely used for dose research because of its excellent physical characteristics, such as homogeneity, reproducibility, and linearity, although it has some disadvantages, such as single-use readout, sensitivity to light, and susceptibility to moisture and dust [15,16].
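A schematic sketch (not the authors' code) of the effective-dose bookkeeping described in the Methods above: E = Σ_T w_T · f_T · H_T, with w_T the ICRP 103 tissue weighting factors, f_T the irradiated fractions, and H_T the measured equivalent doses. The tissue subset, doses, and most fractions below are illustrative placeholders; only the 9% skin and 37.5% lymph-node fractions and the quoted risks are taken from the text.

```python
# Schematic effective-dose calculation: E = sum_T w_T * f_T * H_T.
# Tissue weights are the ICRP 103 values; entries marked "hypothetical" are
# placeholders, not measurements from the paper.
ICRP103_W = {
    "salivary glands": 0.01,
    "thyroid": 0.04,
    "skin": 0.01,
    "bone surface": 0.01,
    "remainder": 0.12,
}

tissues = [  # (tissue, equivalent dose H_T in uSv, irradiated fraction f_T)
    ("salivary glands", 1800.0, 1.00),   # parotid region dominates the absorbed dose
    ("thyroid",           50.0, 1.00),   # hypothetical
    ("skin",             200.0, 0.09),   # head-and-neck skin taken as 9% (rule of nines)
    ("bone surface",     100.0, 0.17),   # hypothetical
    ("remainder",         80.0, 0.375),  # lymph nodes: 300 of the 800 lie in the neck
]

E_uSv = sum(ICRP103_W[name] * f * H for name, H, f in tissues)
print(f"effective dose ~ {E_uSv:.1f} uSv")

# The nominal ICRP 103 detriment coefficients (5.5e-2 /Sv for cancer,
# 0.2e-2 /Sv for heritable effects) reproduce the risks quoted in the
# abstract for E = 28.4 uSv.
E_Sv = 28.4e-6
print(f"cancer risk    ~ {E_Sv * 5.5e-2:.2e}")   # ~1.56e-06
print(f"heritable risk ~ {E_Sv * 0.2e-2:.2e}")   # ~5.68e-08
```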
For TLD calibration, Co-60, Cs-137, Ir-192, or a medical linear accelerator may be used to deliver the reference dose, and the actual radiation dose may differ from the planned dose by 0-6% depending on the irradiation method [11][12][13][16][17][18]. Because a diagnostic radiation device cannot deliver a planned dose, the dose measured using a solid-state dosimeter was taken as the reference dose. Additionally, to reduce differences in the dose distribution, the TLD phantom was irradiated repeatedly while being flipped horizontally and vertically (Fig. 1). Methods for irradiating TLDs uniformly using diagnostic radiation devices were devised and studied here, but uncertainty verification was not performed; considering this, follow-up research will be needed.

In the analysis of the absorbed doses (Table 1), the areas where ghost and double images can be formed are irradiated several times; hence, an understanding of panoramic geometry is required to analyze the absorbed dose of panoramic radiography. Ludlow and Ivanovic [6] calculated the skin and lymph node fractions of the head and neck as 5%. However, in this study, the head and neck skin fraction was taken as 9% according to the rule of nines for burns, and the lymph node fraction as 37.5%. This is because 300 of the 800 lymph nodes are located in the neck [19,20]. The effective dose in this study was 28.4 µSv, which differs from the 24.3 µSv obtained using the same equipment by Ludlow et al. [21]. The difference could be attributed to differences in the exposure parameters, the position of the phantom, the position of the TLDs, and the fraction ratios of the tissues/organs applied. In this study, the sum of the tissue weighting factors used in calculating the dose was 0.36; tissues/organs corresponding to the remaining 0.64 are missing (Table 2). The dose would be further reduced if the measured area were reduced. Lee et al. [8] reported that the effective dose of the entire body, including the head and neck, was higher than that calculated from the head and neck alone in a study using a whole-body phantom. Although dental radiation is used mainly in the head and neck area, a whole-body phantom is therefore suitable. In addition, dentists should be able to understand dose concepts and analyze dose-related reports, because they select the radiation equipment and determine how, and how many, radiographs should be taken.

Some researchers have calculated equivalent doses by multiplying by the fraction ratio of the tissue [6,8]. The equivalent dose then comes out smaller when only a fraction of the tissue is exposed to radiation (as for bone marrow, skin, and lymph nodes). Because equivalent doses relate to deterministic effects, the doses from the exposed areas should be applied as they are. The application of fraction ratios to the effective dose may also be controversial: it is difficult to expect radiation-induced cancer effects in areas other than the fingers when radiation irradiates only the fingers. In addition, the units of equivalent dose and effective dose are both Sv, which causes confusion. The concepts and methods for obtaining doses need to be studied and developed further. Therefore, there is a limit to evaluating the harmful effects of radiography using only the effective dose. The absorbed doses at the TLD locations should be reported, so that the results of a study can be re-interpreted by assessing its adequacy and applying dose-calculation methods that may vary over time. Because the method for radiation risk assessment is
2021-05-04T22:06:27.685Z
2021-03-31T00:00:00.000
{ "year": 2021, "sha1": "cf3944adb122dc0c8e6c83e86591db54f700eac0", "oa_license": "CCBYNC", "oa_url": "http://www.chosunobr.org/journal/download_pdf.php?doi=10.21851/obr.45.01.202103.22", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "a28d2d9ff10bbb0da739f5676f3334f65a92afbd", "s2fieldsofstudy": [ "Physics", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17456658
pes2o/s2orc
v3-fos-license
Chaos-Order Transition in Matrix Theory

Classical dynamics in SU(2) Matrix theory is investigated. A classical chaos-order transition is found. For sufficiently small angular momentum (even for a small coupling constant) the system exhibits chaotic behavior; for sufficiently large angular momentum the system is regular.

1 Introduction

Matrix theory [1] is a surprisingly simple quantum mechanical model that is able to describe some major properties of superstring theory. Therefore the model obviously deserves a thorough study. The calculation of physical quantities is reduced to the appropriate calculations in matrix quantum mechanics. A system of N Dirichlet zero-branes is described in terms of nine N × N Hermitian matrices $X_i$, i = 1, ..., 9, together with their fermionic superpartners. The action can be regarded as the dimensional reduction of ten-dimensional SU(N) supersymmetric Yang-Mills theory to (0 + 1) space-time dimensions, where $D_t = \partial_t + iA_0$. The action (1) was considered in the theory of eleven-dimensional supermembranes in [2,3,4] and in the dynamics of D-particles in [5,6,7]. In the original formulation [1] of the conjectured correspondence between M-theory and M(atrix) theory the large-N limit was assumed. A more recent formulation [8] deals with finite N. Over the last year Matrix theory has been the subject of numerous investigations; for reviews see, for example, [8,9,11].

Although the model (1) is relatively simple, its classical dynamics is in fact rather complicated. In this note we discuss the bosonic sector of (1), following the lines of our previous paper [12]. There we realized that, at least in some special cases, the solutions of the classical equations of motion are exponentially unstable, i.e. the system (2) is stochastic. The appearance of chaos in a classical system means that we cannot trust the ordinary semiclassical analysis of the corresponding quantum system. Let us note that α′ corrections to the action (1) induce a stabilization of the classical trajectories [13].

Here we shall confine ourselves to the simplest version of (1), which corresponds to the reduction of (2+1)-dimensional SU(2) Yang-Mills theory to (0+1). In the $A_0 = 0$ gauge we deal with eqs. (2) for i = 1, 2 and the Gauss law constraint. The high symmetry of the system allows one to reduce the dimension of the phase space. The three components of the Gauss law and one more first integral, which we denote by n, lead to a four-dimensional phase space and the Hamiltonian (4), where $p_f$ and $p_g$ are the momenta conjugate to f and g. In this paper we present an analytical and numerical study of the system (4). We will show that for n large enough the system is integrable and its motion is located in a compact region of configuration space. For n small enough the system exhibits chaotic behavior. Chaotic behavior is a typical feature of systems which one obtains as a long-wave approximation in field theory; see for example [14,15,16,17].

For a better understanding of the role of the n-dependent terms in (4), we start with the toy model governed by the Hamiltonian H with n = 0 and an infinite elastic reflecting wall parallel to the g-axis. This model exhibits a chaos-order transition. For the wall far enough from the origin, the motion is confined to the region g ≪ f, where it admits an analytical investigation [18]. We show that this model is integrable in this region.
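For reference, the display equations for (1) and (4) did not survive extraction. The bosonic part of the matrix-model action is the standard dimensional reduction, and a reconstruction of the reduced Hamiltonian consistent with the walls along f = ±g and with the asymptotic form used in Section 4 (an inference, not a verbatim quote from the source) reads
$$S = \int dt\, \mathrm{Tr}\left( \frac{1}{2}\,(D_t X^i)^2 + \frac{1}{4}\,[X^i, X^j]^2 \right), \qquad H = \frac{p_f^2 + p_g^2}{2} + \frac{n^2 (f^2 + g^2)}{2\,(f^2 - g^2)^2} + \frac{\lambda}{2}\, f^2 g^2\,,$$
with the second potential term producing the two reflecting walls along the f = ±g axes mentioned in Section 2, and reducing to $n^2/2f^2$ for g ≪ f.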
As the next approximation (especially valid for g ≪ f) we choose the slightly simplified version of (4) with the hyperbolic wall potential. We show numerically that it also exhibits a chaos-order transition, governed by a dimensionless parameter built from E, n and λ (the parameter ζ of Section 4), and give some analytical arguments in favor of this result. Note that the effect of the extra term $1/2f^2$ on the chaotic behavior of a two-dimensional system has been discussed in the recent paper [17]. All this reasoning forces us to conjecture that the Hamiltonian (4) also describes two phases, depending on the value of n. We compute the Poincaré sections for a number of characteristic values of n with the energy E and λ fixed. As expected, for small n one has the typical stochastic distribution of points, and for large n the points are distributed along regular lines.

The paper is organized as follows: in Section 2 we present our notation and recall the results of [12]. Section 3 is devoted to a toy model with an elastic reflecting wall. In Section 4 we discuss the model with the hyperbolic wall, and in Section 5 we present the results of the numerical calculations.

2 Notations

In this section we review the results of [12] concerning the appropriate parametrization of the configuration space. The Lagrangian admits two global continuous symmetries. The SU(2) rotations yield the conservation of the "angular momentum", while the U(1) rotations give one more first integral, which we denote by N. Hence, it is convenient to parametrize $X_1$ and $X_2$ as in (2.5), where f(t), g(t), θ(t) are real functions and U(t) is an SU(2) group element. The parametrization (2.5) can be justified as follows. The variables $X_1$ and $X_2$ can be treated as vectors in the internal isotopic space. At any time they can be rotated to lie in some coordinate plane, say the (2,3) plane, by using a U(t) ∈ SU(2); this fixes U(t) up to a rotation around the 1-axis. This remaining rotation can be used to impose a constraint, the orthogonality condition $(\Phi_2, \Phi_3) = 0$, which determines the rotation angle needed to fulfill (2.7). A pair of orthogonal vectors in the plane can be parametrized by two radii and one angle (phase), as in (2.8). Eqs. (2.8) plus the SU(2) rotation give the parametrization (2.5). Note that the U(1) angular momentum N just generates shifts in θ.

The main advantage of the coordinate system described above is that four of the six Lagrangian equations of motion are nothing but the Noether conservation laws (2.9). Taking into account the Gauss law, one gets from (2.9) the relations (2.10), where $\dot{U}U^{+} = l = \frac{i}{2}\sigma_j l_j$ and n = N. By substituting (2.10) into the Lagrangian equations for f and g, one gets equations (2.11) and (2.12). It is a matter of simple algebra to prove that eqs. (2.11) and (2.12) are the equations of motion following from the Lagrangian (2.13). Just this Lagrangian will be the subject of our subsequent analysis.

In the particular case n = 0 the Lagrangian (2.13) was the object of intensive study about fifteen years ago in the context of the long-wave approximation of Yang-Mills theory; this model will be referred to as the hyperbolic model. The dynamical system (2.13) has an additional potential term which produces two reflecting walls along the f = g and f = −g axes. The appearance of the reflecting walls can crucially change the behavior of the system. To demonstrate this, in the next section we start with a study of the hyperbolic model with an elastic reflecting wall.

3 Hyperbolic model with reflecting wall

It is well known that the hyperbolic model exhibits chaotic behavior [14,15].
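Since the displayed equations did not survive extraction, here is a sketch of the equations of motion (3.14)-(3.15) quoted next, as they would follow from the n = 0 hyperbolic Lagrangian $L = \frac{1}{2}(\dot{f}^2 + \dot{g}^2) - \frac{\lambda}{2} f^2 g^2$ (the overall conventions are an assumption):
$$\ddot{f} = -\lambda f g^2\,, \qquad \ddot{g} = -\lambda f^2 g\,.$$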
The equations of motion for this system are (3.14) and (3.15). As shown in [18], in the asymptotic regime where g ≪ f one can integrate this system by using the Bogolyubov-Krylov method [19], obtaining the solution (3.16)-(3.17). This solution is characterized by four parameters α, β, γ and $\phi_0$, which are related to the Cauchy initial data $f_0 \equiv f(0)$, $p_0 \equiv \dot{f}(0)$, $g_0 \equiv g(0)$, $q_0 \equiv \dot{g}(0)$. Note that the parameter α is the Ehrenfest adiabatic invariant
$$\alpha = \frac{\dot{f}^2 + \lambda f^2 g^2}{2f}\,. \qquad (3.20)$$
In the region f > 0, α is positive and therefore there exists a maximum of the coordinate f; $f_{max}$, being expressed in terms of the dynamical variables, is an integral of motion. Note that α and $f_{max}$ are approximate integrals of motion.

Let us put an elastic reflecting wall at f = l. This means that we consider equations (3.14) and (3.15) only for f ≥ l, with g arbitrary. We assume that the f-component of the momentum changes sign upon a collision with the wall while the g-component does not. In the case of an elastic reflecting wall located at f = l there is a maximal allowed value of g, $g_{max} = \frac{1}{l}\sqrt{\frac{2E}{\lambda}} \ll 1$. The characteristic parameter ξ = g/f in this case satisfies $\xi \leq \frac{1}{l^2}\sqrt{\frac{2E}{\lambda}}$. If one considers a trajectory starting from a point to the right of the wall with ξ ≪ 1, then the trajectory is described rather well by (3.16) and (3.17). After reflecting, the particle moves along the trajectory still given by (3.16) and (3.17), with new initial data. It is evident that the energy and the Ehrenfest invariant are conserved upon a collision of the particle with the elastic wall, so the value of the maximal deviation is also conserved. Therefore, the particle can never reach f = ∞. This is the basic property of the hyperbolic model with a reflecting wall located so that the characteristic parameter ξ is small enough. In other words, we conclude that if we put the reflecting wall so that ξ ≪ 1, then we deal with an integrable system and $f_{max}$ is one of its integrals of motion.

4 Model with hyperbolic potential

In the asymptotic region g ≪ f the Lagrangian (2.13) takes the form (4.24). In Figures 1 and 2 we present the form of the potentials (4.24) and (2.13), respectively, and draw the corresponding equipotential lines. Simple calculations show that the maximum of g is reached at the point $f = n/\sqrt{E}$. The minimal accessible value of f is $f_{min} = n/\sqrt{2E}$. So, we deal with a potential whose equipotential lines go to infinity along the f-axis. The maximal value of g/f in terms of E, n and λ defines the parameter ζ; the condition ζ ≪ 1 guarantees that the system is always in the asymptotic region g ∼ ξf with ξ ≪ 1. We integrate this system asymptotically by using the Bogolyubov-Krylov method [19]. According to this method one has to take an ansatz with some constant α and then integrate the resulting equation (4.31), which can be easily integrated. The constant of integration is chosen so that at t = 0 the f-coordinate takes its maximal value. The energy E and the maximal deviation $f_{max}$ are related by an algebraic equation; one can invert it and obtain $f_{max}$ as a function of the dynamical variables of our system (4.24). As a result, we suggest that, in close analogy with the previous model, for ζ ≪ 1 the system becomes integrable and $f_{max}$ is an integral of motion. We shall justify this conjecture numerically.

5 Numerical calculations

In this section we investigate numerically the existence of the chaos-order transition for the hyperbolic model with a reflecting wall, the model with the hyperbolic wall (4.24), and the system (2.13).
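As a reference for the numerical ranges used below, the statements of Section 4 follow from the equipotential of the asymptotic Hamiltonian; assuming $H = \frac{1}{2}(p_f^2 + p_g^2) + \frac{n^2}{2f^2} + \frac{\lambda}{2} f^2 g^2$ (the form inferred above), one finds
$$g^2(f) = \frac{2E}{\lambda f^2} - \frac{n^2}{\lambda f^4}\,, \qquad \partial_f\, g^2 = 0 \;\Rightarrow\; f = \frac{n}{\sqrt{E}}\,, \qquad \frac{n^2}{2 f^2} \le E \;\Rightarrow\; f_{min} = \frac{n}{\sqrt{2E}}\,,$$
and at the maximum $g_{max} = E/(n\sqrt{\lambda})$, so that
$$\zeta \equiv \left.\frac{g_{max}}{f}\right|_{f = n/\sqrt{E}} = \frac{E^{3/2}}{\sqrt{\lambda}\, n^2}\,,$$
which is small for large n at fixed E and λ, consistent with the regular phase found numerically.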
5 Numerical calculations

In this section we investigate numerically the existence of the chaos-order transition for the hyperbolic model with a reflecting wall, for the model with the hyperbolic wall (4.24), and for the system (2.13).

At first we test the Poincaré sections for the model with a reflecting wall. The conservation of energy restricts any trajectory in the four-dimensional phase space to a three-dimensional energy shell. At a given energy, any additional constraint defines a two-dimensional surface in the phase space, which is called the Poincaré section. It is convenient to take the constraint g = 0. All crossings of a trajectory with this surface are marked by points on the (f, p_f)-plane. In each figure we plot a Poincaré section for a set of trajectories to show that the behavior of the system does not depend on the initial data. Every trajectory was integrated as long as the program guaranteed that the deviation of the energy is less than 0.1%. Different colors correspond to different trajectories with fixed parameters and random initial data. Chaotic motion is characterized by a set of randomly distributed points; regular trajectories are depicted by dotted curves. In Figures 3, 4 and 5 we plot Poincaré sections for different values of the reflecting-wall coordinate l at the same energy E = 1. The pictures show that for small values of the parameter l the chaotic region is located near the wall, while for large values of l the points arrange into closed dotted curves.

An important characteristic of a dynamical system is the Lyapunov exponent η(t): it has a positive limit, lim_{t→∞} η(t) > 0, for a chaotic system and a zero limit, lim_{t→∞} η(t) = 0, for a regular one. The calculations for the model with the hyperbolic wall (4.24) were performed for different values of ζ (4.29): by changing the energy E with λ and n fixed, we vary the parameter ζ. The program starts with random initial data at a given energy and computes the coordinates f and g, the energy, f_max and the Lyapunov exponent. Typical results for ζ = 1 and ζ ≪ 1 are shown in Figures 6 and 7, respectively. One can see that for ζ = 1 the Lyapunov exponent has a positive limit (Fig. 6, white curve), while for ζ ≪ 1 it goes to zero (Fig. 7, white curve). The parameter f_max (blue curve) does not change with time for small ζ. The energy (red line) shows that the program works correctly: the energy is conserved through the whole calculation time. The numerical calculations show that for ζ = 1 the system (4.24) is stochastic and for ζ ≪ 1 it is integrable, which confirms the analytical arguments of Section 4.

Now we turn to the main model (2.13). The dynamics of (2.13) was analyzed in [12] for relatively small (in units of E) values of n and was found to be fully chaotic. For relatively large values of n the motion is confined to the region where g/f is small (see Fig. 2), and there the model with the hyperbolic wall (4.24) gives a good approximation to the main model (2.13). We therefore conjecture the main model to be integrable for large n as well. In favor of this conjecture, in Figures 8, 9 and 10 we plot the Poincaré sections for (2.13) with different values of the parameter n at fixed energy E = 1. For large values of the parameter n there are only regular closed orbits. We also tested the Lyapunov exponent and obtained pictures quite similar to those for the model (4.24). Therefore, we see that the system exhibits a chaos-order transition governed by the characteristic parameter ζ.
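As an illustration of how such sections can be produced, the sketch below integrates the asymptotic model under the reading V(f, g) = (λ/2)f²g² + n²/(2f²) of (4.24) and records upward crossings of the g = 0 plane. The parameter values and tolerances are illustrative, not those used for the figures in the paper.

```python
# Poincaré-section sketch for the model with a hyperbolic wall,
# assuming V(f, g) = (lam/2) f^2 g^2 + n^2 / (2 f^2), a reading of (4.24).
import numpy as np
from scipy.integrate import solve_ivp

lam, n, E = 1.0, 4.0, 1.0                   # large n -> zeta << 1, regular regime

def rhs(t, y):
    f, g, pf, pg = y
    return [pf, pg, -lam * f * g**2 + n**2 / f**3, -lam * f**2 * g]

def section(t, y):                           # the surface g = 0
    return y[1]
section.direction = 1.0                      # record upward crossings only

def random_ic(rng):
    f_min = n / np.sqrt(2.0 * E)
    f = rng.uniform(1.05 * f_min, 1.3 * f_min)
    v2 = 2.0 * (E - n**2 / (2.0 * f**2))     # kinetic energy budget at g = 0
    ang = rng.uniform(0.0, 2.0 * np.pi)
    return [f, 0.0, np.sqrt(v2) * np.cos(ang), np.sqrt(v2) * np.sin(ang)]

rng = np.random.default_rng(0)
for k in range(5):
    sol = solve_ivp(rhs, (0.0, 1000.0), random_ic(rng),
                    events=section, rtol=1e-9, atol=1e-9)
    pts = sol.y_events[0][:, [0, 2]]         # (f, p_f) at each crossing
    print(f"trajectory {k}: {len(pts)} section points")
    # scatter-plotting pts per trajectory reproduces a section like Figs. 3-5
```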
Fast and Reliable Burst Data Transmission for Backscatter Communications

Computational radio frequency identification (CRFID) sensors are able to transfer potentially large amounts of data to the reader in the radio frequency range. However, the existing EPC C1G2 protocol is inefficient when there are abundant critical and emergency data to be transmitted, and it cannot adapt to changing energy-harvesting and channel conditions. In this paper, we propose a fast and reliable method for burst data transmission that fragments large data packets into blocks, and we introduce a burst transmission mechanism that optimizes the EPC C1G2 communication procedure for burst transmission when there are critical and emergency data to be transmitted. In addition, we utilize erasure codes to reduce Acknowledgement (ACK) delay and to improve system reliability. Our results show that our proposed scheme significantly outperforms the current fixed frame length approach and the dynamic frame length and charging time adaptation scheme (DFCA) and that the goodput is close to the theoretically optimal value under different energy-harvesting and channel conditions.

Introduction

Currently, miniature and ultra-low-power sensors are widely used in urban and indoor areas. They are attached to common items and logistics assets and used to track these objects [1]. Such devices harvest ambient or RF energy for power and communicate with RFID readers by backscattering for data transmission. Computational radio frequency identification (CRFID) is a kind of passive sensing node that captures RF energy [2] and presents a new frontier for distributed sensing [3]. CRFIDs follow the RFID backscatter communication mode: they obtain energy from the RF signal emitted by the reader and modulate the encoded data back onto the received signal toward the reader. CRFID sensors are very different from existing battery-powered sensor platforms and from commercial RFID systems. They rely entirely on the energy of a capacitor for continuous sensing and on efficient backscatter communication for data transfer. Compared with commercial RFID systems, they are able to sense, compute, and store data rather than merely identify themselves. In recent years, CRFID sensor systems have received increasing attention because of their potential for battery-free permanent sensing [4][5][6]. Typical CRFID systems include the WISP (Wireless Identification and Sensing Platform) [7], jointly developed by Intel and the University of Washington, and the UMass Moo [8], developed at the University of Massachusetts Amherst on the basis of the WISP. In this paper, we focus on mobile CRFID [9] and consider CRFIDs that use harvested energy for burst data transmission. CRFIDs are very small and can be deployed in large numbers indoors, on moving objects, or on the human body. As they move near the reader, the buffered data are transmitted to the reader via backscatter communication, as in mobile health monitoring. With the widespread deployment of CRFIDs, sensing tasks are becoming more complex and the volume of sensed data is growing. In particular, when critical and emergency data need to be transmitted quickly and efficiently [10], for example in clinical monitoring or for industrial gas leaks, current backscatter communication faces several challenges.
First, commercial RFID systems follow the EPC C1G2 protocol [11], which is designed to read a small amount of data (the EPC identifier) from a large number of tags but is less efficient in scenarios where a small number of CRFIDs need to transfer large amounts of buffered data. Second, the key parameter controlling the efficiency of the EPC C1G2 protocol is the size of the contention window, the Q value [12], which is set by the reader based on its estimate of the number of tags. Also, the QueryRep and QueryAdj commands that tags listen for in unresponsive slots generate a lot of overhead, resulting in wasted energy and time [13]. Third, the mobility of CRFID sensors results in dynamic changes in both energy-harvesting and channel conditions [14]. When the energy-harvesting conditions are poor, a CRFID needs a long sleep (charging) time to reach the working voltage; when the channel quality is poor, bit errors easily occur, data frames are discarded, and the CRFID needs to be recharged again, which leads to low goodput. In response to the above difficulties, the chief contributions of our work are the following:

• We propose a method for burst data transmission that fragments large data packets into blocks. We then dynamically adjust the frame length of every block through an online adjustment strategy driven at runtime by feedback from the reader.

• We introduce a burst transmission mechanism. The core idea is to let a tag occupy all time slots for burst transmission when there are critical and emergency data to be transmitted, which reduces idle time slots.

• We utilize erasure codes to reduce acknowledgement waiting delay and to avoid retransmission overhead, which improves system robustness and reliability.

• Under specific energy-harvesting and channel conditions, our proposed scheme performs much better than the fixed frame length approach of the EPC C1G2 protocol and the DFCA scheme, and its performance converges to near optimal. When there are abundant critical and emergency data to be transmitted, our scheme achieves fast and reliable bulk burst data transmission.

The rest of the paper is organized as follows: We discuss related work in Section 2. In Section 3, we discuss the core challenges that need to be addressed to improve backscatter system efficiency. In Section 4, we describe the optimization of the CRFID operating procedure through a burst transmission mechanism and erasure coding. The design of the burst data transmission scheme is presented in Section 5, while the performance evaluation results are given in Section 6. Finally, we conclude the paper in Section 7.

Related Work

As "smart" transponders, UHF-passive RFID tags enhanced with computational and sensing capabilities, i.e., CRFIDs, are qualified to become fully fledged components of the Internet of Things (IoT). Recent advances in smart systems driven by IoT technologies have opened up great opportunities for the development of backscatter communications. To enable RFIDs to be accessible from, and to communicate with, any other networked device on the Internet, Reference [15] made their information accessible through IoT6 architecture integration. This method only allows access to the data generated by the RFID system but does not allow the use of RFIDs for real-time communication. In order to extend the Internet Protocol Version 6 (IPv6) to the RFID world, transparent agents are used in References [16,17] to actively manage the resources and operations of tags.
Reference [18] designed a hybrid medical sensor network system integrating Wireless Sensor Network (WSN) and RFID nodes, which is not only suitable for identifying and tracking patients, caregivers, and biomedical equipment in hospitals but also provides remote monitoring and emergency management through three-axis accelerometers. As far as the authors know, none of these studies addressed the processing of emergency data generated by burst tags so that it is quickly, efficiently, and reliably transmitted to the reader. In References [19,20], the Wi-Fi or Bluetooth signal sent by the reader is used as the tag excitation signal, which greatly improves the system throughput and transmission distance. However, the self-interference problem of the signal is difficult to solve, and the approach is not compatible with the existing EPC Gen2 protocol. Therefore, our research focuses on using the existing protocol, or an improvement of it, to increase the backscatter communication goodput. In view of the problems for backscatter communications described in Section 1, References [21][22][23] designed channel monitoring algorithms that adaptively adjust the data transmission rate to optimize the goodput by estimating the packet loss rate and the received signal strength indication (RSSI). QuarkNet [24] cuts data frames into very small data units to accommodate extremely poor environments. However, this work does not consider transmission overheads such as the frame header overhead. MementOS [25] and Dewdrop [26] solve the problem of insufficient energy in different ways, but they are difficult to realize in practice. Buzz [27] uses rateless codes, but it must use a synchronous single-bit slot between nodes. FlipTracer [28] and BiGroup [29] studied tag collisions at the RFID physical layer from two aspects, the constellation domain and the time domain. Through parallel decoding of backscatter transmissions, and assuming good channel conditions, they achieved large aggregated throughput. Our work focuses on the media access control (MAC) layer and differs in that we dynamically adjust frame length and coding redundancy under specific energy-harvesting and channel conditions while taking transmission overhead into account, so that the goodput can converge to near optimal. Dynamic adjustment of frame length and erasure-code techniques have been shown in many studies to effectively enhance transmission reliability and system goodput under adverse channel conditions. In Reference [30], a dynamic segmentation scheme was proposed to enhance goodput in a time-varying wireless environment. Similar techniques have also been applied to WSNs. In Reference [31], Dong et al. proposed a dynamic frame length control strategy for sensor networks, reducing communication overheads and improving energy utilization. In References [32,33], erasure coding is used for multi-hop data transmission in wireless sensor networks to improve network throughput, energy efficiency, and reliability while greatly reducing end-to-end delay. Unlike the active radio communication scenarios described above, the performance of backscatter communication is affected by both channel conditions and energy-harvesting conditions. Our work is designed to maximize goodput and to adapt to energy-harvesting and channel conditions.
Challenge and Motivation

In this section, we investigate the fundamental factors underlying the poor performance of backscatter communication and describe the main challenges that need to be addressed to improve backscatter system efficiency.

Challenge 1: Variable Energy per Transmission

As shown in Figure 1, the CRFID node operates in a series of charging and discharging cycles; it cannot work continuously because it has too little energy. The device harvests energy and charges a small energy store during a short sleep period and then wakes up and discharges to send packets. Why is the energy available in each discharge cycle difficult to predict? First, if the energy-harvesting condition is too low, the efficiency of storing energy into the capacitor is low [24]. Therefore, the maximum energy that can be accumulated depends on the current harvesting condition. Second, the RF energy collected by the node depends on how much energy the reader outputs. When the reader is communicating, the harvesting rate of each node is constantly changing. Third, the node needs to use an analog-to-digital converter (ADC) to measure the energy level. Each ADC operation consumes 327 µJ on the WISP platform, which is equivalent to the energy budget for transmitting 27 bits of data. This cost is too high on a micro-powered platform. Therefore, we need to adjust the length of the transmitted data according to the current energy environment. When the energy level is low, we need to shorten the frame length to adapt, but this usually reduces the goodput, which is affected by the overhead of each transmission, including preamble, header, and hardware. In order to optimize goodput at the same time, it is important to transfer as much data as possible given the available energy. Therefore, the problem faced by the node is that it needs to shorten its transmission frame under poor harvesting conditions and to scale up to increase goodput when conditions permit; the sketch after this paragraph illustrates the trade-off.
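A back-of-the-envelope model makes the trade-off concrete: with a fixed per-frame overhead and an independent bit-error probability, goodput first rises and then falls with payload length. All numbers below are illustrative stand-ins, not WISP measurements.

```python
# Toy model of the frame-length trade-off from Challenge 1: per-frame
# overhead argues for long frames, the per-bit error rate for short ones.
def goodput(payload_bits, header_bits=16, ber=1e-3, rate_bps=40_000):
    frame = payload_bits + header_bits
    p_ok = (1 - ber) ** frame              # frame survives only if every bit does
    return rate_bps * (payload_bits / frame) * p_ok

for lp in (16, 32, 64, 96, 128, 256):
    print(lp, round(goodput(lp), 1))       # goodput peaks at a finite length
```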
Challenge 2: Variable Harvesting Rate

The energy-harvesting rate has a significant impact on communication goodput, since a higher harvesting rate means that more energy can be used for data transfer. Figure 2 shows the trend of the energy-harvesting rate, for both theoretical and empirical measurements, as a function of the sleep time between transmissions. One might expect to collect more energy by increasing the charging time. However, for longer sleep durations, the energy-harvesting rate drops to zero. Next, we explain this phenomenon by looking at how the capacitor buffers energy. The charging process of the capacitor can be described by the voltage variation across the capacitor,

$$V(t_s) = V_{max}\left(1 - e^{-t_s/\tau}\right),$$

where t_s is the sleep time, τ is the RC circuit time constant, and V_max is the maximum voltage achievable by the capacitor under the current energy-harvesting conditions. The energy-harvesting rate then follows

$$H(t_s) = \frac{C\,V(t_s)^{2}}{2\,t_s},$$

where C is the storage capacitance. When the harvesting conditions are constant (i.e., V_max and τ are fixed), H is a concave function of t_s. When the energy-harvesting condition changes, both V_max and τ change, so the optimal operating point also changes. When the tag's capacitor stores more energy than the threshold, the energy-harvesting rate drops sharply from a high level to near zero. This means that, after gathering enough energy, if the tag does not perform its task immediately, the energy-harvesting rate will drop sharply. Therefore, in order to optimize goodput, it is important to adapt to the current energy-harvesting conditions and to keep track of the maximum harvesting point.
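Under this RC model the maximum harvesting point can be located numerically; the sketch below uses the reconstructed H(t_s) with illustrative values of C, V_max and τ, which are assumptions rather than measured platform parameters.

```python
# Locating the sleep time that maximizes the harvesting rate
# H(t_s) = C * V(t_s)^2 / (2 t_s) with V(t_s) = V_max (1 - exp(-t_s / tau)).
import numpy as np

C, V_max, tau = 100e-6, 6.0, 2.0            # farads, volts, seconds (illustrative)

def H(ts):
    v = V_max * (1.0 - np.exp(-ts / tau))
    return C * v**2 / (2.0 * ts)

ts = np.linspace(0.01, 20.0, 4000)
best = ts[np.argmax(H(ts))]
print(f"optimal sleep time ~ {best:.2f} s, rate {H(best) * 1e6:.1f} uW")
```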
CRFID Operating Procedure Optimization

In this section, we first describe the optimized CRFID operating procedure. Then, we analyze the goodput optimization problem of controlling the optimal number of transmitted frames and the sleep time of the CRFID according to the current energy harvesting.

CRFID Operating Procedure

We introduce a burst transmission mechanism to optimize the EPC C1G2 communication procedure for bulk transfer. The core idea is to fragment large data packets into blocks and to let a tag occupy all time slots for burst transmission when there are data to be transmitted. In addition, in order to deal with erroneous data frames, we introduce erasure codes to improve system reliability and to reduce ACK delay. An overview of erasure coding is given in Figure 3. The node encodes the N source data frames that it needs to transmit into N + M frames through XORs of several random source data frames, where M is the number of redundant frames [34]. Each frame is sent only once, and the reader can decode and restore the original data after successfully receiving any N frames. Thus, the reader does not need to acknowledge each frame, which reduces ACK delay. If the number of erroneous frames is greater than M, the data restoration fails. A toy version of this encoding appears after this paragraph.
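To make the encoding of Figure 3 concrete, the toy below XORs random subsets of the source frames and decodes by peeling. This is an LT-style illustration of the idea, not the exact code construction or parameters used in the paper.

```python
# Minimal XOR-based erasure-coding sketch: each redundant frame is the XOR
# of a small random subset of the N source frames; a peeling decoder
# recovers the sources from whatever frames survive the channel.
import random

def encode(sources, m, rng):
    coded = [({i}, s) for i, s in enumerate(sources)]          # systematic part
    for _ in range(m):                                         # m redundant frames
        idx = set(rng.sample(range(len(sources)), rng.randint(1, 3)))
        val = 0
        for i in idx:
            val ^= sources[i]
        coded.append((idx, val))
    return coded

def decode(received, n):
    known = {}
    frames = [(set(idx), val) for idx, val in received]
    progress = True
    while progress and len(known) < n:
        progress = False
        for idx, val in frames:
            live = idx - set(known)
            for i in idx & set(known):
                val ^= known[i]                                # strip solved sources
            if len(live) == 1:                                 # peel a degree-1 frame
                known[live.pop()] = val
                progress = True
    return [known.get(i) for i in range(n)]

rng = random.Random(1)
src = [rng.getrandbits(16) for _ in range(8)]
coded = encode(src, m=4, rng=rng)
received = coded[2:]                       # pretend the first two frames were lost
out = decode(received, 8)
print(sum(x is not None for x in out), "of 8 sources recovered")
# recovery succeeds whenever the surviving frames cover the losses; a real
# deployment would tune the degree distribution accordingly.
```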
The improved communication procedure is shown in Figure 4. The Query and QueryRep commands are used to initialize data requests and to set related parameters. The RN16 contains a 16-bit random number generated by the CRFID. After it is sent to the reader, the reader returns it as a field of the ACK to indicate that it agrees to the data transmission request of the tag. Then, the tag selects the optimal sleep time and the number of transmitted data frames according to the current energy-harvesting condition and performs the burst transmission. The reader and CRFID repeat this duty-cycle operation until all data in the current round have been sent, or wait until the reader returns a QueryRep command for the next round of sending requests.

Goodput Analysis and Optimization

We already know that a longer sleep time does not necessarily result in higher goodput when energy-harvesting conditions are poor. So how do we improve goodput by adapting to the current energy-harvesting conditions? By continuously tracking the maximum harvesting point, as shown in Figure 2, we can obtain the optimal sleep time and energy-harvesting rate, calculate the RF energy captured by the CRFID node, and further derive the optimal number of transmitted frames under the current condition. Here, we use the gradient descent algorithm to approximate the optimal value.

Sleep time adaptation: As can be seen from Figure 2, the energy-harvesting rate curve is a concave function of sleep time (under specific harvesting conditions). A fast and efficient way to converge to the optimal value of a concave function is the gradient descent algorithm. The gradient descent algorithm works as follows.
We initialize the sleep time and calculate the gradient at that point. Then, we look for the direction of the positive gradient and move a certain step size in this direction. We repeat this iteration until the difference in energy-harvesting rate between two iterations is small enough to indicate that a local optimum has been reached. If the step size is too small, convergence may be too slow; if the step size is too large, convergence cannot be guaranteed. Therefore, we should choose an appropriate step size according to the gradient. In addition, if the harvesting conditions change, the curve changes. Our gradient-based sleep time adaptation algorithm keeps running once it converges to the optimal value: it periodically checks the gradient under the current optimal conditions and moves along a positive gradient as the optimal harvesting rate changes. In this way, we obtain the optimal harvesting rate and sleep time under different energy-harvesting conditions.

Optimal number of transmitted frames: According to the above analysis, at a specific energy-harvesting condition, when the sleep time is t_o, the energy-harvesting rate takes its maximum H_max, and the radio frequency energy captured by the CRFID during the sleep period is

$$E(t_o) = H_{max}\, t_o.$$

Suppose that the average energy cost for a node to transmit or receive one bit of data is e_bit and that the lengths of the data frame payload and the header (including the FCS field) are denoted by l_p and l_h, respectively. The energy cost required to transmit one frame is thus e_bit (l_p + l_h). The number of data frames for burst transmission in the kth duty cycle is denoted by n_k. Since the captured energy cannot be less than the energy required to transmit the frames, we have the inequality

$$n_k\, e_{bit}\,(l_p + l_h) \le H_{max}\, t_o. \qquad (5)$$

According to Equation (5), the number of burst transmission data frames in the kth duty cycle satisfies

$$n_k = \min\!\left(\left\lfloor \frac{H_{max}\, t_o}{e_{bit}\,(l_p + l_h)} \right\rfloor,\; N + M - \sum_{i=1}^{k-1} n_i\right), \qquad (6)$$

where ⌊·⌋ denotes rounding down and Σ_{i=1}^{k−1} n_i represents the total number of burst data frames sent in the previous k−1 duty cycles.
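A compact sketch of the two rules just described, gradient-based sleep-time adaptation and the frame-count bound of Equation (6), is given below. Here harvest_rate stands in for the field measurement, and e_bit, l_p and l_h are illustrative constants, not platform measurements.

```python
# Gradient-ascent sleep-time adaptation plus the floor term of Equation (6).
import numpy as np

def harvest_rate(ts, V_max=6.0, tau=2.0, C=100e-6):
    v = V_max * (1.0 - np.exp(-ts / tau))
    return C * v**2 / (2.0 * ts)

def adapt_sleep(ts=0.5, step=0.5, iters=200):
    for k in range(1, iters + 1):
        grad = (harvest_rate(ts + 1e-4) - harvest_rate(ts - 1e-4)) / 2e-4
        ts = max(1e-3, ts + (step / k) * np.sign(grad))  # decaying hill-climb
    return ts

t_o = adapt_sleep()
e_bit, l_p, l_h = 1e-6, 64, 16              # J/bit and bits, illustrative
E_cap = harvest_rate(t_o) * t_o             # energy gathered while asleep
n_k = int(E_cap // (e_bit * (l_p + l_h)))   # floor term of Equation (6)
print(f"t_o = {t_o:.2f} s, n_k = {n_k} frames in this duty cycle")
```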
Burst Data Transmission Scheme

In this section, we present the design of our scheme based on fragmenting large data packets into blocks. The major goal of our scheme is to achieve fast and reliable burst data transmission and goodput optimization for backscatter communications under dynamic harvesting and channel conditions.

Design

The main idea of our solution is to fragment large data packets into blocks and to adjust the frame length and coding redundancy according to the goodput feedback of the reader at runtime, while controlling the sleep time and the number of transmitted frames to improve the goodput. Our scheme works as follows. CRFID nodes periodically track the voltage of the capacitor and estimate U_max and τ. When the CRFID node moves into the vicinity of the reader and there are data to be sent in the buffer, N source data frames of length l_p are extracted from the buffer and encoded into N + M frames, with the initial frame length set to 16 bits. The node communicates with the reader following the procedure described in Figure 4. The reader monitors the received frames. When a round of data transmission is completed, the reader obtains the time spent and the goodput and compares the goodput of the current round with that of the previous round. If the current round goodput is higher than the previous round goodput, the ACK informs the CRFID node to increase the length of the transmitted frame; otherwise, the frame length is reduced. The unit for increasing or decreasing the frame length is 8 bits. To avoid frequent changes to the frame length, an adjustment parameter θ (0 ≤ θ ≤ 0.1) is introduced: if the goodput improvement or degradation does not exceed θ, the frame length is not changed. At the same time, after receiving the frame length specified by the reader, the CRFID node selects the number of frames to be transmitted in the current duty cycle according to the current energy-harvesting condition. It then enters the sleep state after the transmission is completed and waits for the next working cycle, until all data of the current round have been sent or the reader sends a QueryRep frame. After one round of transmission, the reader counts the number of redundant frames M actually transmitted if the current round succeeded. If the transmission of the current round fails, the number of redundant frames for the next round is updated according to Equation (7),

$$M = \min\!\left(\max\!\left(\left\lceil \alpha M_{avg} + (1-\alpha)\, M_{cur} \right\rceil,\; M_{min}\right),\; M_{max}\right), \qquad (7)$$

where M_avg denotes the historical average number of transmitted redundant frames, M_cur denotes the number of redundant frames transmitted in the current round, M_min and M_max respectively represent the minimum and maximum numbers of redundant frames, α is a weight coefficient, and ⌈·⌉ denotes rounding up.

Specifically, the CRFID node operates as follows:

Step 1: If the CRFID receives a Query or QueryRep command from the reader, an RN16 is returned.

Step 2: If an ACK from the reader is received and the ACK contains the same 16-bit random number as the RN16, N source data frames of length l_p are extracted from the buffer. The N source data frames are encoded into N + M frames, where the values of M and l_p are obtained from the information carried in the ACK.

Step 3: The capacitor voltage is tracked. U_max and τ are estimated at the initial moment of duty cycle k.

Step 4: Calculate the optimal number of burst transmission frames n_k of duty cycle k according to Equation (6) and set the optimal sleep time t_o.

Step 5: After sleeping for t_o, the n_k frames are transmitted back-to-back. If a QueryRep is received in the current duty cycle, the CRFID immediately gives up the transmission and runs Step 1; otherwise, Step 3 is run until all N + M frames have been transmitted.

At the same time, the reader operates as follows:

Step 1: The reader sends a Query to request data and returns an ACK if it receives the RN16 of the CRFID. The ACK includes the same random number as in the RN16, the frame length l_p to be used by the CRFID for subsequent transmission, and the number of redundant frames M. The initial value of l_p is 16 bits and the initial value of M is M_min.

Step 2: The count of buffered data bits V, the number of correctly received frames R, and the number of erroneous frames F are all set to zero.

Step 3: If a data frame of the CRFID is received, let V = V + l_p. If the data frame is correct, let R = R + 1; otherwise, let F = F + 1. Then, if R = N, the N source data frames are restored; run Step 4. If F = M + 1, the round transmission fails; run Step 4. Otherwise, run Step 3.

Step 4: Calculate the current goodput G = V/∆t, where ∆t is the transmission time of the current round measured by the reader.
If G > G′(1 + θ) and l_p < l_p^max, let l_p = l_p + 8; if G < G′(1 − θ) and l_p > l_p^min, let l_p = l_p − 8; otherwise, do not change the data frame length. Here G′ is the goodput of the last round. Calculate the number of redundant frames M for the next round according to Equation (7).

Step 5: The reader sends a QueryRep command to request data and returns an ACK frame if it receives the RN16 frame of the CRFID. The ACK contains the same random number as in the RN16 and the transmission information, i.e., the frame length and the number of redundant frames calculated in Step 4. Then, run Step 2.

Pseudo codes of the CRFID and reader operating procedures are shown in Algorithm 1 and Algorithm 2, respectively.
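The reader-side adaptation can be summarized in a few lines. The θ-banded frame-length rule follows Step 4 directly, while the weighted-average form of Equation (7) used here is an assumption consistent with the quantities the text defines (M_avg, M_cur, α, rounding up, and the [M_min, M_max] clamp).

```python
# Reader-side adaptation sketch: frame-length rule of Step 4 and an
# assumed weighted-average reading of Equation (7).
import math

L_MIN, L_MAX, M_MIN, M_MAX = 16, 96, 2, 16   # illustrative bounds

def next_frame_len(lp, g_cur, g_prev, theta=0.05):
    if g_cur > g_prev * (1 + theta) and lp < L_MAX:
        return lp + 8                        # reward growth with longer frames
    if g_cur < g_prev * (1 - theta) and lp > L_MIN:
        return lp - 8                        # back off when goodput drops
    return lp                                # inside the theta band: no change

def next_redundancy(m_avg, m_cur, alpha=0.7):
    m = math.ceil(alpha * m_avg + (1 - alpha) * m_cur)   # assumed form of (7)
    return min(max(m, M_MIN), M_MAX)

print(next_frame_len(64, 1100.0, 1000.0))    # -> 72
print(next_redundancy(6.0, 9))               # -> 7
```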
Frame Format

The data frame format used is similar to that of the EPC C1G2 protocol, as shown in Figure 5. The data payload length in the EPC C1G2 protocol is defined by the EPC length field in the Protocol Control (PC) field, which ranges from 16 to 496 bits in integer multiples of 16 bits. Since the unit for increasing or decreasing the frame length in our scheme is 8 bits, the field is modified so that each unit represents 8 bits. Considering that the device only supports 96 bits, the maximum value of the field is 96 bits, that is, 01100. The ACK of EPC C1G2 consists of a 2-bit command field and a 16-bit RN. However, in our proposed scheme, the frame length and the number of redundant frames need to be adjusted according to energy-harvesting and channel conditions and fed back to the CRFID by the reader through the ACK. Therefore, two additional 5-bit fields are added to the ACK, indicating the frame length and the number of redundant frames for the next round (16 bits, 5 bits, 5 bits). Figure 6. Format of the ACK message.

Platforms

The experimental platform is shown in Figure 7. We use the Universal Software Radio Peripheral (USRP) N210 software-defined radio and the WISP as the backscatter node for the instantiation of our scheme. The USRP N210 is equipped with an SBX-40 daughterboard and can be used as a detector and reader. We use the open-source code written by Nikos on GitHub to operate the USRP as a reader. The SBX-40 daughterboard provides multiple-input multiple-output (MIMO) capability and 40 MHz of bandwidth; its working frequency range is 400 MHz to 4400 MHz. The software platform used in the experiment is 64-bit Ubuntu 14.04 with GNU Radio 3.7.4. The selected CRFID tag is the WISP 4.1 with a dipole antenna, whose microcontroller is an MSP430F2132. The commercial reader used is the ImpinJ Speedway R420, connected to a Laird circularly polarized directional antenna S9028PCL. Up to 4 antennas can be connected at the same time, and the antenna gain is 9 dBi. Our goal in the evaluation is to demonstrate that our proposed scheme can significantly improve system goodput.

Trimming Overheads

Erasure coding implementations: It is obvious that the redundancy introduced by erasure codes increases energy consumption as the number of redundant frames grows. However, traditional reliability-improvement methods, such as data duplication or Automatic Repeat reQuest (ARQ) [35], are too costly and even impossible to implement due to strict energy constraints and channel asymmetry.
Erasure codes allow the reliability of data transmission to be increased by transmitting redundant data. In our work, we investigated the potential of communication using erasure codes and the trade-offs between reliability and energy consumption. For our platforms with limited energy and size, we mainly focus on Cauchy-matrix-based and Vandermonde-matrix-based Reed-Solomon (RS) codes through existing open-source implementations. As a benchmark experiment, we simulated three cases: the normal case, that is, data packets without encoding; the full copy, that is, each data packet is sent together with an exact copy; and on-demand retransmission, that is, if the ACK confirming successful delivery is not received, the data is resent. We use simulation and energy measurements on real hardware platforms to characterize the overhead of redundant packets by setting different coding rates r to evaluate these methods. Our results clearly show that erasure coding has the same or less overhead compared with traditional data replication and on-demand retransmission and can provide higher reliability in our scenario. When erasure coding is used, as the code overhead and data rate increase, we observe that the recovery rate increases steadily compared with the other two methods of recovering data. In particular, erasure codes based on Reed-Solomon clearly outperform simple data replication and ARQ, as shown in Figure 8. All in all, based on measurements on hardware platforms with very limited available energy, we can show that the computational cost of encoding is feasible, and we can also account for the energy consumption on the nodes.

Probing energy state: As mentioned earlier, the ADC costs too much energy and should be avoided when tracking the maximum energy-harvesting rate. In our work, instead of measuring the voltage on the nodes, we use the low-watermark threshold detectors that already exist on such nodes. This type of detector is very common on passive sensor platforms; it controls the state of the tag, i.e., when it should sleep to avoid power interruptions and when it should wake up to continue operation. Our algorithm is driven by interrupts when the voltage rises above or falls below the threshold, and this information is used as a one-bit proxy for the actual voltage. The voltage threshold is chosen to be 2 V, which is slightly higher than the minimum voltage of 1.8 V required to operate the microcontroller. This information is fed into the sleep time tracker, which decides how long to wait after the threshold is exceeded before starting the transmission. Compared with the ADC, this method saves about 100× in energy.
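The one-bit probing idea can be sketched as follows: the node only observes the instant the 2 V comparator fires and hill-climbs an extra waiting time on top of that instant. The charging model, constants, and step rule here are illustrative assumptions, not the platform's actual firmware logic.

```python
# Sketch of the comparator-driven sleep-time tracker: no ADC reads, only
# the threshold-crossing instant plus a hill-climbed extra wait.
import numpy as np

V_MAX, TAU, V_TH = 6.0, 2.0, 2.0            # illustrative charging parameters

def time_to_threshold():
    # instant at which V(t) = V_MAX (1 - exp(-t/TAU)) crosses V_TH
    return -TAU * np.log(1.0 - V_TH / V_MAX)

class SleepTracker:
    def __init__(self):
        self.extra_wait = 1.0               # seconds to keep charging past V_TH

    def update(self, goodput_now, goodput_prev, step=0.2):
        # hill-climb on observed goodput using only comparator timing
        if goodput_now >= goodput_prev:
            self.extra_wait += step
        else:
            self.extra_wait = max(0.0, self.extra_wait - 2 * step)

    def sleep_time(self):
        return time_to_threshold() + self.extra_wait

tr = SleepTracker()
tr.update(goodput_now=950.0, goodput_prev=900.0)
print(f"sleep for {tr.sleep_time():.2f} s before transmitting")
```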
Evaluation

To model the changing energy-harvesting and channel conditions, we evaluate the performance with different values of E_b/N_0 and different combinations of U_max and τ, where E_b/N_0 is the signal-to-noise ratio (SNR) per bit. All the corresponding parameters are listed in Table 1. We consider that the CRFID has 7400 bits of data in its buffer that need to be sent to the reader. Our proposed scheme is simulated to obtain the goodput performance and the corresponding frame length and number of redundant frames, which are compared with the theoretical optimal solution obtained from Reference [36]. The theoretical optimal value is used as a benchmark under specific energy-harvesting and channel conditions. We also evaluate the performance of our proposed scheme in an actual implementation and compare it with the fixed 96-bit frame length strategy adopted by EPC C1G2 and with the DFCA scheme. For fairness, all solutions use the optimized communication procedure shown in Figure 4.

Figure 9a shows the variation of the goodput of the four schemes as E_b/N_0 increases under the harvesting conditions U_max = 6 V and τ = 2. We clearly see in Figure 9a that, as E_b/N_0 increases, system goodput gradually increases due to the decreasing bit error rate (BER). However, as E_b/N_0 gets larger, the improvement in goodput shrinks and eventually converges to a fixed value. This is because the frame transmission success rate eventually tends to 1 as the BER decreases. The theoretical goodput in Figure 9a is larger than that of our scheme, the DFCA scheme, and the fixed frame length scheme, but the theoretical goodput depends on accurate estimation of channel quality and harvesting condition, which is difficult to achieve in practice. The goodput achieved by our solution is very close to optimal.
Figure 9b,c shows the optimal frame length and the number of redundant frames used by the theoretical method and by our proposed scheme, respectively, for the E_b/N_0 values of Figure 9a. It can be seen that, when E_b/N_0 is small, which means poor channel quality, the frame length used is small and the number of redundant frames is large. As E_b/N_0 increases, the frame length grows and fewer redundant frames are used. Our scheme is close to the theoretical values, but because it relies on the online-measured goodput, fed back to the node, to change the frame length and the number of redundant frames, the curves show certain fluctuations.

Figure 10 shows the goodput, frame length, and number of redundant frames for several scenarios under the energy-harvesting conditions U_max = 5 V and τ = 3. Compared with Figure 9, the maximum chargeable voltage of the CRFID node is reduced to 5 V and the RC circuit time constant τ is set to 3, which worsens the energy-harvesting conditions. Overall, the trend of goodput variation in Figure 10a is the same as in Figure 9a, but the overall goodput is smaller than in Figure 9a due to the poorer energy-harvesting conditions. As can be seen from Figure 10a,b, increasing the frame length does not increase the goodput indefinitely. This is because the charging time becomes the major factor affecting goodput when channel conditions are good but energy-harvesting conditions are poor. Under this condition, obtaining a larger transmission frame length at the cost of a longer charging time may cause a decrease in goodput.
Figure 11 shows the communication energy consumption of our scheme and of the fixed frame length strategy under U_max = 5 V and τ = 3. It can be seen from Figure 11 that, at low E_b/N_0, our scheme consumes more energy than the fixed frame length strategy, while at high E_b/N_0 our scheme consumes less energy. This is because, when E_b/N_0 is low, our scheme guarantees a sufficient packet reception rate by increasing the number of redundant packets, which increases the energy consumption. When the signal-to-noise ratio is better, the number of redundant packets is reduced, and our scheme eliminates the need to return an acknowledgment per frame, saving energy. It is worth mentioning that our paper does not consider the computation energy consumption, because it is of a smaller order of magnitude than the communication energy consumption. Moreover, the core algorithm of the method used in this paper runs on the reader side, and the computational load on the CRFID is small.

Conclusions

In this paper, we achieve fast and reliable bulk burst data transmission by optimizing goodput based on burst transmission when there are critical and emergency data to be transmitted. First, we optimize the EPC C1G2 protocol by introducing a burst transmission mechanism and erasure codes, and we control the optimal number of transmitted frames and the sleep time of the CRFID according to the current energy-harvesting and channel conditions. Then, we fragment large data packets into blocks and design an online adjustment strategy that dynamically adjusts the frame length and coding redundancy based on the feedback of the reader at runtime. Our results show that our proposed scheme significantly outperforms the current fixed frame length approach and the DFCA scheme, and the goodput is close to the theoretically optimal value under different energy-harvesting and channel conditions. Our proposed scheme enables passive sensing communication to adapt to dynamic energy-harvesting and channel conditions and facilitates its application in the fields of mobile sensing and pervasive computing. Future research will consider the issue of data frame retransmission and how to deal with collisions and guarantee data transmission priority, to further improve system performance in multi-tag scenarios.
In addition, this method can be combined with ambient backscattering technology to make use of all the energy available in the environment, such as thermal gradients and mechanical vibration, to improve the intelligence of smart devices and to build a truly smart system. In the near future, data communication will then be far less constrained by the environment and will flexibly convert between different protocols, thereby achieving barrier-free communication between various IoT devices.
FIELD- AND STRESS-INDUCED MAGNETIC ANISOTROPY IN NANOCRYSTALLINE Fe-BASED AND AMORPHOUS Co-BASED ALLOYS

For the nanocrystalline alloy Fe73.5Cu1Nb3Si13.5B9, thermomechanical treatment was carried out either simultaneously with the nanocrystallizing annealing (regime 1) or after it (regime 2). It is shown that the change in magnetic properties for regime 1 is essentially greater than for regime 2. The combined effect of thermomagnetic and thermomechanical treatments on magnetic properties was studied in the above-mentioned nanocrystalline alloy as well as in the amorphous alloy Fe5Co70.6Si15B9.4. During the annealings both the field and the stress were aligned with the long side of the specimens. It is shown that a magnetic field, AC or DC, decreases the effect of loading. Moreover, a magnetic field, AC or DC, applied after stress-annealing can destroy the magnetic anisotropy already induced under load.

INTRODUCTION

Nanocrystalline Fe-based alloys are known as unique magnetically soft materials with a low coercive force, low magnetic losses, and high magnetic permeability, which is caused by the very fine grain size (10-12 nm), the random distribution of the crystal axes of the grains, and the vanishing total magnetostriction. It is known (Glazer et al., 1991) that, using thermomechanical treatment, magnetic anisotropy of the easy-plane type characteristic of Co-based alloys (Nielsen et al., 1985) may be obtained in the Fe73.5Cu1Nb3Si13.5B9 alloy. This work is a continuation of Glazer et al. (1991). Its aim is to study, for the Fe-based nanocrystalline alloy, the effect of the conditions of thermomechanical and thermomagnetic treatments on the magnitude of the induced magnetic anisotropy and to compare the data with those for the Co-based amorphous alloy Fe5Co70.6Si15B9.4.

EXPERIMENTAL

The method of rapidly quenching the melt onto a rotating copper drum was used to prepare amorphous ribbons of both Fe- and Co-based alloys, 20 µm in thickness and 1 mm in width. In order to obtain the nanocrystalline state, the Fe-based ribbons were annealed in air at 530°C for 1 h. Below, we will call this treatment nanocrystallizing annealing (NCA). For both alloys the thermomechanical (TMechT), thermomagnetic (TMT), or thermomechanomagnetic (TMechMT) treatment, which involves annealing and cooling of the sample under a tensile load, in a magnetic field, or under both factors, was carried out in a vertical tubular furnace; a load was fastened to the ribbon using a special clamp and removed after the termination of the treatment. The longitudinal magnetic field H = 400 Oe, DC or AC, was created by a coil wound around the furnace. For the Fe-based material the treatment was performed using two regimes:

1. The NCA was carried out simultaneously with the TMechT, TMT, or TMechMT. An amorphous sample was subjected to NCA under a tensile load, in a magnetic field, or under both a load and a magnetic field simultaneously.

2. The NCA and the TMechT, TMT, or TMechMT were performed sequentially: a sample that was preliminarily annealed to obtain the nanocrystalline condition was then subjected to one of the three treatments.

From a ribbon subjected to such a treatment, samples of 80-100 mm in length were cut from the portion that was located in the zone of controlled uniform heating. Magnetic properties (hysteresis loops) were measured by the ballistic method in a field directed along the ribbon.
Similar to thermomagnetic treatment in a transverse DC magnetic field, the thermomechanical treatment increases the incline of the hysteresis loops; with increasing load, the slope of the loop increases (Glazer et al., 1991). [Figure 1: Appearance of the hysteresis loop for Fe73.5Cu1Nb3Si13.5B9 specimens after stress- or field-treatment in a transverse DC magnetic field.] The constant of induced magnetic anisotropy Ku was determined from the relation Ku = 0.5·Ms·Hs (see Fig. 1), where Ms is the saturation magnetization and Hs is the saturating field (in which saturation magnetization is reached). The error of measuring Ku was 5-7%. RESULTS AND DISCUSSION The results of the investigations are shown in the figures and in the tables. For the nanocrystalline alloy, Fig. 2 shows the variation of the constant of induced magnetic anisotropy with the load during TMechT at 530°C for 1 h according to regimes 1 and 2 (curves 1 and 2, respectively). It can be seen that, first, the magnitude of Ku induced by TMechT increases with increasing load and, second, the magnitude of Ku after TMechT by regime 1 is substantially larger than that obtained after treatment by regime 2. The times for reaching maximum Ku upon treatment by regimes 1 and 2 are also substantially different. When TMechT is performed at 530°C, Ku reaches its maximum in a few minutes if TMechT is combined with NCA, but only in more than 1 h if TMechT follows NCA. It was noted also that ribbons treated by regime 1 increase in length. The larger the load, the greater the elongation; at maximum loads it reaches 13-15 mm, which is equal to about 15% of the gage length of the ribbon. The elongation develops on approximately the same time scale as the induced anisotropy. Table I shows the Ku behavior for Fe-based specimens subjected to NCA simultaneously with TMechT in a magnetic field, AC or DC, at 530°C for 1 h. In addition, the percent decrease of the Ku value after TMechMT is shown in comparison with that after TMechT. One can see that the magnetic field decreases the effect of loading up to 50 MPa; the smaller the load, the larger the decrease. At loads above 50 MPa the change of Ku is less than 5%, which is within the error of measuring Ku. [Figure: TMechT under load σ = 600 MPa and cooling: under load σ = 600 MPa (a), without any load (b), and without any load in a longitudinal DC (c) or AC (d) magnetic field H = 400 Oe.] Table II shows the values of Ku after the different coolings for both alloys. It also shows the percent decrease of Ku after cooling without any load and in magnetic fields, in comparison with Ku after cooling under load. It is seen that after cooling without any load Ku decreases by 16% for the Fe-based and 14% for the Co-based alloy, and after cooling in a magnetic field the decrease is much larger, particularly in the AC field. This means that cooling in a magnetic field can destroy the magnetic anisotropy induced under load. CONCLUSION 1. The magnitude of the constant of induced magnetic anisotropy that can be obtained by TMechT increases with increasing tensile stress. 2. The combined effect of NCA and TMechT, all other conditions being the same, produces a greater induced anisotropy than that obtained when TMechT follows NCA. 3. A magnetic field can interfere with the induction of magnetic anisotropy during stress-annealing and can destroy the already stress-induced anisotropy.
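To make the relation Ku = 0.5·Ms·Hs above concrete, here is a minimal sketch of how the anisotropy constant could be extracted from a measured sheared loop. The saturation criterion (the first field where M reaches 99% of its maximum) and all numerical values are illustrative assumptions, not data from this work.

```python
import numpy as np

def anisotropy_constant(h, m, saturation_fraction=0.99):
    """Estimate Ku = 0.5 * Ms * Hs from the ascending branch of a loop.

    h : applied field (Oe), ascending; m : magnetization (emu/cm^3).
    Hs is taken as the first field where m reaches `saturation_fraction`
    of its maximum (an illustrative criterion, not the paper's).
    Returns Ku in erg/cm^3 (CGS units).
    """
    ms = m.max()                                       # saturation magnetization
    hs = h[np.argmax(m >= saturation_fraction * ms)]   # saturating field
    return 0.5 * ms * hs

# Hypothetical sheared loop: linear rise up to Hs, then saturation.
h = np.linspace(0.0, 100.0, 500)                       # Oe
ms_true, hs_true = 1000.0, 50.0                        # invented demo values
m = np.clip(ms_true * h / hs_true, None, ms_true)
print(f"Ku ~ {anisotropy_constant(h, m):.3e} erg/cm^3")
```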
2019-04-28T13:09:33.272Z
1999-01-01T00:00:00.000
{ "year": 1999, "sha1": "9f6fc6018b6a6020fa1656d4488a444bfb84ad88", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/archive/1999/813950.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "628cf41ae40adf310453ef2807feb2fdd31ade0e", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
35398087
pes2o/s2orc
v3-fos-license
Effect of covalency and interactions on the trigonal splitting in NaxCoO2 We calculate the effective trigonal crystal field Delta which splits the t2g levels of effective models for NaxCoO2 as the local symmetry around a Co ion is reduced from Oh to D3d. To this end we solve numerically a CoO6 cluster containing a Co ion, with all 3d states and their interactions included, and its six nearest-neighbor O atoms, with the geometry of the real system, in which the CoO6 octahedron is compressed along a C3 axis. We obtain Delta near 130 meV, with a sign that agrees with previous quantum chemistry calculations but disagrees with first-principles results in the local density approximation (LDA). We find that Delta is very sensitive to a Coulomb parameter which controls the Hund coupling and the charge distribution among the d orbitals. The origin of the discrepancy with LDA results is discussed. I. INTRODUCTION The doped layered hexagonal cobaltates NaxCoO2 have attracted great interest in the last years due to the high thermopower and at the same time low thermal conductivity and resistivity for 0.5 < x < 0.9, 1,2 and the discovery of superconductivity in hydrated NaxCoO2. 3 Further attention was motivated by the fact that first-principles calculations in the local density approximation (LDA) 4-6 predicted a Fermi surface with six prominent hole pockets along the Γ − K direction, which are absent in measured angle-resolved photoemission (ARPES) spectra. 7,8 To explain the discrepancy, several calculations including correlation effects were made. [9][10][11][12][13][14][15] These studies used an effective model Heff for the t2g 3d states of Co, split by the trigonal crystal field ∆ into an a′1g singlet and an e′g doublet. 16 Except for some simplifications used in the different works, Heff has the form of Eq. (1), where d†iβσ creates a hole on an effective t2g orbital at site i with spin σ. The first term is the effective trigonal splitting mentioned above, the second term describes the hopping between orbitals at a distance δ, and the remaining terms are effective interactions discussed for example in Ref. 17. In most works, ∆ and the hoppings tβγδ were derived from fits to the LDA bands and the interaction parameters were estimated. These fits give either ∆ = −10 meV 9 or ∆ = −130 meV. 10 With these parameters and realistic values of the Coulomb repulsion Ueff, correlations are not able to reconcile theory with experiment, as shown by different dynamical-mean-field-theory (DMFT) studies. 12,13,15 The pockets still remain in the calculations. Using instead an Heff derived from a multiband Co-O model Hmb through a low-energy reduction procedure, 17 and the value ∆ = 315 meV obtained from quantum-chemistry configuration-interaction calculations, 18 these pockets are absent and the electronic dispersion near the Fermi energy agrees with experiment. 15 In this procedure, no LDA results were used. The parameters of Hmb were taken from previous fits of polarized x-ray absorption spectra, 19 and the parameters of Heff other than ∆ were obtained by fitting the energy levels of an undistorted CoO6 cluster (Oh symmetry) and calculating the effective hopping between different CoO6 clusters, 17 following similar ideas that were successful in the superconducting cuprates.
[20][21][22] In these systems, low-energy reduction procedures that eliminate the O degrees of freedom, simplifying the problem to an effective one-band one, 20,23-29 have been very successful, in spite of the fact that doped holes enter mainly at O atoms. [30][31][32] Optical properties related to O atoms were calculated using these one-band models, which do not contain O states. 20,21 Summarizing previous results, if ∆ is taken as a parameter, a positive ∆ has the effect of shrinking the pockets, and for large enough ∆ the pockets disappear from the Fermi surface, reconciling theory with ARPES experiments. 12,13,15 A positive value has been obtained by quantum-chemistry methods 18 and a negative one is obtained by fitting the LDA dispersion with Heff. 9,10 Thus, the origin of the discrepancy between the different methods and the actual value of ∆ remains a subject of interest. It is known that, in general, the LDA underestimates gaps and has difficulties in predicting one-particle excitation energies. Thus one might suspect that the parameters of Heff, including ∆, calculated with the LDA are not accurate enough when covalency and interactions are important. This is the case of NiO, for which agreement with experiment in LDA+DMFT calculations is only achieved once the O bands are explicitly included in the model, 33 or when the O atoms have been integrated out using low-energy reduction procedures, which take into account correlations from the beginning. 33,34 In covalent materials, the crystal-field splitting of transition-metal ions is dominated by the hopping of electrons between these ions and their nearest ligands. 35 In particular for NaxCoO2, an estimate based on point charges gives ∆ = −25 meV. 36 This shows that the effect of interatomic repulsions is small and of the opposite sign to that required to explain the ARPES spectra. The effects of covalency between Co and its nearest-neighbor O atoms, together with all interactions within the Co 3d shell, are included in a CoO6 cluster in the realistic (D3d) symmetry. In this work, we solve this cluster numerically and calculate the effective splitting ∆, neglecting interatomic repulsions. We also analyze the effects of different parameters on ∆. The main result is that ∆ ≃ 130 meV and that it is very sensitive to a parameter which controls the Hund rules. It is also sensitive to the cubic crystal-field splitting 10Dq. A possible reason for the discrepancy with the LDA results is discussed. In Section II, we describe the model, its parameters, and briefly the formalism. Section III contains the results. Section IV is a summary and discussion. II. THE MODEL AND ITS PARAMETERS The multiband model from which Heff is derived describes the 3d electrons of Co and the 2p electrons of the O atoms, located in the positions determined by the structure of Na0.61CoO2 at 12 K. 37 In this work we restrict the calculation to a cluster of one Co atom and its six nearest-neighbor O atoms. The relevant filling for the calculation of ∆ corresponds to formal valences Co4+ and O2−, or 41 electrons to occupy the 3d shell of Co and the 2p shells of the six O atoms. This corresponds to 5 holes in the CoO6 cluster. Thus, it turns out to be simpler to work with hole operators (which annihilate electrons) acting on the vacuum state in which the Co ion is in the 3d10 configuration and the O ions are in the 2p6 one.
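As a one-line bookkeeping check of the filling quoted above, using only the formal valences stated in the text:

```latex
N_{\mathrm{full}} = \underbrace{10}_{\mathrm{Co}\,3d} + \underbrace{6\times 6}_{\mathrm{six\ O}\,2p} = 46,\qquad
N_{e} = \underbrace{5}_{\mathrm{Co}^{4+}:\,3d^{5}} + \underbrace{6\times 6}_{\mathrm{O}^{2-}:\,2p^{6}} = 41,\qquad
N_{h} = 46 - 41 = 5 .
```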
The most important physical ingredients are the interactions inside the 3d shell, HI, and the Co-O hopping (tηξj below), parameterized as usual in terms of the Slater-Koster parameters. 38 We include a cubic crystal-field splitting εt2g − εeg = 10Dq. The Hamiltonian for the CoO6 cluster takes the form of Eq. (2). The operator d†ξσ creates a hole on the orbital ξ of Co with spin σ. Similarly, p†jησ creates a hole on the O 2p orbital η at site j with spin σ. The first two terms correspond to the energy of the eg orbitals (x²−y², 3z²−r²) and the t2g orbitals (xy, yz, zx), written on a basis in which x, y, z point to the vertices of a regular CoO6 octahedron (symmetry Oh). The compression along the axis x+y+z reduces the symmetry to D3d and splits the state of symmetry xy + yz + zx (a′1g in D3d 16) from the other two t2g ones (e′g in D3d). HI contains all interactions between d holes assuming spherical symmetry [the symmetry is reduced to Oh by the cubic crystal field 10Dq and to D3d by the last (hopping) term of Eq. (2)]. The expression of HI is lengthy. It is included in the Appendix [Eq. (A4)] together with a brief description of its derivation for the interested reader. A more detailed discussion is in Ref. 17. The form of HI is rather simple and well known when either only eg orbitals 39 or only t2g orbitals [as in Eq. (1)] 40,41 are important, although the correct expressions were not always used. 40,42 In the general case, HI contains new terms which are often disregarded. For example, in a recent study of Fe pnictides, 43 a simplified expression derived previously 44 was used. More recently, to estimate the effective Coulomb interaction for transition-metal atoms on metallic surfaces, only density-density interactions were included. 45 Some of the effects of these simplifications were discussed in Ref. 17. All interactions are given in terms of three free parameters F0 ≫ F2 ≫ F4. For example, the Coulomb repulsion between two holes or electrons at the same 3d orbital is U = F0 + 4F2 + 36F4, and the Hund-rules exchange interaction between two eg (t2g) electrons is Je = 4F2 + 15F4 (Jt = 3F2 + 20F4). Thus F2 is the main parameter responsible for the spin and orbital polarizations related to the first and second Hund rules, respectively. Note that in Eq. (2) there is no trigonal splitting. This means that we take the bare value of the splitting ∆0 = 0 (neglecting the effect of interatomic repulsions). The dressed value ∆ that enters the effective Hamiltonian Eq. (1) is calculated from Eq. (3) in terms of the energies E(Γ), where E(Γ) is the energy of the lowest-lying state that transforms under symmetry operations according to the irreducible representation Γ of the point group D3d. 16 As in previous calculations for the regular CoO6 octahedron (with symmetry Oh), 17 the diagonalization is simplified by the fact that several linear combinations of O 2p orbitals do not hybridize with the Co 3d ones, forming non-bonding orbitals. However, in the present case the reduced D3d symmetry increases the bonding 2p combinations to seven, and a different basis should be used; still, the size of the relevant Hilbert space is small enough to permit the diagonalization numerically by the Lanczos method. 46 As a basis for the present study, we take parameters determined previously 19 from a fit of polarized x-ray absorption spectra of NaxCoO2 to the results of a CoO6 cluster with 4 and 5 holes, including the core hole.
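The interaction constants quoted in this paragraph follow directly from F0, F2, F4; a minimal sketch tabulating them is below. The input values are placeholders chosen only to respect F0 >> F2 >> F4 (the fitted values belong to the paper's Eq. (4) and are not reproduced here).

```python
def slater_interactions(f0, f2, f4):
    """d-shell interaction constants (same energy units as the inputs).

    Relations quoted in the text:
      U  = F0 + 4*F2 + 36*F4   (intra-orbital Coulomb repulsion)
      Je = 4*F2 + 15*F4        (Hund exchange between e_g electrons)
      Jt = 3*F2 + 20*F4        (Hund exchange between t_2g electrons)
    """
    return {"U": f0 + 4 * f2 + 36 * f4,
            "Je": 4 * f2 + 15 * f4,
            "Jt": 3 * f2 + 20 * f4}

# Placeholder Slater parameters in eV (not the paper's fitted values).
print(slater_interactions(f0=3.5, f2=0.2, f4=0.02))
```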
In the present case, we have neglected the O-O hopping for simplicity (this allows a reduction of the relevant Hilbert space). Thus, the parameters of Hmb, in eV, are those listed in Eq. (4). 19 The choice of the origin of on-site energies εeg = 0 is arbitrary. The resulting value of U = 4.516 eV and the charge-transfer energies are similar to those derived from other x-ray absorption experiments. 47 We note that while above εt2g − εeg = 10Dq = 1.2 eV, the effect of hybridization increases the splitting between t2g and eg orbitals to more than 3 eV. III. RESULTS The splitting ∆ is determined from Eq. (3). We have also calculated the occupation of the a′1g 3d orbital in each state to verify that the expected physics is obtained. For the parameters determined previously [Eq. (4)], we obtain ∆ = 124 meV. The sign agrees with quantum-chemistry configuration-interaction calculations, 18 which obtained ∆ ≈ 300 meV, although our magnitude is smaller. The difference might be at least partially due to some uncertainty in our parameters determined from a fitting procedure. Motivated by this possibility, we have studied the effect of different parameters on the results. Of course, since we have neglected interatomic interactions, ∆ vanishes if the hopping parameters pdσ and pdπ are zero, and one would expect that an increase in these parameters has the largest impact on ∆. However, we find that an increase of 50% in the hopping increases ∆ by only 25%. In addition, changes of the oxygen energy εO (the charge-transfer energy) or F0 (which determines the intra-orbital Coulomb repulsion U) by 1 eV have an effect of only a few percent on ∆. Instead, and rather surprisingly, as shown in Fig. 1, ∆ is very sensitive to F2, the most important parameter in the expressions for the exchange between d electrons [Jν with ν = e, t, a or b in Eq. (A4)] and the inter-orbital repulsions (U − 2Jν), among other interactions. Thus, it is mainly responsible for the spin and orbital polarizations resulting in the first and second Hund rules. In particular, the repulsion between different eg (t2g) orbitals is reduced with respect to the intra-orbital repulsion U by 2Je (2Jt) (see the Appendix). ∆ becomes negative for F2 < 21 meV. Curiously, increasing F4 has a small effect, but in the opposite sense to increasing F2. This points to non-trivial effects of the correlations, particularly those involving both eg and t2g electrons. When both F2 and F4 vanish we obtain a small positive value ∆ = 12 meV. If one adds to this result the contribution of −25 meV from the interatomic Coulomb repulsion estimated using point charges, 36 one obtains a value close to −10 meV, obtained in one of the LDA calculations. 9 This suggests that the negative LDA results for ∆ might be due to the difficulties of the LDA in treating correlations related to the Hund rules. In particular, it is known that orbital-related Coulomb interactions are underestimated in the spin LDA, 48 and empirical orbital-polarization corrections 49 are frequently used to cure this problem. This fact has also been analyzed in the framework of a self-consistent tight-binding theory. 50 The fact that correlations between both eg and t2g holes play a role is supported by the dependence of ∆ on the cubic crystal-field parameter 10Dq, displayed in Fig. 2. Note that this parameter in the present case represents only the contribution of the interatomic repulsion to 10Dq.
The covalency part is included in our calculation, and the splitting between hybridized eg and t2g states is larger than 3 eV. Also, in the fitting procedure the best value of 10Dq depends on the composition x, being 1.2 eV for x = 0.4 and 0.9 eV for x = 0.6. 19 For the latter value, ∆ increases to 134 meV. As is apparent in Fig. 2, ∆ increases with decreasing 10Dq. This shows that the eg states play an important role. In fact, the results for the regular octahedron show that although these states are absent in the effective Hamiltonian for the cobaltates, they have a larger degree of covalency than the t2g states. 17 Most of the O holes reside in bonding combinations of eg symmetry. IV. SUMMARY AND DISCUSSION Using exact numerical diagonalization of a CoO6 cluster with the realistic geometry of NaxCoO2, we have calculated the effects of covalency and interactions on the trigonal crystal-field parameter ∆, which splits the t2g states in Oh symmetry into a′1g and e′g in the reduced D3d symmetry. This parameter enters effective models [of the form of Eq. (1)] for the description of the electronic structure of NaxCoO2, and only positive values (in contrast to the negative ones obtained from the LDA) seem consistent with ARPES data. 12,13,15 We obtain ∆ ≈ 130 meV. While changes of the order of 1 eV in the charge-transfer energy or F0 (which controls the part of the Coulomb repulsion that does not depend on the symmetry of the orbitals) do not affect ∆ very much, we find that ∆ is very sensitive to the parameter F2, which controls (among others) interaction constants related to the Hund rules (exchange interactions and the decrease of inter-orbital repulsions with respect to intra-orbital ones). To a smaller extent, it is also sensitive to the cubic crystal field 10Dq, reflecting the importance of interactions between t2g and eg states, and the effect of the latter on the effective parameters. Since the LDA underestimates correlations that affect the orbital polarization of the d states, [48][49][50] this is likely to be the reason for the failure of LDA approaches, and of effective models based on LDA parameters, to reproduce the observed ARPES data. In fact, since the exchange and correlations in the LDA are based on a homogeneous electron gas, it is expected that this approximation treats F0 (the part of the repulsion which does not distinguish between different orbitals) in mean field, but does not contain the effects of F2 and F4, which depend on the particular orbitals. The exchange of the electron gas taken into account in the LDA helps to follow the first Hund rule (maximum spin), but the second one, related to orbital polarization, is not well described and seems crucial to establish effective energy differences between different orbitals inside an incomplete d shell. Acknowledgments Useful comments of G. Pastor and C. Proetto are thankfully acknowledged. AAA is partially supported by CONICET, Argentina. This work was sponsored by PIP 112-200801-01821 of CONICET, and PICT 2010-1060 of the ANPCyT. Appendix A: Interactions inside a d shell The part of the Hamiltonian that contains the interaction among the 10 d spin-orbitals is 51

$H_I = \frac{1}{2}\sum_{\lambda\mu\nu\rho} V_{\lambda\mu\nu\rho}\, d^\dagger_\lambda d^\dagger_\mu d_\rho d_\nu$,  (A1)

where $d^\dagger_\lambda$ creates an electron or a hole at the spin-orbital λ (HI is invariant under an electron-hole transformation) and (neglecting screening by other electrons)

$V_{\lambda\mu\nu\rho} = \int dr_1\, dr_2\, \bar{\varphi}_\lambda(r_1)\, \bar{\varphi}_\mu(r_2)\, \frac{e^2}{|r_1 - r_2|}\, \varphi_\nu(r_1)\, \varphi_\rho(r_2)$,  (A2)

where $\varphi_\lambda(r_1)$ is the wave function of the spin-orbital λ.
Assuming spherical symmetry, these integrals can be calculated using standard methods of atomic physics 52 in terms of three independent parameters Fj, j = 0, 2, 4, which are related to the decomposition of the Coulomb interaction e²/|r₁ − r₂| in spherical harmonics of degree j. To remove uncomfortable denominators, the three free parameters are defined as F0 = R⁰, F2 = R²/49 and F4 = R⁴/441, where

$R^k = e^2 \int_0^\infty \int_0^\infty \frac{r_<^{\,k}}{r_>^{\,k+1}}\, R^2(r_1)\, R^2(r_2)\, r_1^2\, r_2^2\, dr_1\, dr_2$,  (A3)

R(r) is the radial part of the wave function of the d orbitals, and r< (r>) is the smaller (larger) of r₁ and r₂. The angular integrals are given in terms of tabulated coefficients. 17,52 Screening reduces F0 significantly, but not F2 and F4. The final result can be written in the form of Eq. (A4). 17 To express it in a more compact form, we introduce different sums which run over a limited set of orbitals as follows. The sums over α run over the five d orbitals, those over β, γ run only over the t2g orbitals xy, yz, zx, and those over χ (ζ) run over the pair of orbitals x²−y², xy (zx, zy).
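Since the text notes that the relevant Hilbert space is small enough to diagonalize with the Lanczos method, here is a generic sketch of that step using SciPy's Lanczos-based eigensolver. A random sparse symmetric matrix stands in for the actual CoO6 cluster Hamiltonian, which is not reproduced here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh  # Lanczos-type solver for Hermitian matrices

rng = np.random.default_rng(0)

# Stand-in Hamiltonian: a random sparse symmetric matrix playing the role
# of H restricted to one symmetry sector of the 5-hole Hilbert space.
n = 2000
a = sp.random(n, n, density=1e-3, random_state=rng, format="csr")
h = (a + a.T) * 0.5

# Lowest few eigenvalues; the lowest state in each irreducible
# representation is what enters the trigonal splitting of Eq. (3).
evals = eigsh(h, k=4, which="SA", return_eigenvectors=False)
print(np.sort(evals))
```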
2013-08-23T15:52:27.000Z
2013-08-19T00:00:00.000
{ "year": 2013, "sha1": "a015e4bbb37274a95e31067b5a31927fb0dd4bf2", "oa_license": "CCBYNCSA", "oa_url": "https://ri.conicet.gov.ar/bitstream/11336/3939/1/splitco.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a015e4bbb37274a95e31067b5a31927fb0dd4bf2", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Physics" ] }
259616173
pes2o/s2orc
v3-fos-license
IPMA'S ANALYSIS ON FACTORS AFFECTING INDRIVE INDONESIA'S CUSTOMER LOYALTY Purpose: The purpose of this study was to identify the factors that influence customer loyalty among inDrive application users in the city of Bandung, Indonesia, using IPMA analysis in SmartPLS. Theoretical framework: This research develops the theoretical aspects of the online transportation industry in Indonesia, with app design and trust as moderator variables between e-service quality and customer satisfaction; price, value for money and perceived quality affect customer satisfaction, and customer satisfaction in turn affects customer loyalty. Design/methodology/approach: This study uses a quantitative method with data from a survey distributed as an online questionnaire to 160 inDrive application users. The data analysis techniques are SEM-PLS and IPMA analysis using SmartPLS software. Findings: Customer satisfaction is the variable that most influences the customer loyalty of inDrive application users in Bandung based on the IPMA analysis, because it has the highest performance value compared to the other variables. Research, Practical & Social implications: InDrive management must pay attention to the customer satisfaction of each inDrive application user in the city of Bandung in order to achieve and maintain the consistency and sustainability of the inDrive company in Bandung and to maintain customer loyalty. Originality/value: This is the first study conducted to analyze the factors that influence customer loyalty among inDrive application users in Bandung City, Indonesia, so it can serve as an additional reference for academic knowledge and managerial practice. Doi: https://doi.org/10.26668/businessreview/2023.v8i6.2320 INTRODUCTION Indonesia is showing progress towards digitization, as evidenced by the increase in total internet users in Indonesia, reaching 210 million people in 2022 with a penetration rate of 77.02%; the rate amounted to 73.70% in 2019 and reached 77.02% in 2021 (APJII, 2022). Technological advances reflect an increase in consumer welfare, which has changed Indonesian consumers in the digital era; one change is the convenience consumers feel when downloading a variety of applications that make daily activities easier (Kamal et al., 2023). The online transportation industry in Indonesia is showing significant development because it helps consumers travel from one point to another with the convenience of ordering through an application. This development extends especially to multifunctional online transportation applications, which pose a challenge for every service provider to maintain customer loyalty in the digital era and survive among competitors (Wu et al., 2021). The mobility service industry in Indonesia is a staple because it supports consumer mobility from a starting point to a final destination and has an impact on the global economy (Labee et al., 2022).
The digital era has brought technological change to the online transportation industry, namely ride-sharing services: they make it convenient to hire drivers, reduce congestion and are flexible (Shah & Kubota, 2022), and drivers and users are integrated in real time in one system (Shibayama & Emberger, 2020). Because of these various benefits, competition in the digital ride-hailing market has become intense, so more and more similar services emerge and compete to offer the best service (Chen et al., 2022). The online transportation market in Indonesia is the market with the largest value compared to other ASEAN countries, reaching $8 billion by 2022. Digital-based service is a company's ability to deliver services using various supporting electronic platforms, which is increasingly important in the digital era because of the benefits consumers feel, namely the convenience of accessing each service anytime and from anywhere without being bound by time and place (Alsuwaidi & Sultan, 2023). Online transportation services have advantages over conventional transportation due to affordable rates, transparency and accessibility (Alamsyah & Rachmadiansyah, 2018). The development of the ride-sharing and car-sharing industry, more familiarly known as online transportation in Indonesia, has received attention from the Indonesian government in the form of adjustments to service rates that apply throughout Indonesia. This increase in rates is due to adjustments for the increase in the minimum wage, driver insurance, fuel increases and adjustments for each zone (Ministry of Transportation, 2022). As a result, many consumers are switching to public transportation, driving less, switching to conventional transportation, moving to other online transportation service providers or even preferring to walk (Statista, 2023). This shows that affordable service rates can satisfy consumers (Ahmed et al., 2022), making them loyal and willing to continue using online transportation services. InDrive is one of the big brands of online transportation service providers whose existence is threatened by this shift of Indonesian consumers in response to service-rate adjustments. Based on Statista's research, inDrive is also the brand with the smallest user base: only 4.9% of consumers in the Statista survey use inDrive, while 82.6% use Gojek, 57.3% use Grab, and 19.6% use Maxim. These results come from a survey in Jabodetabek; consumers tend to use more than one application, and Gojek is the top choice for security reasons (Huda, 2022). Based on a comparison of ratings among Gojek, Grab, Maxim and inDrive on the Google Play Store, inDrive gets a rating of 4.5, which lags behind its three competitors. InDrive's low rating is due to the many negative reviews posted by consumers on the inDrive application. Previous research by Hendayani & Dharmawan (2020), conducted at one of the Indonesian logistics companies, JNE, stated that comments and reviews from consumers on the internet and social media can be used as a reference for improvement by service companies because they come from the voice of consumers, and responding to them shows a company's attention and commitment to improving service quality. Nevertheless, the inDrive app is reported to be the fastest-growing app in the world.
There was an increase in the number of users downloading the inDrive application, from 42.6 million downloads in 2021 to 61.8 million downloads worldwide in 2022 (InDrive, 2023). InDrive gives users the flexibility to enjoy a satisfying trip at reasonable service rates, because the fare can be negotiated, and it prioritizes security in transit (Febrinastri, 2022). InDrive is the new name after rebranding from inDriver (independent driver); the aim of the rebranding was to renew the concept and business strategy of inDrive, which promotes fairness and transparency in determining service rates, and to support challenging injustice by launching a service-rate bargaining feature (Kompas, 2022). InDrive uses a business strategy different from its competitors: it does not use a money-burning strategy of giving lots of bonuses (Kompas, 2022), because it puts forward a strategy of transparency and negotiable service rates. This study aims to analyze the factors that affect customer loyalty among inDrive application users, because customer loyalty is an important issue that affects the sustainability and profitability of a company (Larsson & Broström, 2020), as well as its competitive advantage (Ahmad et al., 2021) and the growth and sustainability of an organization (Alatyat et al., 2023). Furthermore, no research has yet focused on examining the factors that influence the customer loyalty of inDrive users in Indonesia, so this research will contribute, from a theoretical point of view, new references to the field of marketing management for inDrive as an online electronic transportation service provider. This study uses SEM-PLS, analyzed further with the importance-performance matrix analysis (IPMA), in order to provide more specific and targeted managerial advice for inDrive on customer loyalty, based on the performance and importance values obtained through the IPMA analysis. E-Service Quality E-service quality is the extent to which an e-retailer can provide what the prospective customer wants, by using the website effectively to make the transaction successful (Venkatakrishnan et al., 2023). E-service quality is a conceptual model of service quality in an e-commerce industry (Gama & Astiti, 2020). Price Price is something that is stated to have value in the form of a unit of currency, useful for conducting transactions or exchanges between sellers and buyers at a given monetary rate in order to obtain goods and services (Satriadi et al., 2021). Another opinion suggests that price is an element of the marketing mix that is able to measure customer needs and the quality of the goods and services offered (Sari et al., 2021). Perceived Quality Perceived quality is the initial impression a user forms of products and services regarding the quality of the products and services used, or what can be referred to as the actual moment of interaction between products and services, including the consumer's assessment of the overall superiority of the product or service (Kenyon & Sen, 2015).
Other experts suggest that perceived quality is the result of measurements carried out indirectly, because consumers may not understand, or may lack information about, the product in question (Firmansyah, 2019). Value for Money Value for money provides benefits to a business in the long run, because it creates customer satisfaction commensurate with the money spent (Haverila & Twyford, 2021). Another opinion, from Rajaguru (2016) in Aruan & Kusumawardani (2019), states that value for money is an antecedent of consumer satisfaction, based on the consumer's experience of using products and/or services: the value obtained relative to the sacrifices made, in other words whether the money spent by consumers is commensurate with the services received. App Design A mobile app is a software application designed by a technology company for a service provider, designed in such a way that the resulting application enhances the ability to meet consumer needs (Yang, 2013, in Baran & Barutçu, 2022); mobile applications are a form of company adaptation for handling and communicating with consumers, given the many features available that make it easier for consumers to shop. The design of the applications presented by service providers is a significant factor in influencing consumers to shop and choose an application, so a reliable designer is needed to create an application design that makes it easy for users to operate the application (Baran & Barutçu, 2022). Trust Trust is a belief in a promise of someone who can be trusted, such that the person fulfills his obligations in a relationship (Giantari, 2021). Purchases are made when the prospective customer already has trust, because trust is important in fostering purchase interest (Sawlani, 2021). Trust is the basis of conducting business activities (Giantari, 2021). Trust in a company or brand can be based on the customer's own experience (Bae & Kim, 2022). Customer Satisfaction Customer satisfaction is the most important thing because it has a significant influence on the development of a business (Adhari, 2021). Customer satisfaction is a standard of performance for a business, so customer satisfaction can be used as a reference or as feedback to continue to meet customer expectations (Grigoroudis, 2019). Other experts suggest that customer satisfaction is a feeling of satisfaction or disappointment from consumers after enjoying goods and services, arising from the process of comparing expectations with the reality obtained, so that high pleasure forms an emotional attachment that binds consumers to certain brands (Candiwan & Wibisono, 2021). Customer Loyalty Customer loyalty is a condition achieved and felt by a consumer who is used to buying products and frequently interacting with a company over a certain period of time, such that the experience of using the product creates loyalty to the offers made by the company (Rifa'i, 2019).
Success in a business can be seen from a comparison of strategies for market share and customer loyalty: a market-share strategy can be evaluated by taking existing competitors into account, while loyalty can be seen from customer retention, i.e., repeat purchases at the company (Griffin, 2019). Hypothesis Service quality is an important component in driving customer satisfaction (Lien et al., 2017), because from the quality of the service that the company delivers, consumers gain a customer experience with each transaction, which affects whether they are willing to continue purchasing in the future (Dehghanpouri et al., 2020); a company must therefore maintain service quality, because it has an impact on company performance, purchase intention and customer satisfaction (Jaiyeoba et al., 2018). Business industries that offer their products online must maintain the accuracy of delivering products and services in a serious and timely manner, so as to help companies maintain and improve services in order to increase customer satisfaction (Abdirad & Krishnan, 2022). Therefore, the first hypothesis in this study is: H1: e-service quality has a positive effect on customer satisfaction among users of the inDrive application in Indonesia. Quality of service is a key factor affecting loyalty, which consists of various dimensions, namely word of mouth and intention to continue, so that the experience of enjoying the service becomes a supporting factor in forming loyalty (Zhao & Bacao, 2020) and in maintaining loyalty and meeting the demands of consumers (Su et al., 2022). For business industries that deliver services through digital media, previous research finds that website quality and application quality are important factors in increasing consumers' willingness to return to these services and to recommend them to others; this shows that service quality in electronic media has a positive effect on customer loyalty (Gao & Li, 2019). Research in the service industry likewise reveals that the service quality perceived by consumers has a significant effect on consumer loyalty (Yadav & Rai, 2019). Therefore, the second hypothesis in this study is: H2: e-service quality has a positive effect on customer loyalty among inDrive application users in Indonesia. Pricing by product providers is one of the centers of consumer attention; higher prices or tariffs make consumers tend to switch to competitors because the consumers form a poor price perception (Rama, 2017), so businesses must make pricing decisions carefully, because price is important information that consumers look for before buying a product (Rama, 2020). An appropriate price creates consumer satisfaction, so the set price must be fair and able to compete in the market. Research from Dinesh & Raju (2022), based on a statistical analysis of 422 online customers from India, found that price perceptions had a positive and significant effect on customer satisfaction.
Findings from other research also show that consumer price perception is an important indicator influencing customer satisfaction and repurchase intention (Yasri et al., 2020). Therefore, the third hypothesis in this study is: H3: price has a positive effect on customer satisfaction among inDrive application users in Indonesia. Perceived consumer satisfaction is an important aspect that companies must maintain and improve by comparing product or service performance with consumer expectations (Oliveira et al., 2023): products delivered beyond expectations shape consumer perceptions of satisfaction (Alarifi & Husain, 2023), while service performance below expectations makes consumers feel dissatisfied with the products and services received (Abdirad & Krishnan, 2022). In research on service stages, the quality perceived by consumers in the preparation and departure stages has a greater influence on customer loyalty than the delivery stage (Xie & Sun, 2021). Therefore, the fourth hypothesis in this study is: H4: perceived quality has a positive effect on customer satisfaction among users of the inDrive application in Indonesia. A consumer wants the performance of a purchased product or service to be in line with expectations, which creates consumer satisfaction; this is synonymous with the costs incurred being commensurate with the benefits received (Haverila et al., 2023). Research from Lierop et al. (2018), in the form of a literature review, reveals that value for money is a driving factor for achieving customer satisfaction in public transport. Other research on the application-based ride-hailing industry, based on online questionnaires distributed to 400 respondents in Bangladesh, reveals that value for money has a positive and significant effect on customer satisfaction (Ahmed et al., 2021). Therefore, the fifth hypothesis of this study is: H5: value for money has a positive effect on customer satisfaction among users of the inDrive application in Indonesia. The sustainability and long-term success of a company are driven by two main factors, namely customer satisfaction and customer loyalty (Agarwal & Dhingra, 2023), because the foundation that strengthens consumer loyalty comes from consumers who are satisfied using products and services, which in turn affects commercial profits (Guimaraes & Paranjape, 2014); companies must therefore pay attention to the relationship between satisfaction and loyalty, because it is a very intuitive one (Al-dweeri et al., 2017). Previous research comparing services between traditional banks and financial technology revealed that the effect of customer satisfaction on customer loyalty is stronger in traditional banks than in financial technology (Mainardes & Freitas, 2023).
Therefore, the sixth hypothesis of this study is: H6: customer satisfaction has a positive effect on customer loyalty among users of the inDrive application in Indonesia. The quality of service delivered by the company to each consumer influences consumer behavior in the future, especially the consumer's loyalty in reusing products or services, so that brand loyalty can rise and fall with the satisfaction felt from the service delivery received (Alzaydi, 2023). Customers who are happy and satisfied with a company's service show that the service is acceptable, so they will be more loyal (Shankar & Jebarajakirthy, 2019); if the service is not acceptable, consumer loyalty will decrease due to poor service quality (Alzaydi, 2023). Therefore, customer satisfaction can act as a mediator connecting service quality to customer loyalty (Fernandes & Solimun, 2018). Therefore, the seventh hypothesis of this study is: H7: customer satisfaction mediates the effect between e-service quality and customer loyalty among inDrive application users in Indonesia. Customers are the main priority for every business, and one way to serve them is through marketing activities (Haraisa, 2022); customer trust is important to maintain because it affects the continuity of the company's interpersonal relationships with customers. Trust is seen by companies as an important tool in every industry, because everything develops through consumer interaction with other people (Setiawan & Sayuti, 2017). Trust is also a process that is built over time, maintained, developed and tested periodically (Uzir et al., 2021), so trust is the main principle in fostering customer relationships that determine future transactions. Previous research in the tourism marketing industry revealed that trust influences and leads to customer satisfaction; the findings show that trust and employee satisfaction in the hotel sector are very important for organizational commitment (Yao et al., 2019). Therefore, the eighth hypothesis of this study is: H8: trust has a positive effect on customer satisfaction among inDrive application users in Indonesia. Previous research on marketing strategies that are moving to digital marketing argues that the website is an applicable example of a pull strategy, while the use of mobile applications serves as a push strategy (Kim et al., 2016). The advantage of using an application is that it is more flexible than a website, which must always be logged into first; consumers find it easier to access a service in an application, without time and place restrictions, using a smartphone while traveling, which is a form of adaptation to uncertain consumer situations (Dwivedi et al., 2021). Therefore, a website or application is needed that not only provides quality but also provides clear and complete information about a company's products or services, because according to previous research, motivating consumers toward purchase behavior or the use of a product or service requires a user-friendly application design (Laureti et al., 2018).
Therefore, the ninth hypothesis of this study is: H9: app design moderates the effect between e-service quality and customer satisfaction among inDrive application users in Indonesia. The designers of an application are the main link between service providers and users, so an appropriate design is needed to create a good image (Birkmeyer et al., 2021) and a good first impression of the appearance of an application's design. The design of a website and the application displayed must satisfy the aesthetics of online customers in order to increase customer visit intention, loyalty, trust and exploration (Nia & Shokouhyar, 2020). Therefore, the tenth hypothesis of this study is: H10: trust moderates the influence of app design (as a second moderator), which in turn moderates the effect of e-service quality on customer satisfaction among inDrive application users in Indonesia. Referring to the results of previous research and the hypotheses that have been developed, a research model was created, as depicted in the research model figure. METHODOLOGY This research is quantitative research with a descriptive approach, with the aim of describing the factors behind customer loyalty among inDrive application users. The population of this study is all users of the inDrive application, whose number is unknown, so the sampling technique used is non-probability purposive sampling, i.e., respondents must meet specific criteria, namely: (1) they are users of online transportation services, (2) they have used the inDrive application service at least 3 times, and (3) they use the inDrive application at least once a week. The research sample size was obtained through calculations in the G*Power software, yielding 160 respondents who filled out online questionnaires distributed via Google Forms. The research design was cross-sectional, because the research was carried out in a single period, from the start of the research to its completion after all research questions were answered and conclusions were drawn from the statistical analysis. RESULTS AND DISCUSSION The importance-performance map analysis, also called the importance-performance matrix analysis (IPMA), is a test that can be performed in PLS-SEM on path-coefficient estimates, in an analysis that takes into account the average latent variable scores (Hair et al., 2017).
The purpose of conducting an IPMA analysis is to assist management in identifying which variables have relatively high importance for the target construct, i.e., which variables show a strong total effect in the PLS-SEM results but have low performance, so that they become the basis for improvements with a high priority of attention. IPMA provides researchers with insight into the importance of the latent variables, thus giving priority directions for managerial action when determining suggestions for improving variables that have a high level of importance but relatively low performance (Garson, 2016). IPMA's distribution of importance and performance is divided into four quadrants as follows: 1. Quadrant I has high performance and high importance, summarized as "keep up the good work". 2. Quadrant II has low performance and high importance, summarized as "concentrate here". 3. Quadrant III has low performance and low importance, summarized as "low priority". 4. Quadrant IV has high performance and low importance, summarized as "possible overkill". The IPMA analysis of this research, differentiated by quadrant, is presented in the corresponding figure, and Table 1 explains the performance and importance results of each variable in this study. Based on the results table, customer satisfaction is the variable that most influences customer loyalty for inDrive application users in Indonesia, because it has the highest performance value compared to the other variables. Therefore, inDrive management should pay attention to the customer satisfaction of every inDrive application user in Indonesia in order to achieve and maintain the consistency and sustainability of the inDrive company in Indonesia and to maintain customer loyalty. In this study, there are several factors that affect the customer satisfaction of inDrive users in Indonesia, namely e-service quality, price, value for money, perceived quality, trust, and app design (Ahmed et al., 2021; Venkatakrishnan et al., 2023). In other words, to achieve and maintain inDrive customer satisfaction, inDrive Indonesia's management must also pay attention to these six factors that influence customer satisfaction. Achieving customer satisfaction will have a direct impact on customer loyalty; this is supported by previous research stating that the foundation that strengthens customer loyalty comes from consumers who are satisfied using products and services, which has an impact on commercial profits (Guimaraes & Paranjape, 2014). CONCLUSION In line with the research objectives set out in the introduction, this study analyzes which factors have the most influence on the customer loyalty of inDrive users in Indonesia. The results of the IPMA data processing in SmartPLS show that customer satisfaction is the variable that most influences customer loyalty for inDrive application users in Indonesia, because it has the highest performance value compared to the other variables. Therefore, inDrive management should pay attention to the customer satisfaction of every inDrive application user in Indonesia in order to achieve and maintain the consistency and sustainability of the inDrive company in Indonesia and to maintain customer loyalty.
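As a sketch of the quadrant assignment used in the quadrant-by-quadrant recommendations that follow: given each construct's importance (total effect on the target) and performance from a PLS-SEM run, the map can be split at the mean of each axis (one common convention). The scores here are invented placeholders, not this study's results.

```python
# Classify constructs into the four IPMA quadrants described in the text.
# Importance/performance numbers are illustrative placeholders only.
scores = {
    # construct: (importance = total effect on loyalty, performance 0-100)
    "e-service quality":     (0.20, 70.0),
    "price":                 (0.40, 50.0),
    "value for money":       (0.42, 52.0),
    "customer satisfaction": (0.55, 80.0),
}

mean_imp = sum(i for i, _ in scores.values()) / len(scores)
mean_perf = sum(p for _, p in scores.values()) / len(scores)

LABELS = {
    (True, True):   "Quadrant I   - keep up the good work",
    (True, False):  "Quadrant II  - concentrate here",
    (False, False): "Quadrant III - low priority",
    (False, True):  "Quadrant IV  - possible overkill",
}

for name, (imp, perf) in scores.items():
    quadrant = LABELS[(imp >= mean_imp, perf >= mean_perf)]
    print(f"{name:22s} -> {quadrant}")
```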
The price factor should be improved, because it is in quadrant II: refining the tariff-setting system for the services offered to consumers, applying the principle of price fairness, and being more affordable than competitors providing similar online transportation services would increase customer loyalty. Perceived quality is in quadrant II, so every form of service from the driver, from pickup to delivery at the end point of the consumer's destination, must maximize travel safety, travel security and driver helpfulness. Value for money is in quadrant II, so all forms of service and everything conveyed to consumers, both through the application and through the drivers' service, should deliver benefits commensurate with the money spent by consumers, in order to increase inDrive customer loyalty. The trust factor should be increased, because it is in quadrant II: improving inDrive services so that they are reliable, honest and trustworthy, and guaranteeing the security of all transactions carried out under inDrive's supervision, would increase inDrive customer loyalty. App design is in quadrant II, so it needs more attention. It is recommended that inDrive managers improve the appearance of the inDrive application design to make it more user-friendly, make the appearance more attractive and innovative, increase the use of technology in the application, and improve the navigation display so that it provides more structured information, in order to increase inDrive customer loyalty. Customer satisfaction is in quadrant I, with good performance and importance, so inDrive management must maintain the level of customer satisfaction of inDrive users by ensuring satisfaction with the quality of services provided, providing a good service experience, and being aggressive and innovative in making changes to successfully meet online transportation needs through the inDrive application, thereby increasing inDrive customer loyalty. This research has limitations. First, the sample of this study only came from respondents using the inDrive application in Indonesia. Second, the sample size is 160 inDrive-user respondents from Indonesia. Third, the data obtained in this study come from October 2022 to March 2023. Furthermore, the sample in this study only reflects the perspective of application users; studies conducted on objects outside the online transportation industry may obtain different results. It is recommended that future studies take research samples from service providers in order to understand their perceptions of passenger satisfaction and loyalty. Future research could also explore the results using a mixed method, i.e., a combination of quantitative and qualitative methods, to produce more robust findings.
2023-07-11T18:39:20.837Z
2023-06-05T00:00:00.000
{ "year": 2023, "sha1": "97e2c751d4b4d51614bcae0a6e29c64e45ba3bdd", "oa_license": "CCBYNC", "oa_url": "https://openaccessojs.com/JBReview/article/download/2320/902", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cb5709164d0d5f06b1fa94b9a68f528756f9195a", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
229012677
pes2o/s2orc
v3-fos-license
Low friction and high load-carrying capacity of a slider bearing with an inhomogeneous surface affinity In fluid lubrication systems, lower friction means less energy consumption, whereas higher film thickness means higher load-carrying capacity and a lower probability of wear. Traditionally, the friction coefficient increases with oil film thickness, which cannot meet the design requirements of modern equipment, such as MEMS. In this study, we attempt to tackle this challenge by introducing an inhomogeneous surface affinity on the static surface of a slider bearing. A model combining the limiting shear stress and slip length is adopted to analyse the effect of boundary slip on the hydrodynamic performance of a slider bearing. The model is first verified by comparing the calculated results with experimental data, and then a parameter study is conducted. Results indicate that lower friction and higher film thickness can be realised simultaneously by a specific design of the inhomogeneous surface affinity. 1. Introduction The boundary condition of fluid flow is one of the most important factors determining the behaviour of hydrodynamics. In 1738, Bernoulli [1] was the first to propose the no-slip boundary condition of fluid flow, which was confirmed by a large number of experiments. Almost all classical mechanics textbooks adopt this hypothesis, which is widely used in the analysis of fluid flow in engineering. However, with the development of micro/nanotechnology, experimental studies [2][3][4][5][6][7] have shown that boundary slip at the solid/liquid interface exists in some cases. The boundary slip is small and has little effect on the macroscopic flow characteristics of the fluid. However, for micro/nanoelectromechanical systems (MEMS), the effect of boundary slip cannot be ignored. The theoretical analysis of boundary slip mainly focuses on the development of mathematical models [8][9][10][11][12][13]. Three main slip models currently exist, namely, the linear slip length model [8], the critical shear stress model [9] and a complex model combining slip length and critical shear stress [10,11]. The slip length model is widely used in physics for its simplicity. However, the critical shear stress model is more popular in engineering. In this model, slip occurs only when the shear stress at the solid/liquid interface reaches a critical value. On the basis of experimental results, Spikes and Granick [10] proposed a complex model by combining the two previous models. In this new model, the slip behaviour is governed by two parameters, namely, the critical shear stress and the slip length. Boundary slip occurs when the shear stress at the interface exceeds a critical value; under slip conditions the shear stress is proportional to the liquid viscosity and slip velocity and inversely proportional to the slip length. This model was later verified experimentally by Guo et al. [14]. Salant et al. [11] recently proposed a model similar to that of Spikes and Granick [10] but using a new parameter, the slip coefficient, instead of the slip length. In this study, the slip model developed by Spikes and Granick [10] was applied. The model was first verified by comparing the calculated results with the experimental data. Subsequently, a parameter study was conducted to show how to achieve lower friction and higher film thickness simultaneously by a specific design of inhomogeneous surface affinity.
2. Mathematical model
In the current slip model, a limiting shear stress τ is assumed to exist at the solid/liquid interface; slip occurs if the shear stress at the interface reaches the limiting shear stress, and the slip velocity is proportional to any additional shear stress according to the traditional slip length model [15]. In a region without slip, τ is infinite. As shown in Figure 1, Surface 1 is stationary, whereas Surface 2 moves in the X direction at speed U. The limiting shear stress of Surface 1 is assumed to be much smaller than that of Surface 2, so that slip only occurs on Surface 1. Therefore, the velocity boundary conditions in the X direction can be described as follows: at z = 0, u = U; at z = h, u = 0 while the interfacial shear stress remains below the limiting value τ, and once the limiting value is reached, slip occurs with the wall shear stress equal to τ + ηu_s/b, where η is the viscosity, u_s the slip velocity and b the slip length. Based on the conservation of mass, the modified Reynolds equation can then be obtained [6].

3.1. Verification of the model
To verify the slip model, the calculated results were compared with the published experimental data [14]. Figure 2 illustrates the measured glycerol solution film thickness under different speeds. In those experiments, a slider-on-disc test rig was used, and optical interferometry was applied to obtain the film thickness. The length and width of the applied slider are 4 and 9 mm, respectively. To generate boundary slip on the slider surface, an oleophobic coating (EGC) was applied over the original steel surface. Glycerol solutions were used as lubricants, and the properties of the applied lubricant are shown in Table 1. It can be seen that the contact angle is greater than 90 degrees and the contact angle hysteresis is quite small, which means the adhesion force between the glycerol solution and the EGC is weak and it is possible to generate boundary slip at the solid/liquid interface. By adjusting the values of the slip length b (50 µm) and τ (200 Pa), the theoretical values could be fitted to the experimental data exactly, which verified the new slip model.

3.2. Boundary slip on the whole slider surface (homogeneous surface affinity)
To examine the boundary slip effect on hydrodynamic lubrication, additional calculations were conducted over a relatively large range. Figures 3 and 4 display the change of film thickness and friction coefficient with speed and load, respectively. In the calculation, the whole slider surface is assumed to be homogeneously oleophobic, and boundary slip is allowed to occur on the whole surface. The properties of 90% glycerol listed in Section 3.1 were used here, as were the values of the slip length b and limiting shear stress τ fitted in Section 3.1. Compared with the no-slip condition, the friction coefficient decreases significantly over the whole speed and load range because of boundary slip, which is beneficial for energy saving. However, the traditional film thickness (no boundary slip) is much higher than that under the boundary slip condition, which may lead to wear of the bearing surfaces, especially under low-speed and high-load conditions. How to realise a higher film thickness and a lower friction coefficient by surface affinity design is discussed below. Figures 3 and 4 prove that the requirements of lower friction and higher load ability cannot be met with a homogeneous oleophobic surface. Therefore, partial slip was designed using an inhomogeneous surface, realised by manipulating the surface wettability of specific areas. Figure 5 shows the design of the inhomogeneous surface affinity of Surface 1.
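To make these film equations concrete, the following minimal sketch solves the 1D Reynolds problem for a linear slider numerically. It is a simplification, not the paper's method: it applies a plain Navier slip-length condition on the whole stationary surface (with no limiting-stress switching), and every parameter except the fitted slip length b is an assumed illustrative value.

```python
import numpy as np

eta = 0.2                  # lubricant viscosity, Pa*s (assumed, ~90% glycerol)
U = 0.1                    # sliding speed of Surface 2, m/s (assumed)
L = 4e-3                   # slider length, m (the paper's slider is 4 mm long)
h_in, h_out = 2e-6, 1e-6   # inlet/outlet film thickness, m (assumed wedge)
b_fit = 50e-6              # slip length fitted in Section 3.1, m

N = 4001
x = np.linspace(0.0, L, N)
h = h_in + (h_out - h_in) * x / L          # linear wedge profile

def trapz(y):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def slider(b):
    # One-wall Navier slip on the stationary surface gives the volume flux
    #   q = U*h*(h+2b)/(2*(h+b)) - (dp/dx) * h^3*(h+4b)/(12*eta*(h+b)),
    # constant along x; the condition p(0) = p(L) = 0 fixes q.
    C = U * h * (h + 2*b) / (2.0 * (h + b))        # Couette part of the flux
    w = 12.0 * eta * (h + b) / (h**3 * (h + 4*b))  # inverse Poiseuille conductance
    q = trapz(C * w) / trapz(w)
    dpdx = (C - q) * w
    p = np.concatenate(([0.0],
                        np.cumsum(0.5 * (dpdx[1:] + dpdx[:-1]) * np.diff(x))))
    tau = -(eta*U + dpdx * h * (h/2 + b)) / (h + b)  # shear on the moving surface
    return trapz(p), trapz(np.abs(tau))              # load, friction per unit width

for label, b in (("no slip", 0.0), ("slip, b = 50 um", b_fit)):
    W, F = slider(b)
    print(f"{label:16s}: load = {W:10.2f} N/m, friction = {F:7.3f} N/m, mu = {F/W:.5f}")
```

With these assumptions the slip case shows the qualitative trend reported in Section 3.2: a markedly lower friction coefficient, paid for by reduced hydrodynamic pressure and hence film thickness.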
Different patterns were adopted, with the specific design parameters given in Figure 5; both patterns show advantages in terms of film thickness and friction coefficient. The film thickness increases appreciably and is much higher than in the no-slip case, while the corresponding friction coefficients are much lower than in the traditional no-slip case, thereby satisfying the requirements of modern bearing design and benefiting the environment. Specifically, Pattern (a) performs better than Pattern (b) because of its higher film thickness and lower friction coefficient. Figures 8 and 9 illustrate the normalised 3D pressure distribution (P = p h0^2/(ηUL), with h0 the reference film thickness; X = x/L, Y = y/B) of the conventional slider bearing and the slider bearing with Pattern (a). The working conditions are identical to those of Figure 2. Evidently, the boundary slip behaviour affects the pressure distribution significantly: the maximum pressure of the slider with Pattern (a) is around two times higher than that of the conventional slider bearing, and it occurs at the boundary between the slip and no-slip areas.

4. Conclusion
The boundary slip behaviour was investigated in this study, and a recently developed complex slip model was adopted. The predicted results coincided with the experimental data, which verified the slip model. Parameter studies were conducted, and two main conclusions were obtained:
1) Film thickness and friction coefficient both decrease compared with traditional hydrodynamic lubrication if boundary slip occurs on the whole slider surface.
2) A higher film thickness and a lower friction coefficient can be achieved simultaneously through partial slip on a specific area of the slider surface.
Enrichment of beneficial cucumber rhizosphere microbes mediated by organic acid secretion

Resistant cultivars have played important roles in controlling Fusarium wilt disease, but the roles of rhizosphere interactions among cultivars with different levels of resistance are still unknown. Here, two phenotypes of cucumber, one resistant and one with increased susceptibility to Fusarium oxysporum f.sp. cucumerinum (Foc), were grown in soil and hydroponically, and 16S rRNA gene sequencing and nontargeted metabolomics techniques were used to investigate the rhizosphere microflora and root exudate profiles. Relatively high microbial community evenness was detected for the Foc-susceptible cultivar, and the relative abundances of Comamonadaceae and Xanthomonadaceae were higher for the Foc-susceptible cultivar than for the resistant one. FishTaco analysis revealed that specific functional traits, such as protein synthesis and secretion, bacterial chemotaxis, and small organic acid metabolism pathways, were significantly upregulated in the rhizobacterial community of the Foc-susceptible cultivar. A machine-learning approach in conjunction with FishTaco plus metabolic pathway analysis revealed that four organic acids (citric acid, pyruvic acid, succinic acid, and fumaric acid) were released at higher abundance by the Foc-susceptible cultivar than by the resistant cultivar, which may be responsible for the recruitment of Comamonadaceae, a potentially beneficial microbial group. Further validation demonstrated that Comamonadaceae can be "cultured" by these organic acids. Together, compared with the resistant cultivar, the susceptible cucumber tends to assemble beneficial microbes by secreting more organic acids.

Introduction
Fusarium wilt disease is a persistent and widespread soil-borne disease worldwide. For a long time, breeders have been developing resistant varieties that generally express high levels of disease resistance genes and/or produce active proteins to defend against the Fusarium oxysporum fungal pathogen 1. In recent years, increasing numbers of studies have shown that certain beneficial microorganisms can be recruited by plants to their rhizosphere to resist the invasion of pathogens 2. For example, Berendsen et al. 3 indicated that Arabidopsis thaliana could recruit three bacterial species in the rhizosphere upon foliar pathogen infection, and Kwak et al. 4 found that the tomato variety Hawaii 7996 could recruit members of the flavobacterium strain TRM1 to suppress disease. Natural disease-suppressive soils have even formed via increases in the abundance of beneficial microorganisms (e.g., Pseudomonas and Bacillus) under long-term monoculture 5,6. These beneficial microorganisms can produce secondary metabolites such as 2,4-diacetylphloroglucinol to antagonize pathogens, thus improving the disease-suppressive ability of the soil. Other studies have shown that root exudates such as citric acid, malic acid and fumaric acid can recruit beneficial rhizosphere microorganisms 7-10. Moreover, it was recently shown that specialized triterpenes from A. thaliana could recruit and maintain an A. thaliana-specific microbiota 11. Plant breeders have begun to consider the contribution of rhizosphere microbes in the development of resistant varieties 9, including regulating the output of root exudates by controlling ABC transporters 12 and attempting to transfer microorganisms to the next generation by implanting microbes into flowers 13.
Since the interactions occurring in the rhizosphere between plants and microorganisms are relatively complex and variable, breeding work that encompasses the rhizosphere microbiome is progressing slowly. Because root secretion patterns differ even within the same crop, the composition of the rhizosphere microbial community is variable. Interestingly, crops sensitive to pathogens tend to form disease-suppressive soils more easily than do resistant varieties 5. Further studies indicated that traditional susceptible cultivars tend to maintain stronger interactions between the plant and beneficial soil microorganisms than do modern resistant varieties 14-17. Karasov et al. 17 found that the diversity of Pseudomonas in wild A. thaliana leaves was abundant. Recently, a study of domesticated plants revealed a relatively limited microbiota assembly compared to that of their wild counterparts; furthermore, the genetic diversity of crop microbiota is likely reduced compared to that of wild plants 18. Plant domestication probably alters root exudate profiles and thus impacts rhizosphere microbial community structure and function 19. Whether rhizosphere microbial resistance can compensate for weak plant resistance is relatively unknown, and it is unclear which kinds of rhizospheric interactions occur during rhizosphere microbial resistance. Here, we grew two types of cucumber with contrasting phenotypes (resistant and highly susceptible to Fusarium wilt disease) in soil and hydroponically to provide controlled and in situ experimental data on these cultivars' exudates and their rhizobacterial communities. To identify the specific roles of exudate compounds in the recruitment of beneficial bacteria, we employed statistical analyses of 16S rRNA gene amplicon sequences and used nontargeted metabolomics. We aimed to address (i) whether disease-susceptible cultivars have advantages over resistant cultivars in terms of beneficial bacterial enrichment; (ii) whether these interactions are caused by changes in root exudates; and (iii) if so, which types of exudate compounds are responsible for the recruitment.

Recruitment of rhizosphere bacterial communities of two cucumber cultivars
Bacterial communities in the rhizosphere of the two cucumber cultivars were characterized by Illumina HiSeq sequencing. In total, 1,010,289 high-quality sequences were obtained, and each sample contained between 66,236 and 107,307 (84,191 ± 12,531) reads. All the sequences clustered into 8,023 operational taxonomic units (OTUs) at 97% similarity. The OTU numbers were 6,224 in the Foc-susceptible cucumber (FSC) and 6,830 in the Foc-resistant cucumber (FRC), and the most abundant phyla were Proteobacteria (64.4%), Cyanobacteria (12.2%), Bacteroidetes (9.2%), Acidobacteria (4.2%), Actinobacteria (3.5%), and Verrucomicrobia (1.9%). We rarefied (without replacement) 89,948 sequences for each sample to calculate the Shannon index, which is often used to assess the evenness and abundance of microbial communities. The rhizosphere soil of the Foc-susceptible cucumber exhibited higher bacterial community evenness than that of the Foc-resistant cucumber (Wilcoxon test, p < 0.05; Fig. 1a). Principal coordinate analysis (PCoA) with the Bray-Curtis distance showed a significant (p = 0.003, R = 0.89 in Adonis) difference in the rhizosphere communities between the Foc-resistant cucumber and the Foc-susceptible cucumber (Fig. 1b).
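To make the diversity and ordination steps concrete, here is a minimal Python sketch of the Shannon index, Bray-Curtis distance and classical PCoA on a hypothetical toy OTU table; the paper itself ran these steps in QIIME, and every number below is invented for illustration.

```python
import numpy as np

# Hypothetical toy OTU table: rows = samples, columns = OTUs (read counts).
counts = np.array([[30, 50, 20,  0],
                   [25, 55, 15,  5],
                   [70, 10, 10, 10],
                   [65, 15, 12,  8]], dtype=float)

def shannon(row):
    # Natural-log Shannon index (QIIME reports log base 2, a constant factor).
    p = row / row.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def bray_curtis(a, b):
    return np.abs(a - b).sum() / (a + b).sum()

n = len(counts)
D = np.array([[bray_curtis(counts[i], counts[j]) for j in range(n)]
              for i in range(n)])

# Classical PCoA: double-centre the squared distance matrix, then eigendecompose.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D**2) @ J
evals, evecs = np.linalg.eigh(B)
order = np.argsort(evals)[::-1]
coords = evecs[:, order] * np.sqrt(np.clip(evals[order], 0, None))

print("Shannon:", [round(shannon(r), 3) for r in counts])
print("PCoA axis 1:", np.round(coords[:, 0], 3))
```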
Comparative analysis of the rhizosphere microbiome between the Foc-resistant and Foc-susceptible cucumber cultivars revealed distinctly different abundances of specific groups at the family level (Fig. 1c), with a higher abundance of the families Comamonadaceae and Xanthomonadaceae in the rhizosphere of the Foc-susceptible cultivar than in that of the Foc-resistant cultivar (Fig. 1c and Supplementary Table 1). At a lower taxonomic level, we found that the abundances of genera belonging to Comamonadaceae, likely Methylibium, Hydrogenophaga, and Rubrivivax, and of some genera belonging to Xanthomonadaceae, such as Pseudoxanthomonas, Lysobacter, Stenotrophomonas, and Pseudomonas, were significantly increased in the rhizosphere soil of the Foc-susceptible cucumber compared to the Foc-resistant cucumber (Supplementary Table 2). We obtained 16 different isolates from the FSC and FRC treatments; these isolates belong to the Bacillaceae, Pseudomonadaceae, Comamonadaceae, Xanthomonadaceae, Enterobacteriaceae, Oxalobacteraceae, and Weeksellaceae. Through an inhibition assay against F. oxysporum, we found that five strains, from Comamonadaceae (strains G11 and FM2), Pseudomonas (strains M8 and G2), and Stenotrophomonas (strain G47), could reduce the growth of F. oxysporum in vitro (Fig. 1e). These five strains also yielded more single colonies from the FSC (Supplementary Table 3). Correspondingly, the abundance of genera belonging to Comamonadaceae and of the genera Stenotrophomonas and Pseudomonas significantly increased in the rhizosphere soil of the Foc-susceptible cucumber. This suggests that Foc-susceptible cucumber cultivars can recruit more beneficial microbes to resist F. oxysporum.

Root exudate profiles of the two cultivars
Root exudates of the Foc-resistant and Foc-susceptible cucumber cultivars were analyzed by gas chromatography-time of flight mass spectrometry (GC-TOF-MS) after collection from a sterile hydroponic system. A total of 708 chromatographic peaks were detected, and 236 compounds were identified across all the samples. The compounds were divided into several categories based on their chemical nature, namely, sugars (28 compounds), sugar alcohols (3), sugar acids (7), small-molecule organic acids (22), nucleotides (4), long-chain organic acids (14), esters (18), alcohols (16), amino acids and amides (28), and others (97; Fig. 2a). The overall exudation patterns of the Foc-susceptible cucumber plants were found to be distinct (p = 0.043, R = 0.320 in Adonis) from those of the Foc-resistant cultivar (Fig. 2b). The relative abundance of 157 of the 236 identified compounds differed significantly (p < 0.05) between the two cultivars: 79 compounds showed higher relative abundance in the FSC, and 78 in the FRC. To further explore the major compounds driving the difference in root exudate profiles between the two cultivars, extraction of the principal component analysis (PCA) loading matrix and a random forest classification approach were applied. The loading matrix of the PC1 axis, which explained 81.9% of the variance, was extracted to identify the top 10 compounds (the top 4% of the 236 compounds) most important for differentiating the exudate profiles of the two cultivars (Fig. 2c).
These compounds included pyruvic acid and lactic acid (short-chain carbon organic acids), two amino acids (N-carbamylglutamate and N-acetyl-beta-alanine), and six other compounds, namely (2R,3S)-2-hydroxy-3-isopropylbutanedioic acid, fructose-6-phosphate, urocanic acid, oxamide, ascorbate, and dihydroxyacetone (Supplementary Table 4). For the random forest classifier (R package randomForest, ntree = 1000), the first 35 of the 236 compounds (top 15%) sorted by variable importance were selected (Fig. 2d and Supplementary Table 5). Then, the top 10 important compounds selected from the PCA and the top 35 important compounds from the random forest classification were further evaluated for significant differences between the cultivars using a t-test with p < 0.05 and a log2(fold change) < 1.5. Finally, 34 compounds were recognized as playing the most important role in the separation of the two root exudate profiles (Supplementary Table 6).

Functional features of root exudates and functional shifts mediated by microbial alterations in the rhizosphere of FSC
Root exudates were significantly associated with rhizosphere microorganisms according to Mantel's test (R = 0.5966, p = 0.003, Supplementary Table 7). Pathway enrichment analysis was performed to further explore whether root exudates of the Foc-susceptible cultivar are involved in the recruitment of beneficial bacteria. The results revealed that small organic acid metabolism pathways (such as pyruvate metabolism and the citric acid cycle) and amino acid metabolic pathways (valine, leucine and isoleucine biosynthesis and glycine, serine and threonine metabolism) differed significantly (Wilcoxon test: p < 0.05) (Fig. 3b and Supplementary Table 8). Then, the 24 compounds involved in these differing pathways were checked against the 34 representative compounds causing the major differences in the root exudate profiles. Ten compounds were ultimately selected to represent the core difference in root exudate profiles and metabolic pathways (Supplementary Table 9). Among these 10 compounds, six were small organic acids, including four that mainly participate in pyruvate metabolism and the citric acid cycle (Fig. 3c). To determine whether these ten compounds or core differential pathways influenced the assembly of rhizosphere microbial communities and the variation between the two cultivars, we employed the FishTaco framework to predict the functional information within the microbial communities and to determine the causes of the shifts in rhizosphere microorganisms. Marked upregulation of both protein synthesis and secretion and of bacterial chemotaxis was observed (Wilcoxon test: p < 0.05) for the Foc-susceptible cultivar (FSC), which may be related to the rapid response of soil microorganisms to the roots and corresponded to the enrichment of OTUs from members of the Comamonadaceae and Oxalobacteraceae families (Supplementary Fig. 1). Many amino acids and small-molecule fatty acids were also more abundant in the rhizosphere of the FSC, including intermediates of the citric acid cycle. The results showed that enrichment of the citric acid cycle was mainly attributable to shifts in the relative abundance of the Comamonadaceae family, as this family contains many genes involved in the citric acid cycle (Fig. 3a and Supplementary Fig. 2). Moreover, the pathway enrichment analysis based on the root exudate data showed that the selected small-molecule organic acids among the 10 core root exudates were enriched in the root exudates of the Foc-susceptible cucumber (Supplementary Table 6) and are associated with the citric acid cycle. Therefore, we suspect that the four organic acids may promote recruitment of Comamonadaceae in the rhizosphere of the Foc-susceptible cultivar by enriching compounds of the TCA cycle.

Fig. 1 Analysis of rhizosphere bacterial communities between two cultivars. a The Shannon-Wiener index of the Foc-resistant cultivar (green) and Foc-susceptible cultivar (orange) rhizosphere bacterial communities, calculated with all clustered OTUs. The horizontal bars within the boxes represent the medians; the tops and bottoms of the boxes represent the 75th and 25th quartiles, respectively; all outliers are plotted as individual points. b Principal coordinate analysis (PCoA) with Bray-Curtis dissimilarity performed on the taxonomic profile (OTU level for the 16S rRNA dataset) of the rhizosphere of the two cucumber cultivars; the R-value and P-value were evaluated via the Adonis test. c Relative abundance (%) of members of the major bacterial phyla, excluding low-abundance OTUs (mean abundance <0.02%), present in the rhizosphere microbial communities of Foc-resistant cultivar (FRC) or Foc-susceptible cultivar (FSC) samples. d Relative abundance (%) of specific genera enriched in the rhizosphere soil of the FSC; every genus shown differed significantly (t-test: p < 0.05) between the two cultivars. e Inhibitory effects of the isolates on F. oxysporum: Comamonadaceae (G11, FM2), Pseudomonas (M8, G2), and Stenotrophomonas (G47).

Impacts of selected small-molecule organic acids on Comamonadaceae members
Four small-molecule organic acids that were significantly enriched in the exudates of the Foc-susceptible cultivar were mixed together and repeatedly added to condition the soil. Sequencing of the communities produced a total of 632,721 sequences, with 15,344-63,539 (35,151 ± 14,705) reads per sample. Higher evenness was observed for the soil of the control treatment compared to the SMOA treatment (Fig. 4a, Wilcoxon test: p = 0.031). Principal coordinate analysis (PCoA) showed a clear difference (Adonis: p = 0.003, R = 0.17) in community composition between the SMOA treatment and the control (Fig. 4b). Comparative analysis of the microbiome between the SMOA treatment and the control revealed distinctly different abundances of specific family groups (Supplementary Table 9), including a higher relative abundance of the family Comamonadaceae in the soil after the application of the four small-molecule organic acids (Fig. 4b). In total, we obtained 14 different isolates (Supplementary Table 11) from the SMOA-conditioned and control soils, belonging to the Bacillaceae, Pseudomonadaceae, Comamonadaceae, Xanthomonadaceae, Sphingomonadaceae, Burkholderiaceae, Alcaligenaceae, Oxalobacteraceae, and Rhizobiaceae. We found that Comamonadaceae strain G43 had a strong inhibitory effect on the growth of Foc (Fig. 4c). Interestingly, more single colonies similar to strain G43 were isolated from the SMOA-conditioned soils than from the control soils (Supplementary Table 11). Further alignment indicated that strain G43 was identical to Comamonadaceae strain G11.

Discussion
The rhizosphere microbiome is the first line of soil pathogen defense and plays a vital role in the prevention of pathogen invasion.
Among the reported mechanisms, the evenness and richness of the rhizosphere microbiome are central to providing stability and resilience against stress and invasion 2,20. High evenness was observed in the Foc-susceptible rhizosphere, which could be attributed to a relatively high variety of exudates supporting more microbial niches; if such niches are left unstimulated or uninhabited, as in the Foc-resistant rhizosphere, this could provide a window for successful invasion by other pathogens 21. In this study, the higher evenness found in the Foc-susceptible rhizosphere compared with the Foc-resistant rhizosphere may result from sufficient coevolution between the host plant and the soil microbiome; in other words, the soil microbial metabolic capacity for the resources from FRC roots has not developed 21. It has been shown that traditional cultivars tend to maintain relatively high evenness of rhizosphere microbiomes 17,22 and relatively strong interactions between plants and the environment 21. For instance, mycorrhizal associations have been shown to be stronger in older wheat cultivars than in modern ones 14. Similarly, Germida et al. 23 found that ancient landraces could recruit a more diverse rhizosphere bacterial community than could modern cultivars. We also found higher abundances of Comamonadaceae and Xanthomonadaceae in the Foc-susceptible cultivar rhizosphere community than in the Foc-resistant one; members of both families have been reported to be abundant in disease-suppressive soils 24-26. Genera falling within the Xanthomonadaceae, such as Pseudoxanthomonas, Lysobacter, and Stenotrophomonas, have been used as biocontrol agents against several pathogens, including Fusarium oxysporum 27-30, and it has been widely observed that fluorescent Pseudomonas species produce the antifungal compound 2,4-diacetylphloroglucinol (DAPG) to resist pathogens 5. All these beneficial genera were more abundant for the Foc-susceptible cultivar than for the Foc-resistant cultivar, and five strains from Comamonadaceae (G11 and FM2), Pseudomonas (M8 and G2), and Stenotrophomonas (G47) showed strong inhibitory effects on the growth of pathogens in vitro (Fig. 1e), which may contribute to resistance against other pathogens and the natural attenuation of Fusarium oxysporum.

Fig. 3 Prediction of the major pathways mediated by microbial communities and KEGG pathway enrichment analysis of root exudates between two cultivars. a To identify the shifts in rhizosphere communities caused by the potential beneficial bacteria enriched in the Foc-susceptible cultivar rhizosphere, deconvolution of significant community-wide functional shifts into individual taxonomic contributions was performed; the right bar plot represents the relative contributions of taxa of Foc-susceptible samples driving functional shifts, and the left bar plot the relative contributions of taxa reducing them. b Differing metabolic pathways of the root exudates of the two cucumber cultivars; each point represents a metabolic pathway, and the size of each point represents the degree of change in that pathway. c Heatmap analysis of the 10 compounds selected by PCA, random forest classification, and pathway enrichment analyses; these compounds differed significantly (t-test, p < 0.05) in relative abundance between the root exudates of the two cucumber cultivars.
Previous research has shown that crops sensitive to pathogens tend to form disease-suppressive soils more readily than do resistant varieties under continuous pathogen attack 5; this phenomenon was supported by our results, as more beneficial bacterial groups were recruited in the rhizosphere of the susceptible crop than in that of the resistant crop. However, some researchers have reported that, compared with susceptible varieties, resistant varieties can recruit more beneficial microorganisms 31. Recently, Mendes et al. 32 investigated the composition of the rhizosphere bacterial community of common bean cultivars with different levels of resistance to the fungal pathogen F. oxysporum and found that beneficial bacteria, such as Bacillaceae, Pseudomonadaceae, Solibacteraceae, and Cytophagaceae, were abundant in the rhizobacterial community of the resistant cultivar. Our study, in contrast, showed that the FSC cultivar was more enriched in beneficial rhizosphere microbes. This discrepancy is probably due to the different crops and cultivars used and to the mechanisms by which resistance breeding influences plant physiology; more work needs to be done with other crop species. Root exudates are important for regulating and controlling the composition and function of rhizosphere microorganisms. Previous research has shown that plant species, and even different genotypes of the same species, may vary in terms of their rhizosphere microbiome composition and root exudates 19. In our experiment, the root exudate profiles differed between the FSC and FRC. Interestingly, four small-molecule organic acids (citric acid, fumaric acid, succinic acid, and pyruvic acid) were observed to be the main driving force for the separation of the two exudate patterns and were produced in higher quantities by the FSC than by the FRC. Such small-molecule organic acids reportedly affect specific beneficial bacterial groups, such as crop growth-promoting rhizobacteria. For example, malic acid and citric acid can chemotactically attract Pseudomonas fluorescens WCS365, which competently colonizes tomato roots, and infection of Arabidopsis with Pseudomonas syringae pv. could induce root secretion of malic acid and thus promote colonization and biofilm formation on the root surface by strain FB17 7,33. Similarly, watermelon roots secrete citric acid and malic acid to induce root colonization by the plant growth-promoting rhizobacterial strain Paenibacillus polymyxa SQR-21 34. In our experiment, the four small-molecule organic acids added to the soil enriched the relative abundance of Comamonadaceae, an important beneficial bacterial group, and Comamonadaceae strain G43, whose sequence is identical to that of G11, showed a strong inhibitory effect on the growth of Foc. These results show that the FSC enriches more beneficial bacteria by regulating its root exudates. Indeed, plants employ complex defense strategies to protect themselves from infection by pathogens. Some physical structures and chemical components of plants have antidisease effects, such as cell wall cutin, wax deposition, lignin, special pores, water holes, and lenticels, as well as the induction of various pathogenesis-related (PR) proteins, such as chitinase and glucanase 35. Additionally, plants mount induced defense responses, mainly including the release of various reactive oxygen species, the expression of defense genes and the development of the hypersensitive response (HR) 36,37.
In recent years, a consensus has gradually been reached that rhizosphere microorganisms help plants resist pathogens by secreting antimicrobial substances, forming biofilms, and competing for space and nutrients by occupying pathogen niches 38. Direct and indirect (via microbial associations) plant pathogen defense strategies coexist. We divide the concept of plant responses to soil-borne pathogens into two aspects: (i) plant resistance, in which plants act as the executors by improving physical structures and secreting various new chemical components; and (ii) rhizosphere resistance, in which rhizosphere microorganisms recruited by plants act as the executors to confront pathogens by perceiving them, activating the plant immune system, and secreting various effective antimicrobial chemicals (Fig. 5). In this study, two cucumber cultivars displaying different Fusarium wilt resistance were used to assess the level of these two different strategies. The fundamental data indicated that the FRC showed stronger disease resistance than did the FSC (<15% vs >60% disease incidence), especially plant resistance (Supplementary Table 12), while stronger rhizosphere resistance was found in the FSC than in the FRC. In recent years, breeders have been focusing on the role of microorganisms in plant disease resistance. Owing to the complexity of microbial community-plant interactions and the uncertain influence of root exudates on rhizosphere microorganisms, strategy (ii) has received less attention from plant breeders than strategy (i). However, improved tools and reduced costs associated with microbiome analyses can add new standards to plant breeding programs. For example, plant phenotypes that target beneficial rhizosphere microbiomes can be evaluated by profiling root exudates or by using community compositional profiles (i.e., diversity and evenness metrics) as criteria to evaluate a cultivar.

Conclusion
In this experiment, using high-throughput sequencing, we characterized the rhizosphere microbial communities of two cucumber cultivars with different resistance to Fusarium wilt. The evenness of the rhizosphere microbial community of the Foc-susceptible cucumber cultivar was higher than that of the Foc-resistant cultivar; specifically, the relative abundances of Comamonadaceae and Xanthomonadaceae were greater in the Foc-susceptible cultivar than in the Foc-resistant one. At a lower taxonomic level, higher abundances of the genera Pseudoxanthomonas, Lysobacter, Stenotrophomonas, Pseudomonas, Methylibium, Hydrogenophaga, and Rubrivivax were found in the rhizosphere soil of the Foc-susceptible cultivar than in that of the resistant cultivar. As an important medium of the rhizosphere microbial community-plant interaction, root exudation patterns also differed between the Foc-resistant and Foc-susceptible cucumber cultivars. The small-molecule organic acids pyruvic acid, citric acid, succinic acid, and fumaric acid in the root exudates of the susceptible cultivar could recruit Comamonadaceae, which has been shown to be abundant in disease-suppressive soils in many studies. Taken together, our results indicate that the susceptible cucumber cultivar can enrich beneficial microbes (rhizosphere resistance) to compensate for the weakness of its "plant resistance" to pathogens; this overall process may be driven by regulation of the root exudate profile.
Materials and Methods
Soil sampling, plant material, and rhizosphere sampling in the pot experiment
A pot experiment was conducted to evaluate the rhizosphere bacterial communities of two cucumber cultivars with different resistance levels to Fusarium wilt disease. The soil was collected from Baimao town, Changshu city, China (31°35′36.19″N, 120°54′54.93″E, 300 km from Nanjing), with no history of cucumber cultivation. The topsoil (20 cm) was air dried, sieved (2-mm sieve) to remove plant debris and rocks, homogenized and stored in plastic bags at room temperature. The seeds of the two cucumber cultivars, the Foc-resistant cucumber (FRC) Lifeng (disease incidence <15%) and the Foc-susceptible cucumber (FSC) B80 (disease incidence >60%), were provided by the Vegetable Research Institute, Guangdong Academy of Agricultural Sciences, China. Before planting, the cucumber seeds were surface sterilized with 75% ethanol for 30 s followed by 5% NaClO for 5 min. The sterilized seeds were placed in Petri dishes with wet autoclaved filter paper in a growth chamber (25°C, 70% relative humidity, in the dark). After 2 days of pregermination, the seedlings were transferred to 200 mL pots (5 cm × 8 cm × 5 cm) filled with 150 g of soil (one seedling per pot). Eighteen seedlings of each cultivar (6 replicates, 3 pots per replicate) were potted, for a total of 36 pots; the pots were then randomly placed in a growth chamber (28/26°C day/night cycle, 70% relative humidity, and 180 μmol m−2 s−1 light) and irrigated as needed. The plants were harvested at the early flowering stage (30 days after planting), and roots with closely attached soil were collected from the pots after removing the loosely attached soil by shaking. Three cucumber plants of each replicate were pooled into one sample. In total, 12 rhizosphere soil samples were obtained (2 cucumber cultivars × 6 rhizosphere soil samples) and stored at −70°C for further microbiota analysis.

DNA extraction, 16S rRNA gene amplification and amplicon sequencing
Before DNA extraction, 0.5 g of roots with tightly adhering soil was placed into a 2-mL centrifuge tube containing 1 mL of phosphate-buffered saline solution and several sterilized glass beads, after which the mixture was vortexed at maximum speed for 15 min. The suspension (without root material) was then transferred to a new 2-mL centrifuge tube and centrifuged for 30 min at 15,000 rpm. The supernatant was discarded, and the precipitate was used for DNA extraction. Total DNA was extracted from the precipitate using a PowerLyzer PowerSoil DNA Isolation Kit (Qiagen, Germany) according to the manufacturer's protocol. DNA quality and quantity were assessed with a 1.2% agarose gel and a NanoDrop 1000 spectrophotometer (Thermo Scientific, USA). For taxonomic profiling of the bacterial communities, PCR targeting the 16S rRNA gene (V4 region) was performed on the 12 DNA samples. The primers 515F/806R (F: GTGYCAGCMGCCGCGGTAA; R: GGACTACNVGGGTWTCTAAT) were used 39, giving an amplicon length of ~292 bp. For amplification, 50-μL reaction mixtures consisted of 1 μL of each primer (10 μM), 25 μL of sterilized ultrapure water, 25 μL of 2× Premix Taq (Takara Biotechnology, Dalian Co., Ltd., China), and 3 μL of DNA (20 ng/μL). A Bio-Rad S1000 (Bio-Rad Laboratory, CA) instrument was used for PCR amplification with the following procedure: 95°C for 5 min; 30 cycles of 94°C for 30 s, 52°C for 30 s, and 72°C for 30 s; and then 72°C for 10 min.
The PCR products were run on a 1.2% agarose gel with a 100-2000 bp DNA marker (B500350, Sangon Biotech (Shanghai) Co., Ltd.), and products with clear bands between 290 and 310 bp were combined for sequencing. The PCR products were then mixed and sequenced on an Illumina HiSeq 2500 platform with the same sequencing setup as reported previously 40. The sequences were assigned to samples based on their unique barcodes. All the clean reads were trimmed to a minimum length of 200 bp with a Phred score of at least 20 using the split_libraries_fastq.py script (QIIME 1.9.0) 41. The reads were clustered into OTUs using the UPARSE strategy 42 by dereplication with USEARCH 10. All the reads were mapped to their representative sequences using the usearch_global method (USEARCH 10). The OTU table was converted to BIOM format 1.3.1 using BIOM convert for downstream analysis in QIIME 1.9.0. The Greengenes database (V.13.5) was used for taxonomic annotation of the representative sequences, and summary information on the represented taxonomic groups was generated with the summarize_taxa.py script.

Root exudate collection and GC-TOF-MS detection
The seeds of both cultivars were surface sterilized as described above and then placed in tissue culture bottles (400 mL) containing 100 mL of MS agar medium 43 in a growth chamber (28/26°C day/night temperature cycle, 70% relative humidity, and 180 μmol m−2 s−1 light) for seven days. Afterward, the sterile seedlings were carefully transferred to conical bottles (one seedling per bottle) containing sterile water and allowed to grow for another seven days. The sterility of the seedlings was tested by plating 100 μL of water from each conical bottle onto an LB 44 plate and incubating at 30°C for 3 days; contaminated seedlings were discarded. Each cultivar was grown in 3 replicates, with each replicate including 3 seedlings, that is, 2 cucumber cultivars × 9 individual seedlings, resulting in 18 samples. All the bottles were fully randomized during the experiment. For cucumber growth, we transferred the sterile cucumber seedlings into sterile Hoagland medium under gentle shaking (50 rpm) for 2 h each day on a shaker. To collect root exudates, the seedlings were placed in sterile water for three days, and the container was gently shaken as described above. Finally, nine exudate samples for each cultivar were obtained, stored immediately at −80°C, and then dried with a lyophilizer (LGJ-18S, Beijing Songyuanhuaxing Technology Develop Co., Ltd., China). For extraction, the lyophilized root exudates (at a dose equal to the amount collected from one cucumber seedling) were put into 2-mL EP tubes and extracted with 1 mL of extraction liquid (methanol:water = 3:1, v/v), after which 10 μL of adonitol (0.5 mg/mL stock in water) was added as an internal standard; the contents were then mixed for 30 s by vortexing. The mixtures were homogenized in a ball mill for 4 min at 45 Hz, ultrasonicated for 5 min (while incubating in ice water), and centrifuged for 15 min at 13,000 rpm and 4°C, after which the supernatant (0.75 mL) was transferred to a new 2-mL GC/MS glass vial.
After complete drying in a vacuum concentrator without heating, 40 μL of methoxyamination hydrochloride (20 mg/mL in pyridine) was added; the sample was then incubated for 30 min at 80°C, after which 50 μL of BSTFA (bis(trimethylsilyl)trifluoroacetamide) reagent (1% TMCS (trimethylchlorosilane), v/v) was added, followed by incubation for 1.5 h at 70°C. The GC-TOF-MS analysis and raw peak analysis were performed as reported by Li et al. 45.

Impacts of four small-molecule organic acids present in the root exudates on the soil microbiome
A soil application experiment was conducted to test the effects of four selected small-molecule organic acids (SMOAs; pyruvic acid, citric acid, fumaric acid, and succinic acid) on the soil microbiome. These four compounds were abundant in the FSC root exudate samples. The soil used in the experiment was the same as that described above. Organic acid solutions in water were prepared to contain each of the selected compounds in equal amounts (2.5 mM citric acid, 2.5 mM pyruvic acid, 2.5 mM succinic acid, and 2.5 mM fumaric acid), at a final total concentration of 10 mM. Before the compound mixture was added, 15 g of soil was placed into each well of six-well plates. The plates were then preincubated in a growth chamber at 30°C for 1 week to allow the soil microbiome to acclimate. Each well then received 1.5 mL of the compound mixture solution twice a week for 8.5 weeks (17 applications in total) in a growth chamber at 30°C 40. Sterile ultrapure water was added to control wells instead. Each treatment consisted of 18 wells in three plates, and all the plates were randomly arranged during the incubation period. Finally, the soils from the 18 wells of each treatment were collected, and three wells were pooled into one sample. In total, 12 samples for the two treatments (2 treatments × 6 soil samples) were obtained and stored at −80°C. For taxonomic profiling of the bacterial communities, the 12 samples were analyzed by targeting the V3-V4 region of the 16S rRNA gene. Amplification was conducted with the primers 341F/806R (F: CCTAYGGGRBGCASCAG; R: GGACTACHVGGGTWTCTAAT), with an amplicon size of 465 bp. PCR amplification and 16S rRNA sequencing were performed as described above. Before sequence processing, target sequences were extracted from the raw sequences based on matches to the 515F/806R primers, ensuring the same region of the 16S rRNA gene as in the rhizosphere samples. Afterward, sequence processing was performed in the same manner as described above.

Isolation, identification, and in vitro anti-Foc assays
Strains were isolated from the rhizosphere soil of the Foc-susceptible and Foc-resistant cucumber varieties by the dilution plate technique. A 0.5-g aliquot of roots with tightly adhering soil was placed into a 2-mL centrifuge tube containing 1 mL of sterile water and vortexed at maximum speed for 15 min. Then, 100 μL of the soil suspension was used for dilution: 100 μL of the suspension was pipetted into a dilution tube containing 0.9 mL of sterile water, the tube was vortexed for ~10 s, and 0.1 mL of this solution was then transferred into a second dilution tube containing 0.9 mL of sterile water. TSA agar was used for all plate media. Strains were also isolated from the SMOA-conditioned and control soils by the dilution plate technique.
The soil (0.5 g) was mixed with 5 mL of autoclaved water and placed on an orbital shaker for 30 min, after which 100 μL of the soil suspension was used for dilution as described above. Plates with fewer than 100 single colonies were selected, and three plates were used for each treatment. Single colonies were then picked and purified twice. In total, 238 single colonies from the FSC rhizosphere and 238 from the FRC rhizosphere were isolated, and 189 single colonies each were isolated from the SMOA-conditioned soil and from the control treatment. DNA extraction, amplification, and sequencing were performed as described by Zhang et al. 46. The sequences were quality filtered and demultiplexed according to their barcodes, and the taxonomy of the sequences was classified using the Greengenes database (V.13.5) as described above. The antagonistic activity of the isolates against F. oxysporum was evaluated with the method reported by Bordoloi et al. 47: briefly, a 4-mm agar plug of F. oxysporum was placed in the middle of a PDA plate, the strain was inoculated between the edge of the plate and the plug, and the zone of inhibition was measured after incubation at 28°C for 5 days.

Statistical methods
For the downstream analyses after sequence processing, to describe the rhizosphere community structure, a minimum number of sequences was extracted randomly for each sample to calculate the Shannon index, estimated in QIIME with the alpha_diversity.py script. A nonparametric test was used to determine whether the Shannon indices differed between the FRC and FSC. Before calculation of the beta diversity, the cumulative-sum scaling (CSS) method 48 was used to standardize the OTU profiles with the normalize_table.py script, and Bray-Curtis similarity matrices were prepared using the beta_diversity.py script. Adonis was used to determine whether the beta diversity differed between the two cultivars. Principal coordinate analysis (PCoA) plots were generated from the Bray-Curtis similarity matrices using the R package ggplot2. To determine the percent change in taxa, we used t-tests for all family- or genus-level taxa with relative abundances >0.001 to test for significant differences in abundance between the two cultivars, with the p-values corrected according to the Benjamini-Hochberg method. For functional predictions, the UCLUST method was used to select closed-reference operational taxonomic units 49 at 97% sequence similarity using the Greengenes database (V.13.5). Functional inferences according to the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway database were made using PICRUSt 50. Downstream FishTaco analyses were performed according to the methods of David et al. 51. Briefly, the top 99 phylotypes with the maximum relative abundance across our OTU table from PICRUSt, normalized with MUSiCC, were selected. Then, a precomputed OTU-KO table from the PICRUSt analysis, the output from MUSiCC, and the OTU relative abundance table were prepared as input for the FishTaco framework. Multi-taxon mode was selected for each pairwise comparison between the two cultivars' samples. Finally, the R package ggplot2 was used to visualize the function-contribution distribution. For the root exudate analyses, principal component analysis (PCA) 52 was used to visualize the root exudate structure of the two cultivars using the R package vegan. The p-values were corrected by the Benjamini-Hochberg FDR procedure for multiple comparisons 53.
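For readers who want to reproduce the multiple-testing step, here is a minimal sketch of the Benjamini-Hochberg FDR adjustment mentioned above, in plain NumPy; the example p-values are invented.

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg FDR adjustment of a vector of p-values."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    # Enforce monotonicity from the largest rank downwards.
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(adj, 0.0, 1.0)
    return out

print(bh_adjust([0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]))
```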
To identify the exudate compounds with the greatest contribution to the classification, a random forest approach was applied with the default parameters of the R implementation of the algorithm (R package randomForest, ntree = 1000). Enrichment analysis of the metabolic pathways was performed and plotted using the online platform MetaboAnalyst 54 (http://www.metaboanalyst.ca/faces/home.xhtml). All the plots were created using the R package ggplot2.
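As a sketch of how this feature-ranking step can be reproduced, here is a minimal Python analogue of the R workflow described above (scikit-learn in place of randomForest); the data matrix and the single informative compound are entirely synthetic assumptions, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in for the exudate matrix: 18 hydroponic samples x 236
# compounds, with two cultivar labels; compound 0 is made informative so the
# importance ranking has something to find.
X = rng.normal(size=(18, 236))
y = np.repeat([0, 1], 9)           # 0 = FRC, 1 = FSC
X[y == 1, 0] += 2.0                # e.g. more of one organic acid in FSC exudates

# Mirrors the paper's setting of 1000 trees (R randomForest, ntree = 1000).
rf = RandomForestClassifier(n_estimators=1000, random_state=0).fit(X, y)

top = np.argsort(rf.feature_importances_)[::-1][:10]
for i in top:
    print(f"compound {i}: importance {rf.feature_importances_[i]:.3f}")
```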
Temperature of the Brakes and the Braking Force

The kinetic energy of a braking vehicle is changed into heat, and the resulting heat increases the temperature of every part of the brakes. The changed temperature affects the coefficient of friction between the brake lining and the brake drum or brake disc. When the brakes are actuated hydraulically, there is also a warming of the brake pads and brake fluid. The object of examination in this article is the impact of repetitive braking on the change of these parameters and the impact of time on the boiling point of the brake fluid.

Introduction
The accident rate is one of the unintended effects of road transport. Comparing the different causes of accidents, excessive speed is one of the most dangerous driver errors. Based on the statistics of the police [1] of the Slovak Republic, accidents caused by speed accounted for 13.67% of the total number of accidents, but for 28.96% of the number of people killed. When a column of vehicles forms on the road, a driver travelling at high speed must repeatedly brake from high speed down to the speed of the column and accelerate again. During this activity, the vehicle's brakes transform kinetic energy into heat, which increases the temperature of the brake parts; in hydraulic brakes, the temperature of the brake fluid also rises. The experiment described here was intended to determine exactly these thermal changes. A further aim was to determine the boiling point of the brake fluid over time and the impact of absorbed moisture on it, because brake fluid is strongly hygroscopic. The measurement was performed on a KIA cee'd 1.6 CRDi passenger vehicle. With the vehicle occupied by two persons, the front axle carried a mass of m1 = 852 kg and the rear axle m2 = 614 kg; the distance of the centre of gravity from the front axle was therefore taken as 1.11 m. The temperature was measured on the brake disc of the front axle, on the brake caliper and on the brake fluid pipe at its connection to the caliper.

Change of the temperature of the brake components
As a test case, we took the situation of overtaking a line of traffic: the driver has to reduce the driving speed from 120 km/h to 60 km/h and accelerate again to the original speed, and this cycle is repeated several times. In order to measure the temperature at the selected points, the measurement was performed in the laboratory on a roller tester. It was necessary to convert the same amount of energy into heat under braking as when driving on the road. The wheels of the front axle are loaded more heavily during braking, so we focused on the change of temperature at the front axle. [2,3] To detect the change of temperature of the selected components of the front brake, it is necessary to determine how much of the vehicle's kinetic energy is converted into heat by the brakes of the front axle. We suppose the driver applies a braking force that causes a deceleration of 3 m/s2. With a wheelbase of L = 2.95 m and the known position of the centre of gravity, the load on a single wheel of the front axle during braking can be calculated as m1 = 485 kg, as shown in Figure 1. To achieve a deceleration of 3 m/s2, a braking force of F′B1 = 1455 N must then act on the wheel. This value of braking force must be set even when the measurement is performed on the roller brake tester.
Now it is necessary to determine the braking time for the test on the roller brake tester. We first determine the amount of kinetic energy that must be turned into heat in the brake of one front wheel. This amount of energy, denoted ΔE_k, equals the difference between the kinetic energy of the mass on this wheel at a speed of 120 km/h and at 60 km/h. It is possible to write the following equality:

ΔE_k = E_k120 − E_k60   (1)

or

ΔE_k = ½ m1 V1² − ½ m1 V2² = ½ m1 (V1² − V2²)   (2)

where m1 = 485 kg is the mass on one front wheel during a vehicle deceleration of 3 m/s2, V1 = 120 km/h is the initial speed of the braking test, V2 = 60 km/h is the final speed of the braking test, ΔE_k is the difference of the kinetic energies at the beginning and end of the braking test, E_k120 is the kinetic energy at the beginning of the braking test, and E_k60 is the kinetic energy at the end of the braking test. Using the information from the previous text, the change of kinetic energy of the vehicle is

ΔE_k = 200 833 J   (3)

Thus, 200 833 J must be transformed into heat in the brake of one front wheel. The amount of braking force and the amount of energy to be transformed into heat have now been determined. As the test substitutes for a real driving situation, it is also necessary to define the time for which the braking force must act. We determine it from the distance over which the force must act to perform the work of 200 833 J: the braking force has to act over a distance of s = 138.03 m. When the roller speed of the roller brake tester is V = 4.8 km/h, the time of the measurement must be

t = s / V = 138.03 / 1.33 = 103.5 s

where s is the distance over which the braking force must act to carry out the work. The braking force must therefore be applied for a period of 103.5 seconds. To detect changes of the braking force, an equal actuating force must be ensured: a load cell is used during braking to determine the actuating force needed to produce the desired braking force, and for each repeated measurement the same actuating force as found in the first measurement is used. After each measurement, the surface temperature of the front brake disc, the surface temperature of the brake caliper and the temperature of the brake fluid pipe at its connection to the caliper are recorded, and the braking force at the perimeter of the wheel is read off. [4]

The boiling point of the brake fluid
Dynamic driving tends to increase the temperature of the brake components; the boiling point of the brake fluid is therefore a very important issue. Brake fluid is a highly hygroscopic substance that absorbs moisture from the surroundings, which affects its boiling point. During the experiment, the brake fluid was placed in a bowl whose cap has an outlet of 1 mm diameter (Figure 2, middle bowl). The fluid stored in this way was exposed to the surroundings (heat, light, changes of moisture) for seven months, after which its boiling point was measured with a Bosch BFT 100 device (Figure 3). Because the boiling point varies with the amount of absorbed moisture, the effect of moisture was verified by adding distilled water: one millilitre of distilled water at a time was gradually added to 100 ml of brake fluid, the mixture was stirred thoroughly, and the boiling point was then measured. For comparison, the boiling point of new fluid was also measured. [3]
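As a quick numerical check of the braking-energy derivation earlier in this section, the short script below reproduces the force-energy-distance-time chain with the paper's input values; the small differences from the published figures presumably come from rounding in the source.

```python
# Worked check of the braking test derivation (inputs taken from the paper).
m1 = 485.0            # load on one front wheel during braking, kg
a = 3.0               # target deceleration, m/s^2
v1 = 120.0 / 3.6      # initial speed, m/s
v2 = 60.0 / 3.6       # final speed, m/s
v_roller = 4.8 / 3.6  # roller tester surface speed, m/s

F_b = m1 * a                        # braking force at the wheel, N
dE_k = 0.5 * m1 * (v1**2 - v2**2)   # kinetic energy to dissipate, J
s = dE_k / F_b                      # equivalent braking distance, m
t = s / v_roller                    # required test duration, s

print(f"F_b  = {F_b:8.0f} N   (paper: 1455 N)")
print(f"dE_k = {dE_k:8.0f} J   (paper: 200833 J)")
print(f"s    = {s:8.1f} m   (paper: 138.03 m)")
print(f"t    = {t:8.1f} s   (paper: 103.5 s)")
```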
Measurements
The vehicle Kia cee'd 1.6 CVVT and the following measuring devices were used:
 The roller brake tester Motex 7519 (Figure 4). It is a diagnostic device that allows the braking forces at the periphery of the individual wheels of one axle to be measured and continuously monitored. The peripheral speed of the rollers of the brake tester is 4.8 km/h.
 The pedometer Corrsys Datron, used to measure the force exerted on the brake pedal. It consists of a sensor attached to the brake pedal (Figure 5), a cable and an evaluation device; the amount of the control force is shown on a digital display.
 A seconds counter.

A) The change of temperature of the brake components
The results of the measurement and the measured values are summarized in the graph in Figure 7. To make the changes in the braking force at the periphery of the wheel comparable, the same operating force of 350 N was applied to the brake actuator at each measurement, verified with the pedometer. This ensured that any change in braking force was caused by the change of the friction properties of the lining with its temperature. The friction of the lining improves with increasing temperature. At the sixth braking cycle, the brake disc temperature reached 179°C; at this temperature, the maximum braking force of 2300 N was measured. Subsequent braking cycles caused a further increase of the temperature of the brake components, but the measured braking force was lower and showed a downward trend.

B) The change of the boiling point of the brake fluid
Changes of the boiling point of the brake fluid are shown in Table 1.

Conclusions
The experiment verified the change of temperature of the braking components of a vehicle under repeated braking from high speed and the impact of the temperature on the braking force. The second aim was to find out the change of the boiling point of the brake fluid with time and with the amount of absorbed moisture. To make the results comparable, a constant actuating force was used, verified with the pedometer; it was adjusted so that the vehicle reached a deceleration of 3 m/s2 during the first braking. From the comparison of the temperatures, it is evident that repeated braking leads to an increase in the temperature of the brake lining and, at first, also to an increase in the braking force. When the temperature of the brake disc reached 179°C, a decrease of the braking force was recorded; this decrease lasted until the end of the experiment. In terms of operation, the boiling point of the brake fluid is important, because at the end of the experiment the temperature of the pipe with the brake fluid was 117°C. The measurements verified that, with repeated heavy braking of the vehicle and neglected brake fluid replacement, it is realistic to reach the condition in which the temperature of the brake fluid reaches its boiling point; at that point, the performance of the brakes is significantly reduced.
Edge Current of FQHE and Aharonov-Bohm Type Phase

When two non-identical quasi-particles in the Hall fluid encircle each other, a relative Aharonov-Bohm (AB) type phase develops. As the quasi-particles advance towards the edge in a similar circular way, the resulting current should be connected with this AB type phase through the Shift quantum number, or Berry's topological phase. We point out the role of the relative AB type statistical phase in the development of the edge current. In fact, the physics of current flow in the FQHE is sketched here from the topological point of view of phase transformation.

Introduction

A fundamental feature of the microscopic theory is the existence of a one-to-one correspondence between the quasi-particle states of the bulk and the primary fields that build the spectrum of the edge states. Microscopically, the bulk states can be constructed by implementing the idea of flux attachment, coupling a suitable set of Chern-Simons gauge fields [1]. The effective action in the field theory is a Chern-Simons gauge theory that fits the K-matrix classification [2], reflecting the underlying hierarchical construction of FQH states. The composite picture of the edge states has a number of branches encoded in the rank of the K-matrix. Wen has shown that edge excitations in FQH states provide an important probe to detect the topological orders in the bulk FQH states. At the edge, the electrostatic potential varies very slowly and adiabatic conductance takes place between a series of alternating compressible and incompressible strips forming channels at the edge. The QH liquid for ν = 1/m contains only one component of incompressible state, which leads to one branch, while a generic hierarchical QH state contains many branches of edge excitations. The concept of edge channels has been extended from the integer to the fractional quantum Hall effect, and the contribution of an adiabatically transmitted edge channel to the conductance has been calculated from the point of view of the interacting-electron picture [3]. For a sufficiently small ∆µ (chemical potential), the current carried by quasi-particles in a compressible region at the edge depends on the difference of the electron filling factors in the two adjacent incompressible regions [4].

It is well known that these quasi-particles in the FQHE are not fundamental particles and obey fractional statistics in two dimensions. Any fractional-statistics objects are collective particles of a nontrivial condensed-matter state. Fractional statistics, as pointed out by Leinaas and Myrheim [5], relies on the property that when particles with infinitely strong short-range repulsion are confined in two dimensions, paths with different winding numbers are topologically distinct and cannot be deformed into one another. The particles [6] are said to have statistics θ, producing a phase factor e^{iπθ} = (−1)^θ when an exchange of particles takes place over a half loop; non-integral values of θ imply fractional statistics. It is believed [7] that fractional statistics are a consequence of incompressibility at a fractional filling and may possibly be observable in an experiment specially designed for this purpose. Fractional statistics can be derived heuristically in the composite-fermion (CF) theory [8]: when one CF goes around another, encircling an area A, the total phase associated with this path is

Φ = 2π (BA/φ₀ − 2p N_enc),

where N_enc is the number of composite fermions inside the loop.
The first term on the right-hand side is the usual AB phase, and the second term is the contribution from the vortices bound to the composite fermions, indicating that each enclosed CF effectively reduces the flux by 2p flux quanta. Wilczek shed considerable light on the fractional statistics of quasi-particles [9]. In two dimensions these quasi-particles resemble a vortex attached to a point particle. If a vortex is dragged adiabatically around a closed loop, the system acquires an extra non-dynamical phase which can be gauged away, connecting one incompressible liquid state continuously to another at a different filling fraction. The concept of fractional statistics has been reformulated by Haldane [10] as a generalization of the Pauli exclusion principle, with a definition independent of the dimension of space. In the FQHE, the Pauli-like definition of statistics can be introduced for the quasi-particles, which are flux-carrying charged bosons in the lowest Landau level. If an object carrying flux φ_α and charge q_α orbits around another object carrying flux φ_β and charge q_β, the relative statistical phase θ_αβ becomes

exp(iθ_αβ) = exp[±iπ(g_αβ + g_βα)],

where g_αβ = ½ q_α φ_β, consistent with the phase used below. With this view we have recently shown [11] that, owing to the interchange of two non-identical composite fermions residing in two consecutive Landau levels, a relative AB type phase develops, and the shift quantum number can be visualized through it. In another work [12] we showed that this shift vector is connected with the change of edge current through the difference of filling factors ν_n − ν_{n−1} between two consecutive incompressible states of the edge. In fact, a more meaningful idea of the shift [13] has been given through deviations of Berry's topological phase of composite particles in the hierarchies. Motivated by two recent works, one highlighting the importance of non-equilibrium noise measurement through the statistical phase [14] and the other concerning FQHE qubits in connection with topological quantum computation [15], we realize that the relative AB type phase should play a major role in the transport process of the edge current. Hence we want to find the origin of the edge current developed through the statistical interaction of composite particles in the AB type topological phase.

Topological Aspect of the Hall Fluid, Berry Phase and Shift Quantum Number

In the Hall fluid the statistical interaction plays the most significant role. Being long ranged, it is treated non-perturbatively. A non-dynamical gauge field A_µ is associated with the flux, which in 2+1 dimensions is the very cause of the appearance of the Chern-Simons term in the Lagrangian. This licenses a conserved topological current J_µ and includes a topological invariant (Hopf) term, with coefficient θ/2π, in the (2+1)-dimensional action [9]. In fact it is the Hopf invariant describing the basic maps of S³ to S². If ρ denotes a four-dimensional index, one finds a relation connecting the Hopf invariant with the chiral anomaly. This Hopf term plays a role somewhat similar to that played by the Wess-Zumino interaction in connection with the (3+1)-dimensional Skyrmion term. There is an analogous statistical interaction in (3+1) dimensions given by Haldane [16], who considered the 2D Hall surface as the boundary of a 3D sphere of radius R in a radial (monopole) magnetic field B = ħS/(eR²) (> 0). Here 2S = N_φ is an integer defining the total number of magnetic flux quanta through the surface. For the parent state ν = 1/m the total flux is S = ½ m(N − 1).
The field strength S in the first level of the hierarchy is obtained when p (p an even integer) excitations are added to the parent state ν = 1/m. This shows that the filling factors of the hierarchical states satisfy a slightly more complicated relation. In the language of Wen and Zee [1,2], this S is the shift, a topological quantum number that develops due to the coupling between the orbital spin and the curvature of space, with spin s = ½ K_II. On a sphere, the shift for a hierarchical state follows accordingly; for a ν = 1/m parent state this shift is simply S = 2(n − 1) + m, with orbital spin s = n − 1 + m/2 associated with the orbital angular momentum of the cyclotron motion. In the effective theory, the introduction of the shift leads to a modification of the Lagrangian in equation (2), in which the second term is the electromagnetic coupling and the third is the coupling to the curvature of space. The appearance of the shift in the hierarchies of the FQHE is nontrivial [13]. The quasi-particles at these levels are formed when additional fluxes are attached to the quantized particles. In fact, the quantization of the Hall particles is the indication of the quantum Hall effect, involving a gauge-theoretic extension of the coordinate by C_µ ∈ SL(2,C), which is visualized through the field strength F̃_µν. Apart from this internal extension, the external strong magnetic field induces a gauge extension B_µ ∈ SL(2,C) through the gauge field F_µν. In the language of differential geometry these two gauges act as two fibres at each particle point of the base space S². The effective theory of the (Abelian) Hall fluid can be presented accurately only if not just the two vortices but also their interactions are taken into account. In the light of Haldane [16], we consider the Hall surface on a 3D sphere; in the presence of the strong external magnetic field, the chiral symmetry breaking of the composite fermion is associated with the internal and external gauge fields F_µν and F̃_µν. In particular, the θ term in the Lagrangian leads to a vortex line, and the corresponding gauge field acts like a magnetic field. The topological Lagrangian of the Hall fluid can be described by the added Chern-Simons terms in the Lagrangian through the anomaly; every such term corresponds to a total divergence of a topological quantity known as the Chern-Simons secondary characteristic class. Assuming a particular choice of coupling θ = θ′ = θ″ in the Lagrangian, the topological part of the action in (3+1) dimensions involves µ_e, µ_i and μ̃, the corresponding magnetic charges, which are connected with the respective charges through the Dirac quantization condition and the Pontryagin density. This gives rise to the topological Berry phase acquired on parallel transport over a closed path of a Hall hierarchy state. The first term is associated with the Berry phase factor of the Hall particle due to the external magnetic charge µ_e; the second term gives rise to the inherent Berry phase factor µ_i associated with the chiral anomaly of a free electron (in the absence of an external magnetic field); and the third effectively relates the coupling of the external field with the internal one, giving rise to the phase factor μ̃. The resulting µ_eff visualizes the filling factor through the relation ν = n/(2µ_eff), where n denotes the nth Landau level. In fact this µ_eff satisfies the Dirac quantization condition

e′ µ_eff = n/2,   (15)

showing that each quasi-particle in the nth Landau level, carrying charge e′, behaves as a composite fermion.
It will behave as a fermion in the ground state following the Dirac condition ẽμ̃ = ±1/2. This implies that (n ± 1)/2 is the magnetic strength µ of the added quanta whose removal makes the composite fermion of the higher Landau levels behave as a fermion in the ground state. For µ = ±1/2, ±3/2, ... the quanta behave like fermions, and for µ = ±1, ±2, ... they show bosonic behaviour. We have found that this change of magnetic charge μ̃ can be visualized through the shift S, where n = 1, 2, 3, ... denotes the hierarchy level. Our picture shows that the motion of a composite particle in the Hall fluid moving in a circular orbit will be quantized through its acquisition of Berry's topological phase. Conceptually, the appearance of this shift quantum number S in the topological phase of the quasi-particle is natural, since the coupling of the two gauges (which act as fibres) with the curvature is prominent during parallel transport over a closed path. With this view we have found [11] the role of the shift in the relative AB type phase as the composite fermion and the additional quanta encircle each other to produce fermions in the lowest Landau level. In addition, we have shown elsewhere [12], in the context of the edge current flowing through the compressible level, that this shift can be related to the difference of Landau filling factors between two consecutive incompressible levels. For a sufficiently small chemical potential ∆µ, the change of current in a compressible strip is

∆I = (e/h) ∆µ ∆ν = (e/h) ∆µ (ν_n − ν_{n−1}),

which can be expressed in terms of the shift. We now proceed to find the role of the AB type statistical phase in the development of the edge current as the composite particles advance on the Hall surface.

The Edge Current in AB Type Phase

The concept of edge channels for the IQHE and FQHE, in combination with the adiabatic transport of quasi-particles, is successful in explaining the anomalous dependence of the Hall conductance. Edge channels are defined in correspondence with the bulk Landau levels. On approaching the boundary of the 2DEG, a Landau level which in the bulk lies below the Fermi level rises in energy because of the presence of the confining potential. The intersection between the nth Landau level and the Fermi level defines the location of the nth edge channel for the filling factor in the nth hierarchy. In general, the current I_p injected into the pth edge channel [3] flows in a compressible band lying between two incompressible bands of filling factors ν_p and ν_{p−1}. The tunnelling current I through the wire is I(V) ∝ V^α, where the exponent α is determined by the scaling dimension of the tunnelling operator. Lopez and Fradkin [17] pointed out recently that, in the case of tunnelling of electrons from a Fermi liquid into a hierarchical FQH state, the tunnelling exponent is α = 1/ν. The physics behind the tunnelling at the edge has been focused on the charge and neutral modes, which propagate with different velocities; the latter has been identified as a topological mode responsible for Fermi statistics. Representing the respective charge and topological modes by φ_c and φ_T, a general edge operator can be constructed; the authors have shown that the charge Q depends only on α_c, but the statistics θ of these excitations is connected with both α_c and α_T. In a recent communication, Zülicke, MacDonald et al. [18] addressed the chiral phase field φ_n(θ) as a superposition of edge-density fluctuations, decomposed into two modes,
where φ_c is the phase field of the charged edge-magnetoplasmon mode, which corresponds to fluctuations in the total edge-charge density, and φ_n is its orthogonal complement, known as the neutral mode. The authors expressed these two modes in terms of the parent state φ_0 and the daughter state φ_{2p+1}, with respective filling factors ν_0 = 1/(2p+1) and ν_i = 1/((2p+1)(4p+1)), which together comprise the ν = 2/(4p+1) QH state. The addition of electrons to the edge, with a concomitant change of 2p + 1 − n flux quanta, is viewed as adding the electron to the outer edge and transferring at the same location n fractionally charged quasi-particles from the outer QH droplet to the inner one. We understand from the above works that both the charge and neutral modes are transferred from the inner edge to the outer edge, leading to a flow of current and a change of statistics. We are now interested in whether this current and statistics are interrelated during the course of the transfer towards the edge. Inspired by the works on the topological transformation of current in FQHE systems in connection with quantum computation [15] through fractional statistics, we proceed to evaluate the role of the AB type statistical phase in the edge current flow in two ways:

1. At a particular edge, the composite particles in consecutive branches (Landau levels) encircle each other during the transformation.
2. From the inner edge, the composite particles transform to the outer edge, picking up an integral multiple of flux from the bulk.

It is now known that composite particles in the FQHE are composites of fluxes attached to charged particles. When an electron is attached to a magnetic flux, its statistics changes and it is transformed into a boson. These bosons condense to form a cluster, which is coupled with the residual fermion or boson (composed of two fermions). The residual boson or fermion undergoes a statistical interaction tied to a geometric Berry phase effect that winds the phase of the particle as it encircles the vortices. We also observe that the attachment of vortices to electrons in a cluster makes the fluid incompressible: since two vortices cannot be brought very close to each other, there is a hard-core repulsion in the system, which accounts for the incompressibility of the quantum Hall fluid. In fact, the Hall particles are quantized by acquiring Berry's topological phase, as discussed in Section 2. As one quasi-particle encircles another on their way of topological transport, the Aharonov-Bohm type statistical phase develops.

At first we concentrate on one edge of a QH system, where the current in the compressible band (eqn. 23) depends on the filling factors of the consecutive incompressible Landau levels, the nth and (n−1)th respectively. During this movement of quasi-particles, the flux-dressed charges advance along circular paths. As one encircles another, the relative AB type phase develops [10]:

φ_s = exp[± iπ/2 (q_n µ_{n−1} + q_{n−1} µ_n)],

where q_n, q_{n−1} are the respective charges and µ_n, µ_{n−1} the corresponding magnetic strengths of the attached fluxes. These composite particles follow the Dirac quantization condition q_n µ_n = n/2, with respective filling factors ν_n = n/(2µ_n) and ν_{n−1} = (n−1)/(2µ_{n−1}). The intertwining of these composite particles against each other then yields, after a few mathematical steps and using eqn. 23, a relation between the change of edge current and the relative phase, with K = (e/h) ∆µ. This implies that the edge current, or its change, can be realized through the acquisition of the AB type statistical phase whenever two quasi-particles in consecutive Landau levels encircle each other. In other words, the noise in the current flow is the very cause of this type of phase factor.
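As a numerical illustration of the phase formula above, the following sketch (not from the paper) evaluates the argument of φ_s for given filling factors, assuming the Dirac condition q_n µ_n = n/2, which together with ν_n = n/(2µ_n) fixes q_n = ν_n. The filling-factor values in the example are placeholders.

```python
import numpy as np

# Illustrative evaluation of the relative AB-type phase
#   phi_s = exp[± i*pi/2 * (q_n*mu_{n-1} + q_{n-1}*mu_n)],
# assuming the Dirac condition q_n*mu_n = n/2, so that q_n = nu_n
# and mu_n = n/(2*nu_n).

def relative_ab_phase(n, nu_n, nu_nm1, sign=+1):
    mu_n = n / (2 * nu_n)            # flux attached in the nth level
    mu_nm1 = (n - 1) / (2 * nu_nm1)  # flux attached in the (n-1)th level
    q_n, q_nm1 = nu_n, nu_nm1        # charges from the Dirac condition
    arg = sign * np.pi / 2 * (q_n * mu_nm1 + q_nm1 * mu_n)
    return np.exp(1j * arg), arg

# Example inputs: Jain states nu = 2/5 and nu = 1/3 as the two levels.
phase, arg = relative_ab_phase(n=2, nu_n=2/5, nu_nm1=1/3)
print(f"phase argument = {arg:.4f} rad, phi_s = {phase:.4f}")
```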
In the second case we consider edge tunnelling through the bulk of the FQHE. We assume the transfer of a composite particle from the inner edge, in the nth Landau level with filling factor ν_n, which picks up an even integral number (2m) of flux quanta ν_1 while passing through the bulk of the QH system and forms a new composite particle in the (n+1)th Landau level at the outer edge. The filling factor of the effective particle becomes ν_eff = (n+1)/(2µ_eff). In the light of Haldane [16] and Jain [19], the monopole strength µ_eff of the state Φ_1^{2m} Φ_n can be obtained by noting that the product of two monopole harmonics µ_1 and µ_n gives a monopole harmonic at µ_1 + µ_n, i.e. monopole strengths add, so that µ_eff = 2mµ_1 + µ_n. A statistical interaction then takes place between the composite particles of the inner and outer edges, which results in current propagation. We further assume that the paths of the particles do not intersect. Encircling one type of flux around another in consecutive Landau levels produces a relative AB type phase, which is the very cause of the edge current flow. Following Haldane [10], we consider the encircling of the composite particle of the inner edge, with flux µ_n and charge q_n, equivalent to the filling factor ν_n = n/(2µ_n), around the composite particle of the outer edge, with corresponding flux µ_eff and charge

q_eff = (n+1)/(2µ_eff) = (n+1)/(2(2mµ_1 + µ_n)).

This generates a relative AB type phase determined by their fluxes and charges. Since the quasi-particles satisfy the Dirac quantization relation, after a few mathematical steps we find the same form (eq. 29) for the relative AB type statistical phase developed by the transfer of a composite particle from the inner edge to the outer edge through the bulk, carrying the integral multiple of flux along with it. We thus obtain an identical result in our two different approaches to the edge current flow. From another point of view, using equation 31 and our previous results, we see that this phase factor is related to the shift quantum number. Above all, we can conclude from the above equation that, irrespective of µ_1 and µ_n being fermionic or bosonic fluxes, the phase factor depends on the number of particles N, the Landau level n, and the odd integer m that is the inverse of the parent filling factor ν = 1/m. Recently it has been found that fractional statistics play an important role in topological transformations in connection with quantum computation [15]. Kane [14] also showed that the statistical phase, combined with the AB effect, can be used in noise measurements. The results we find fully support these views: the current obtained from the contacts of the Hall edge, considering also the effect of the bulk, can be visualized through the AB type quantum phase.

Discussions

Quantization of the Hall particles in the hierarchical states ensures the acquisition of Berry's topological phase, which visualizes the resultant chirality of the hierarchical state. Interchange of two identical quasi-particles develops a statistical phase, whereas two dissimilar quasi-particles encircling each other produce a relative AB type phase.
In this paper we shed light on the latter phase and its role in the edge current flow. At the edge, a nonzero current appears in the hierarchies, originating from the non-vanishing anomaly expressed as the deviation of the topological phase through the difference of filling factors. The quasi-particles responsible for the current flow encircle one another in the course of their advancement towards the edge of the Hall surface. As a result, a relative AB type statistical phase evolves from the intertwining of the fluxes around the charges of the quasi-particles in the following two cases:

1. At a particular edge, the quasi-particles in consecutive branches (incompressible Landau levels) encircle each other during the transformation.
2. From the inner edge, the quasi-particles flow to the outer edge, picking up an integral multiple of flux from the bulk of the Hall system.

We find that in both cases the AB type statistical phase is directly connected with the edge current, and the second case combines the physics of the edge and the bulk for the current flow. Hence the classical current can be visualized through a quantum phase. In future, these findings may help in work on quantum computation with FQHE qubits [13] and on spin propagation in spintronics devices [20].

Acknowledgement

I would like to express my gratitude to all the authors in my references.
Influence of Temperature, Photoperiod, and Supplementary Nutrition on the Development and Reproduction of Scutellista caerulea Fonscolombe (Hymenoptera: Pteromalidae)

Simple Summary

Parasitoids are the natural enemies of many pests, and using parasitoids is a valuable method for controlling them. However, to use parasitoids effectively, it is necessary to understand their optimal living conditions. Scutellista caerulea Fonscolombe (Hymenoptera: Pteromalidae) is an important enemy of the pestiferous scale Parasaissetia nigra Nietner (Hemiptera: Coccidae). To identify the optimal conditions for the population growth of S. caerulea, we assessed how temperature, photoperiod, and supplementary nutrition affect its development and reproduction. Our results revealed that the most suitable conditions for the population growth of S. caerulea were 30 to 33 °C, 12 to 14 h of daily light, and the provision of sucrose or honey as a supplemental diet. These results provide a reference for the indoor rearing of S. caerulea.

Abstract

Scutellista caerulea Fonscolombe has a significant controlling effect on the rubber tree pest Parasaissetia nigra Nietner. To identify the optimal conditions for the population growth of S. caerulea, we assessed how temperature, photoperiod, and supplementary nutrition affected its development and reproduction. The results demonstrated that the number of eggs laid and the parasitism rates of S. caerulea were the highest at 33 °C; at this temperature, the development of S. caerulea was also the fastest and the number of emerged adults the highest. The number of eggs laid and the parasitism rates increased as the daily light duration increased, and females did not lay any eggs in complete darkness. At a photoperiod of 14:10 (L:D), the developmental duration was the shortest and the number of emerged adults the highest, while the adult life span was the longest under a 12:12 (L:D) photoperiod. During the adult stage, supplementary nutrition such as sucrose, fructose, honey, and glucose increased the life span of S. caerulea, and the life span was longer when sucrose or honey was provided compared to the other tested diets. The results suggest that the most suitable conditions for the population growth of S. caerulea are 30 to 33 °C, 12 to 14 h of daily light, and the provision of sucrose or honey as a supplemental diet for the adults.

Introduction

Environmental factors, such as temperature, humidity, and photoperiod, affect the growth, reproduction, and behavioral activities of parasitoids and other insects and ultimately lead to population changes [1]. Temperature directly affects parasitoid development.

Materials and Methods

Rearing of the host insects was performed in a laboratory at 25 to 27 °C and 70 to 90% relative humidity; all insects were fed on pumpkin fruits. Scutellista caerulea Fonscolombe pupae were collected from a rubber plantation at Xinjin Farm in Qiongzhong, Hainan Province (19.03° N, 109.84° E). After emergence, the insects were reared on P. nigra to generate a population sufficient for the experiments.

Development and Reproduction of S. caerulea at Different Temperatures

To determine the effects of temperature on the development and reproduction of S. caerulea, several temperature experiments were conducted. All experiments were performed in artificial climate chambers (MGC-350HP-2, Shanghai Yiheng Scientific Instrument Co., Ltd., Shanghai, China) at 70% RH and a 12:12 (L:D) photoperiod. The tested temperatures were 18, 21, 24, 27, 30, 33, and 36 °C.
The tested hosts were adult P. nigra with a hard shell and black body, which had been laying eggs for 1 to 2 days. Four replicates were conducted for each treatment. To assess how temperature influenced the oviposition and parasitism rates, two newly emerged virgin females (<5 h old) were paired with two newly emerged virgin males (<5 h old) for one day. The two females were then placed together in a petri dish (Φ 9.0 cm), and 30 P. nigra were added to the dish for parasitization. The dishes were kept in artificial climate chambers at the different temperatures. After 24 h, the scales were dissected under a stereomicroscope (JSZ8, Nanjing Jiangnan Yongxin Optical Co., Ltd., Nanjing, China) to count the number of eggs laid by the females and to calculate the parasitism rates. The parasitism rates of S. caerulea were calculated as the percentage of parasitized P. nigra over the total tested P. nigra.

To test how temperature affected the developmental duration and the number of emerged adults of S. caerulea, 30 P. nigra reared on pumpkin fruits were covered with a transparent plastic cup (Φ 7.5 cm, height 8.5 cm). The edge of the cup was glued to a circular sponge, and the bottom of the cup had a hole of 1.1 cm diameter. Two newly emerged virgin S. caerulea females were paired with two newly emerged virgin males for one day, and the two females were then released into the plastic cup. After 24 h at 27 °C, the S. caerulea adults were removed and the P. nigra were reared at the different temperatures. The developmental duration and the number of emerged S. caerulea adults were recorded daily, and the developmental threshold temperature and effective accumulated temperature were calculated. The development rate v of S. caerulea is the reciprocal of the development duration D in days, v = 1/D. The developmental threshold temperature C and the effective accumulated temperature K were calculated using both the linear regression method and the optimum seeking method, to obtain a rather complete view [32,33]. In the linear regression method, the degree-day model T = C + Kv is fitted by least squares across the n temperature treatments, where C is the intercept, K is the slope, v is the development rate, and T is the temperature in the experiment. In the optimum seeking method, C and K are instead chosen so that K = D_i(T − C) is satisfied as closely as possible across the treatments, where D_i is the development period at each temperature.

To determine how temperature affected the female ratio of S. caerulea, 20 newly emerged virgin females, 20 newly emerged virgin males, and 200 P. nigra were put in cages (30 cm × 30 cm × 30 cm), which were kept at the different temperatures.
After 24 h, the P. nigra were taken out of the cages and continued to be reared at the different temperatures until the emergence of the parasitoid adults, and then the number and sex of the emerged S. caerulea were determined. The female ratio of S. caerulea was calculated as the percentage of emerged females of the total emerged S. caerulea.

To assess how temperature affected the life span of adult S. caerulea, experiments were conducted in three test tubes (Φ 1.2 cm, length 6.0 cm, with mesh lids). Five newly emerged virgin females and five newly emerged virgin males were placed in each tube. Absorbent cotton dipped in 15% sucrose water was placed on each tube's wall for nutrition, and the test tubes were assigned to the different temperature treatments. The tubes were checked daily between 8:00 a.m. and 10:00 a.m. local time to count the number of dead females and males.

Development and Reproduction of S. caerulea at Different Photoperiods

To determine the effects of photoperiod on the development and reproduction of S. caerulea, several photoperiod experiments were conducted. All experiments were performed in artificial climate chambers (70% RH and 27 °C). The tested photoperiods covered daily light times from 0 to 24 h. The tested hosts were adult P. nigra with a hard shell and black body, which had been laying eggs for 1 to 2 days. Four replicates were conducted for each treatment. To assess how photoperiod influenced the oviposition and parasitism rates, two mated females were placed in a petri dish, and 30 P. nigra were added to the dish for parasitization. The dishes were then maintained under the different photoperiods. After 24 h, the scales were dissected under a stereomicroscope to count the number of eggs laid by the females and to calculate the parasitism rates.

To test how photoperiod affected the developmental duration and the number of emerged adults of S. caerulea, 30 P. nigra reared on pumpkin were covered with a transparent plastic cup. Two mated females were then released into the plastic cup. After 24 h under a 12:12 (L:D) photoperiod, the S. caerulea adults were removed and the P. nigra were reared under the different photoperiods. The developmental duration and the number of emerged adults of S. caerulea were recorded daily.

To determine how photoperiod affected the female ratio of S. caerulea, 20 newly emerged virgin females, 20 newly emerged virgin males, and 200 P. nigra were placed in cages (30 cm × 30 cm × 30 cm) maintained under the different photoperiods. After 24 h, the P. nigra were taken out of the cages and continued to be reared under the different photoperiods until the emergence of the parasitoid adults, and then the number and sex of the S. caerulea that emerged from the P. nigra were recorded.

To assess how photoperiod affected the life span of adult S. caerulea, experiments were conducted in three test tubes. Five newly emerged virgin females and five newly emerged virgin males were placed in each tube. Absorbent cotton dipped in 15% sucrose water was placed on each tube's wall for nutrition, and the test tubes were assigned to the different photoperiod treatments. The tubes were checked daily between 8:00 a.m. and 10:00 a.m. local time to count the number of dead females and males.

Development and Reproduction of S. caerulea under Different Supplementary Nutrition

To determine how supplementary nutrition affected the number of eggs laid by females, a mated female was placed in a petri dish.
Absorbent cotton was dipped in either a 20% sucrose, melezitose, fructose, honey, glucose, or trehalose solution and placed in the dish as supplementary nutrition; water and no nutrition were used as the controls. Fifteen P. nigra (adults with a hard shell and black body, which had been laying eggs for 1 to 2 days) were placed in the dish for parasitization by S. caerulea. The dishes were kept in artificial climate chambers (27 °C, 70% RH, and 12:12 (L:D) photoperiod). Every 24 h during the egg-laying period, the scales were dissected under a stereomicroscope to determine the number of eggs laid and the number of scales parasitized, and fifteen new scales were supplied daily. Each treatment was repeated four times.

To determine how supplementary nutrition affected the life span of adult S. caerulea, three test tubes were used. Five newly emerged virgin females and five newly emerged virgin males were placed in each tube. Absorbent cotton dipped in 20% sucrose, melezitose, fructose, honey, glucose, or trehalose solution, respectively, was placed on the tube walls as supplementary nutrition; water and no nutrition were used as the controls. The test tubes were kept in artificial climate chambers (27 °C, 70% RH, and 12:12 (L:D) photoperiod) and checked daily between 8:00 a.m. and 10:00 a.m. local time to count the number of dead females and males. Each treatment was repeated four times.

Data Analysis

Data in the figures are stated as means ± standard errors. Parasitism rates and female ratios were analyzed by logistic regression, developmental duration was analyzed using two-way ANOVA, and the other data were analyzed using one-way ANOVA. A p value of <0.05 was considered statistically significant. All data were analyzed using SPSS 23.0 for Windows (http://www.spss.com). The figures and tables were prepared using Microsoft Excel 2016.

Effects of Temperature on the Development and Reproduction of S. caerulea

We observed that S. caerulea could lay eggs at 18 to 36 °C but could complete generational development only at 21 to 33 °C. Therefore, the statistics in this article on developmental duration, the number of emerged adults, female ratio, and adult life span are based only on the range of 21 to 33 °C, whereas the analyses of the number of eggs laid by females and the parasitism rate (based on the criterion of egg laying) use data collected at 18 to 36 °C.

Temperature significantly influenced the number of eggs laid by S. caerulea (one-way ANOVA, F(6, 21) = 26.808, p < 0.001); the mean number of eggs laid at 33 °C was significantly greater than at the other temperatures (Figure 1a). Temperature also significantly influenced the parasitism rates of S. caerulea (logistic regression, Wald = 59.401, p < 0.001); the parasitism rate at 33 °C (77.78 ± 4.01%) was significantly greater than at the other temperatures, except 30 °C (Table 1 and Figure 1b).

Temperature significantly influenced the developmental duration of S. caerulea (two-way ANOVA, F(4, 30) = 3534.106, p < 0.001). From 21 to 33 °C, the development rate of S. caerulea increased with temperature. The developmental durations of females and males were the longest at 21 °C, significantly longer than at the other temperatures. The effects of temperature on the developmental duration of S. caerulea males and females were different.
At 21 and 24 °C, the development of males was faster than that of females (two-way ANOVA, F(1, 30) = 40.529, p < 0.001). The interaction of temperature and sex also significantly influenced the developmental duration of S. caerulea (two-way ANOVA, F(4, 30) = 7.194, p < 0.001) (Figure 1c). The number of emerged adults significantly increased with temperature (one-way ANOVA, F(4, 15) = 39.230, p < 0.001) and was significantly higher at 33 °C (18.3 ± 1.5) than at the other temperatures, except 30 °C (Figure 1d).

Temperature significantly influenced the female ratio of S. caerulea in the range of 21 to 33 °C (logistic regression, Wald = 7.817, p = 0.005). The female ratio was highest at 24 °C, although there was no significant difference among the temperatures in the range of 21 to 30 °C (Table 2 and Figure 1e). Temperature also significantly influenced the life span of adult S. caerulea at 21 to 33 °C (one-way ANOVA, F(4, 15) = 53.198, p < 0.001). The longest life span was observed at 24 °C, although there was no significant difference among the temperatures in the range of 21 to 27 °C (Figure 1f).

Based on the linear regression method, the developmental threshold temperatures of female and male S. caerulea were 14.18 °C and 13.93 °C, respectively, and the effective accumulated temperatures for females and males were 335.86 and 330.61 degree-days, respectively. Using the optimum seeking method, the developmental threshold temperatures of female and male S. caerulea were 15.21 °C and 14.97 °C, respectively, and the effective accumulated temperatures were 307.00 and 302.71 degree-days, respectively (Table 3). Note: the degree-day is the unit of the effective accumulated temperature.
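The linear-regression estimates reported in Table 3 follow from fitting the degree-day model T = C + Kv described in the Methods. A minimal sketch of that fit is given below; the development durations used here are illustrative placeholders, not the measured data.

```python
import numpy as np

# Linear-regression (degree-day) method sketch: fit T = C + K*v, where
# v = 1/D is the development rate. The slope is the effective accumulated
# temperature K (degree-days) and the intercept is the developmental
# threshold temperature C. Durations below are hypothetical examples,
# not the measured values behind Table 3.

T = np.array([21.0, 24.0, 27.0, 30.0, 33.0])   # rearing temperatures, °C
D = np.array([48.0, 34.0, 26.0, 21.0, 18.0])   # development durations, days (illustrative)
v = 1.0 / D                                    # development rates, 1/day

K, C = np.polyfit(v, T, 1)                     # slope K, intercept C
print(f"C = {C:.2f} °C, K = {K:.1f} degree-days")
# The paper reports C ≈ 14.2 °C and K ≈ 336 degree-days for females
# with this method; fitting the real data would reproduce those values.
```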
Effects of Photoperiod on the Development and Reproduction of S. caerulea

Photoperiod significantly influenced the number of eggs laid by S. caerulea (one-way ANOVA, F(12, 39) = 105.334, p < 0.001). When the daily light time was 0 h, the females did not lay eggs; as the daily light time increased, the number of eggs laid significantly increased. At daily light times of 24 h and 22 h, the numbers of eggs were 43.3 ± 4.4 and 43.0 ± 3.5, respectively; there was no significant difference between them, but both were significantly greater than in the other treatments (Figure 2a). Photoperiod also significantly influenced the parasitism rate of S. caerulea (logistic regression, Wald = 146.124, p < 0.001). When the daily light time was 0 h, the parasitism rate was zero, and it increased as the light time increased. Once the light time reached 14 h or more per day, there was no significant difference in the parasitism rates among the daily light times; at 24 h of light the parasitism rate was the greatest (89.17 ± 5.7%) (Table 4).

Effects of Supplementary Nutrition on the Development and Reproduction of S. caerulea

Supplementary nutrients did not significantly affect the number of eggs laid by S. caerulea (one-way ANOVA, F(7, 24) = 1.458, p = 0.231), although the numbers of eggs differed between the glucose and trehalose treatments (Figure 3a). Supplementary nutrients did not significantly affect the number of P. nigra parasitized by the females either (one-way ANOVA, F(7, 24) = 1.666, p = 0.167), but the numbers of P. nigra parasitized were higher with glucose or honey than with trehalose as the supplementary nutrient (Figure 3b). Supplementary nutrition significantly affected the life span of adult S. caerulea (two-way ANOVA, F(7, 48) = 218.442, p < 0.001). When S. caerulea adults were provided with supplementary sucrose, the life spans of adult females (33.0 ± 1.3 d) and males (33.2 ± 0.6 d) were the longest. The effects of the supplementary nutrients on the life spans of males and females differed (two-way ANOVA, F(1, 30) = 57.970, p = 0.026): when supplemented with fructose, females lived significantly longer than males. The interaction of supplementary nutrition and sex also significantly influenced the life span of adult S. caerulea (two-way ANOVA, F(4, 30) = 1.181, p = 0.014) (Figure 3c).

Discussion

The temperature of the environment is one of the most important factors affecting the development and reproduction of parasitoids [34,35]. Our study demonstrated that 30 to 33 °C is the optimal temperature range for the population growth of S. caerulea: the number of eggs laid was the greatest, the parasitism rate the highest, the development the fastest, and the number of emerged adults the greatest at 33 °C. On the other hand, the female ratio was lower and the life span shorter at 33 °C, although there was no significant difference in the female ratio at temperatures of 30 °C or lower. Additionally, the adult life span should be less important for indoor propagation than the parasitism rate, developmental duration, and number of emerged adults, since the parasitism of S. caerulea is monoparasitism (one offspring per host). The effect of temperature on the developmental duration of female and male parasitoids has been found to differ [36]; we found that the developmental durations of S. caerulea females were typically longer than those of males at 21 and 24 °C. In our study, the female life spans were estimated using females that were not laying eggs; in the field, however, females always lay eggs, so future studies should examine the impact of oviposition on life span. At 33 °C the female ratio of S. caerulea was significantly lower than at the other tested temperatures, showing that there were more males at high temperature. This might indicate that S. caerulea lays fewer female eggs, or that the survival rate of females is lower than that of males, at higher temperatures [37]; further study is required to understand the mechanism of this phenomenon.
At 18 °C (the low-temperature condition) and 36 °C (the high-temperature condition), female S. caerulea could still lay eggs, but the parasitoids could not complete their development. The reason for this might be direct or indirect: the parasitoid eggs might die at too high a temperature, or their hosts might die, indirectly preventing the parasitoid eggs from developing. For example, P. nigra cannot complete development when the temperature is higher than 35 °C [38]. Whether S. caerulea could complete development at higher temperatures if it parasitized a heat-tolerant host remains to be studied.

Photoperiod is another important factor for parasitoid development and reproduction [39,40]. This study demonstrated that the oviposition of S. caerulea was positively correlated with the length of the daily light time. The parasitism rate of S. caerulea increased with the daily light time when the light time was less than 14 h/d; beyond 14 h it remained stable. Considering that the parasitism rate was above 80% when the light time exceeded 14 h, there was little room for further growth, so we infer that the parasitism rate reached a plateau at the 14 h light regimen. Although the number of eggs laid was not the highest at 14 h/d, this wasp is a solitary parasitoid, so a host can only support one parasitoid offspring; the parasitism rate therefore matters more than the number of eggs laid, and a 14 h daily light time is beneficial to the reproduction of S. caerulea. In addition, studies have shown that photoperiod can affect the developmental duration of parasitoids [41,42]. When the daily light time was between 8 and 16 h, the developmental duration, number of emerged adults, and adult life spans of S. caerulea differed among photoperiods, but the completion of their life history was unaffected. For S. caerulea, development was fastest, and the number of emerged adults largest, under a 14:10 (L:D) photoperiod, indicating that this photoperiod is the most beneficial to the development of the larvae. We also found that a 12:12 (L:D) photoperiod was the most beneficial to the survival of S. caerulea adults: the adult life span decreased when the daily light time was shortened or prolonged. It has also been reported that photoperiod affects the sex ratio of parasitoids: some studies found that the female ratio decreased as the daily light time increased [9], while others found the opposite, with a long-day photoperiod promoting more female offspring [10]. Our study, however, found that photoperiod had no significant effect on the sex ratio of S. caerulea. Based on the above results, a daily light time of 12 to 14 h is a suitable photoperiod for rearing S. caerulea indoors.

After emergence, most adult parasitoids continue to feed on supplementary nutrition, which can increase the number of eggs laid, the parasitism rate, and the life span of parasitoids, benefiting their reproduction [43,44]. For example, a previous study by Wu and Chen reported that supplementing the diet with honey, glucose, sucrose, and fructose could increase the number of eggs laid by Telenomus theophilae [45]. We found that S. caerulea adults fed on carbohydrate sources such as sucrose, melezitose, fructose, honey, glucose, and trehalose.
In addition, many previous studies have confirmed that supplementary nutrition significantly prolongs the life span of parasitoid wasps [46-48]. Our study also found that supplementary nutrition extends the adult life span of S. caerulea: when sucrose or honey was used as the supplementary nutrition, the life spans were longer than in the other treatments. This may be because honey contains not only sugars but also vitamins and proteins, making it closer to the food of adult parasitoids in the wild [49]. In conclusion, when rearing S. caerulea indoors, it is better to choose sucrose or honey as the supplementary nutrition.

The data obtained from our experiments are valuable for the indoor mass-rearing of S. caerulea, but there are also some limitations. We rear S. caerulea indoors with the ultimate goal of releasing it into the field to control the rubber tree pest P. nigra, yet our results are not entirely applicable to the field. In the field the temperature is not constant (as it is indoors) but fluctuates continuously [50,51], and the same is true for the daily light time and intensity as the weather and seasons change. In addition, adults of most parasitoid species feed on floral nectar and plant exudates in the field, whose nutritional content is more complex: although mainly composed of glucose, fructose, and sucrose, they also contain small amounts of other sugars, lipids, amino acids, and even proteins [52,53]. Moreover, our experiments did not take into account the effect of humidity, which is another important factor in parasitoid reproduction [54]. Therefore, we need to broaden our experimental design in the future, not only to rear S. caerulea indoors but also to simulate its development and reproduction in the field, for better control of the target pest.

Conclusions

In conclusion, the present study demonstrated the best conditions for rearing S. caerulea: 30 to 33 °C, 12 to 14 h of daily light, and sucrose or honey as the supplementary nutrition for the adults. To reduce the cost of practical propagation, sucrose would be the more suitable supplementary nutrition. Future studies need to examine the role of biotic and abiotic factors in parasitoid biocontrol under field conditions.

Data Availability Statement: The data presented in this study are available upon request from the corresponding author. The data are not publicly available because the project is not yet completed.
Exploiting Jamming Attacks for Energy Harvesting in Massive MIMO Systems

In this paper, the performance of an RF energy harvesting scheme for multi-user massive multiple-input multiple-output (MIMO) is investigated in the presence of multiple active jammers. The key idea is to exploit the jamming transmissions as an energy source to be harvested at the legitimate users. To this end, the achievable uplink sum rate expressions are derived in closed form for two different antenna configurations. An optimal time-switching policy is also proposed to ensure user fairness in terms of both harvested energy and achievable rate. Besides, the essential trade-off between the harvested energy and the achievable sum rate is quantified in closed form. Our analysis reveals that massive MIMO systems can make use of the RF signals of jamming attacks to boost the amount of harvested energy at the served users. Numerical results illustrate the accuracy of the derived closed-form expressions against Monte-Carlo simulations.

I. INTRODUCTION

Mobile and IoT devices in beyond-5G technologies will fundamentally be empowered by artificial intelligence (AI) processing, which makes them more power hungry due to the high computational loads [1]. Meanwhile, energy harvesting from ambient radio frequency (RF) signals has emerged as a sustainable solution to the tremendous growth in the energy consumption of wireless networks. Moreover, as wireless network densification continues and communication distances become much shorter, wireless energy harvesting will be increasingly meaningful for applications with limited-capacity power sources. However, some practical limitations inhibit harvesting enough energy at the receivers, such as the low RF-to-direct-current (DC) conversion efficiency and the severe path loss between the transmitter and the receiver. To this end, smart antenna technologies such as massive MIMO can be used to enhance the performance of energy harvesting and thereby boost the overall energy efficiency and achievable data rate [2].

Massive MIMO as a concept accommodates the massive connectivity requirement that is essential for future wireless cellular networks to support IoT and machine-type communications (MTC) [3], [4]. Nevertheless, universal wireless connectivity is appealing not only to the envisioned beneficiaries of these networks but also to bad actors, who can wreak havoc by actively eavesdropping and jamming. Moreover, jamming devices used to be implemented on expensive hardware, mostly for military purposes, but currently it is possible to obtain a jamming device by modifying the firmware of commodity hardware [5]. Additionally, active jammers need a sufficiently high energy budget for each transmission block, which is allocated between pilot spoofing attacks during the channel training phase and jamming the legitimate communications during the data transmission phase [6]. Accordingly, and differently from other works in the context of energy harvesting, we consider utilizing the jamming energy transmitted by the active eavesdroppers to be harvested at the legitimate users.

Towards detecting jamming threats, there have been abundant research works with various effective approaches proposed. Specifically, the authors in [7] have conducted a survey on the methods that detect active attacks on massive MIMO systems.
On the other hand, jamming defense strategies for massive MIMO are developed in [8], in which secret keys are employed to encrypt and protect the legitimate communications from jamming attacks. In [9], a jamming-resistant receiver scheme has been proposed that utilizes the high spatial resolution of massive MIMO to enhance the robustness of uplink transmissions. Moreover, the multi-antenna base station (BS) can be used in such scenarios to provide physical-layer security by exploiting the large antenna arrays to simultaneously transmit confidential signals towards the legitimate user nodes and artificial noise (AN) sequences towards the eavesdroppers, perturbing their intercepted signals and hence improving the secrecy performance. Furthermore, the security aspects of massive MIMO systems in the presence of active and passive eavesdroppers have been extensively studied [9]-[14]. For instance, reference [14] investigates an AN-aided transmitter for secure communications in the presence of attackers capable of both jamming and eavesdropping. Nevertheless, we will not delve into similar mechanisms aimed at strengthening security or incapacitating the eavesdropper's ability to decode confidential data, which are beyond the scope of this work. Instead, we focus on exploiting the jamming signals of the active attackers as a viable source for energy harvesting, thus increasing the energy efficiency of massive MIMO networks. Specifically, the objective of this paper is to make full use of the RF energy in the wireless environment.

The concept of energy harvesting has been widely adopted in massive MIMO systems; some of the prior related works can be outlined as follows. In [15], an energy harvesting strategy has been developed and analyzed to power the secondary users of a cognitive radio system through harvesting energy from primary-user transmissions. Reference [16] has proposed and analyzed an architecture for a self-backhauled, energy harvesting small-cell network with massive MIMO. In [17], the trade-off between the achievable rate and harvested energy has been analyzed at massive MIMO receivers, and a low-complexity antenna-partitioning algorithm for energy harvesting massive MIMO systems is proposed. Additionally, in a secrecy transmission over a multi-user MIMO system, an AN-injection scheme has been employed to mask the desired information signals for secrecy, while the served users harvest energy from both the information-bearing signal and the AN [18]. Throughout the open literature, exploiting the transmissions of jammers as an energy source has not yet been investigated. This observation motivates this work to study the performance of a massive MIMO system that utilizes the jamming attacks to power the legitimate users. The main technical contributions of this work can be summarized as follows:

- A novel RF energy harvesting scheme for massive MIMO is proposed to fully utilize the RF energy in wireless environments. Specifically, the legitimate user nodes can harvest energy from the jamming transmissions of the active attackers and utilize this energy for sending their payload data to the BS. To the best of our knowledge, this has not been done in the existing studies in the literature. We consider the uplink transmission since we want to evaluate the achievable data rate of the users when the proposed energy harvesting scheme is employed.
The basic performance metrics of the proposed energy-harvesting scheme are derived for the finite/infinite antenna regimes at the BS, taking into account the cumulative impact of imperfect channel state information (CSI) and co-channel interference. The achievable uplink data rate is derived based on the worst-case Gaussian technique for the case of finitely many antennas at the BS. This technique is practically useful when the instantaneous CSI is not available. The rest of the paper is structured as follows. The considered system model is presented in Section II. Next, the performance metrics are derived for both limited and unlimited antenna arrays at the BS in Section III. Numerical results and discussions on our proposed scheme are provided in Section IV. Finally, Section V summarizes the concluding remarks.

Notation: Z^H and [Z]_{i,j} denote the Hermitian transpose and the (i,j)th element of the matrix Z, respectively. The absolute value and the norm operator are denoted by |.| and ||.||, respectively. E[z] and Var[z] are the expected value and the variance of z, and the operator ⊗ denotes the Kronecker product. Ei(z) is the exponential integral function for positive values of the real part of z. Finally, the notation Z ~ CN(0, Σ) denotes that Z is circularly symmetric complex Gaussian distributed with zero mean and covariance matrix Σ.

II. SYSTEM, CHANNEL, AND SIGNAL MODELS

A. System and channel models

We consider the uplink transmission of a multi-user MIMO network that consists of an M-antenna BS serving K randomly distributed user nodes (U_k) for k ∈ {1, ..., K} in the presence of N randomly located active jammers (J_n) for n ∈ {1, ..., N}. Each user node and jammer is herein assumed to be equipped with a single antenna, as shown in Fig. 1 (depicting the user nodes, the massive MIMO base station, and the jammers). Let G ∈ C^(M×K) be the channel matrix between the BS and the user nodes, which can be modeled as

G = G̃ D_G^(1/2), (1)

where G̃ ~ CN_{M×K}(0_{M×K}, I_M ⊗ I_K) accounts for the independent small-scale Rayleigh fading, and the diagonal matrix D_G captures the large-scale fading, including path loss and shadowing. User nodes can harvest energy from the jamming transmissions of the active eavesdroppers through the jamming channel H, which can be defined as

H = H̃ D_H^(1/2), (2)

where H̃ ~ CN_{K×N}(0_{K×N}, I_K ⊗ I_N) captures the independent small-scale Rayleigh fading, and D_H accounts for the large-scale fading of the energy harvesting channel. Here, the elements of D_H can be vectorized as vec(D_H) = [β_{H_1}, ..., β_{H_N}]^T. A block-fading model has been considered in this analysis, where the channel remains constant during a coherence block of T_C symbol times, which is practically computed as the product of the coherence time and the coherence bandwidth. Signals consisting of T_C symbols can be transmitted in a coherence block, and these signals can be represented by vectors of length T_C. Specifically, the considered massive MIMO system operates in time-division duplex (TDD) mode, and the uplink transmission coherence block (T_C) of each user node is divided into three orthogonal time-slots, as depicted in Fig. 2. At the beginning of each coherence block, all user nodes simultaneously transmit orthogonal pilot sequences during (T_p) to the BS for estimating their respective channels. Afterwards, user nodes harvest energy during αT, where α ∈ (0, 1) is the time-switching factor and T = T_C - T_p. The user nodes utilize the harvested energy for data transmission during the remaining time duration (1-α)T. Meanwhile, we assume that the active jammers are constantly transmitting to jam the user nodes.
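For concreteness, the channel model in (1)-(2) can be simulated as follows. This is a minimal NumPy sketch assuming diagonal large-scale matrices D_G and D_H with given coefficients beta_G and beta_H; the function name and arguments are illustrative, not taken from the paper.

```python
import numpy as np

def draw_channels(M, K, N, beta_G, beta_H, rng=np.random.default_rng(0)):
    """One realization of G (BS-user) and H (user-jammer) channels.

    beta_G: length-K large-scale coefficients (path loss/shadowing), users.
    beta_H: length-N large-scale coefficients, jammers.
    """
    # i.i.d. CN(0, 1) small-scale Rayleigh fading
    G_tilde = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    H_tilde = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
    # G = G_tilde D_G^(1/2),  H = H_tilde D_H^(1/2)
    G = G_tilde @ np.diag(np.sqrt(beta_G))
    H = H_tilde @ np.diag(np.sqrt(beta_H))
    return G, H
```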
B. Acquisition of channel state information

In practice, the channels are estimated during the uplink channel training phase (T_p) at the BS through uplink pilot sequences transmitted by the user nodes. Then, these uplink channel estimates are used by the BS to obtain the downlink channels via the channel reciprocity that holds in TDD systems [19]. Specifically, at the beginning of the channel training phase, all user nodes transmit their pilot sequences Φ_k ∈ C^(1×T_p), where T_p is the pilot sequence length, satisfying Φ_k Φ_{k'}^H = δ[k - k'] for k, k' ∈ {1, ..., K}. Accordingly, the pilot signal received at the BS can be written as

Y_p = sqrt(P_p T_p) G Φ + N_p, (3)

where Φ = [Φ_1^T, ..., Φ_K^T]^T, P_p is the average pilot transmit power of the user nodes, while N_p is an additive white Gaussian noise (AWGN) matrix whose elements are independent and identically distributed (i.i.d.) CN(0, 1) random variables. After projecting Y_p onto Φ_k, the minimum mean square error (MMSE) estimate of g_k can be derived as [20]

ĝ_k = (sqrt(P_p T_p) β_{G_k} / (P_p T_p β_{G_k} + 1)) Y_p Φ_k^H. (4)

The MMSE estimate of G can then be written as Ĝ = [ĝ_1; ...; ĝ_k; ...; ĝ_K]. The true channel G can be written in terms of its estimate as

G = Ĝ + E_G, (5)

where E_G is the channel estimation error matrix. From the orthogonality property of the MMSE estimator, Ĝ and E_G are statistically independent and distributed as Ĝ ~ CN(0, D̂_G) and E_G ~ CN(0, D_G - D̂_G), respectively, where D̂_G is a diagonal matrix whose k-th diagonal element is given by

β̂_{G_k} = P_p T_p β_{G_k}^2 / (P_p T_p β_{G_k} + 1).

C. Energy harvesting

During the second portion of the uplink time-slot, having a length of αT, user nodes harvest energy from the jamming transmissions. Thus, the average harvested energy at the k-th user can be expressed as

E_{h_k} = η α T ||h_k P_E^(1/2)||^2, (6)

where P_E = diag(P_{E_1}, ..., P_{E_n}, ..., P_{E_N}) accounts for the jamming powers of the active attackers for n ∈ {1, ..., N}, h_k is the k-th row of H, and η is the RF-to-DC conversion efficiency. User nodes utilize the harvested energy in (6) for transmitting their payload data to the BS during the remaining time duration (1 - α)T, and thus, the uplink transmission power of the k-th user node can be defined as

P_{d_k} = E_{h_k} / ((1 - α) T) = (η α / (1 - α)) ||h_k P_E^(1/2)||^2. (7)
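A minimal sketch of how (6) and (7) map to code, using the identity ||h_k P_E^(1/2)||^2 = Σ_n P_{E_n} |h_{k,n}|^2; names are illustrative.

```python
import numpy as np

def uplink_power(h_k, P_E, eta, alpha, T):
    """Harvested energy (6) and resulting uplink power (7) for user k.

    h_k: length-N jamming channel row; P_E: length-N jammer powers.
    """
    harvested = eta * alpha * T * np.sum(P_E * np.abs(h_k) ** 2)  # E_hk
    P_dk = harvested / ((1.0 - alpha) * T)                        # spent over (1-alpha)T
    return harvested, P_dk
```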
D. Signal model for uplink transmission

In this subsection, the signal model for the massive MIMO uplink transmission is presented. The received signal at the BS after applying the zero-forcing (ZF) detector can be written as

r = Ŵ (G P_d^(1/2) x + n), (8)

where Ŵ is the ZF detector at the BS and is defined as

Ŵ = (Ĝ^H Ĝ)^(-1) Ĝ^H. (9)

In (8), P_d = diag(P_{d_1}, ..., P_{d_k}, ..., P_{d_K}) is a K × K diagonal matrix representing the uplink transmit powers of the K user nodes obtained from (7), x is the vector of unit-power data symbols, and n is the AWGN vector at the BS. In order to capture the joint impact of detection uncertainty, interference, and filtered AWGN, the k-th user data stream received at the BS is written by using (8) as [19]

r_k = sqrt(P_{d_k}) E[ŵ_k g_k] x_k + ñ_k, (10)

where the first term accounts for the desired signal, and the second term represents the effective noise capturing the collective impacts of the interference arising from detection uncertainty with imperfect CSI, inter-user interference, and filtered AWGN, which is expressed as

ñ_k = sqrt(P_{d_k}) (ŵ_k g_k - E[ŵ_k g_k]) x_k + Σ_{k'≠k} sqrt(P_{d_{k'}}) ŵ_k g_{k'} x_{k'} + ŵ_k n. (11)

III. PERFORMANCE ANALYSIS

A. Uplink sum rate for finite number of antennas

An achievable uplink sum rate expression, which tightly bounds the ergodic sum rate, is derived by invoking the worst-case Gaussian approximation technique as follows [19]:

R = ((1 - α) T / T_C) Σ_{k=1}^{K} log2(1 + γ_k), (12)

where γ_k is the effective signal-to-interference-plus-noise ratio (SINR) of the k-th user node and is obtained as

γ_k = |E[sqrt(P_{d_k}) ŵ_k g_k]|^2 / (BU_k + Σ_{k'≠k} UI_{kk'} + σ_n^2 E[||ŵ_k||^2]), (13)

where BU_k and UI_{kk'} are the beamforming gain uncertainty and the interference caused by the other users, respectively, which can be defined as

BU_k = Var(sqrt(P_{d_k}) ŵ_k g_k), UI_{kk'} = E[P_{d_{k'}} |ŵ_k g_{k'}|^2]. (14)

Then, by evaluating the expectation and variance terms in (13), the achievable uplink sum rate for the finite antenna regime at the BS can be derived in closed form as (15) (see Appendix A for the derivation).

B. Uplink sum rate for infinite number of antennas (M → ∞)

In this subsection, the asymptotic uplink sum rate is derived when the number of antennas at the BS grows unbounded with respect to the number of served user nodes, i.e., the number of users (K) is kept at an arbitrary finite value against M. Thus, the transmit power at the user nodes can be scaled down inversely proportionally to the number of antennas at the BS, and by invoking (7), the asymptotic SINR of the k-th user node can be derived as

γ_k^∞ = (η α ζ_{G_k} / ((1 - α) σ_n^2)) ||h_k P_E^(1/2)||^2. (16)

Next, by using (16), the achievable uplink sum rate at the BS can be derived in closed form as (17) (see Appendix B for the derivation), where A_k = η α ζ_{G_k} / ((1 - α) σ_n^2), Ω_j is given in (28), and Ei(·) is the exponential integral function [21].

Remark 1: The uplink sum rate and the harvested energy are increasing functions of N; consequently, massive MIMO systems can take advantage of the jamming attacks for boosting their achievable rate, while evidently maintaining the secrecy performance through some secure communication techniques.

C. Optimization of the time-switching factor

In this subsection, we show that the time-switching factor can be optimized to jointly guarantee user-fairness in terms of the harvested energy and the achievable uplink rate of the user nodes. Based on the maximum fairness criterion, a max-min optimization problem can be formulated by first setting the user SINR targets equal to a common SINR and then searching for the maximum value of that common SINR. Since the objective function of this optimization problem is a monomial and the constraints are posynomials, it is a geometric program, which can be solved by using the CVX tool for disciplined convex programming to find the optimal time-switching factor α*.
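The paper solves the max-min problem as a geometric program with CVX; as a simple numerical stand-in, the optimal time-switching factor can also be located by a one-dimensional grid search over α, as sketched below. The function sinr_fn and the grid are assumptions for illustration.

```python
import numpy as np

def optimize_alpha(sinr_fn, grid=np.linspace(0.01, 0.99, 99)):
    """Max-min fair time-switching factor via 1-D grid search.

    sinr_fn(alpha) must return the length-K vector of user SINRs for a
    given alpha (e.g., by evaluating (13) or (16)); this search is a
    stand-in for the geometric-program/CVX solution described above.
    """
    best_alpha, best_min = None, -np.inf
    for a in grid:
        worst = np.min(sinr_fn(a))  # fairness: the worst user's SINR
        if worst > best_min:
            best_alpha, best_min = a, worst
    return best_alpha, best_min
```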
D. Rate-energy trade-off

In the time-switching protocol, the energy harvesting and data transfers take place in two orthogonal time-slots. Particularly, the harvested energy is a monotonically increasing function of α, while, on the contrary, the achievable uplink rate is a monotonically decreasing function of α. Then, the rate-energy trade-off can be obtained by first solving (6) for α and then substituting the result into (15), which yields the closed-form trade-off in (19) in terms of an auxiliary per-user quantity ψ_{d_k}.

Remark 2: The obtained energy-rate trade-off in (19) is optimal owing to the max-min user-fairness attained when setting the harvested energy target of each user to a common value Ē_h. Hence, the max-min optimal time-switching factor corresponding to any system operating point can be obtained by applying this optimal energy-rate trade-off.

Remark 3: Since users and jammers are spatially distributed, both the transmission and harvesting channels experience distinctive path losses within a coherence block. Therefore, user fairness must be jointly guaranteed in terms of both energy and rate in order to overcome the near-far effects and attain optimal levels of harvested energy and data rate.

IV. NUMERICAL RESULTS

In this section, we numerically evaluate the performance of the proposed energy harvesting scheme. To capture the effect of practical transmission impairments, the channel path loss is modeled as [PL]_dB = PL_0 + 10 v log10(d/d_0), where d, d_0, PL_0, and v are defined as the distance between the user nodes and the BS, a reference distance, the path loss at the reference distance, and the path-loss exponent, respectively. Specifically, the simulation parameters are set to v = 2.3, d_0 = 100 m, d_G = 100-300 m, d_H = 100-500 m, η = 0.7, σ_n^2 = 0 dBW, and P_p = 0 dBW. The coherence time is set to T_C = 1 ms, comprising 196 symbols, and the pilot length is T_p = K [19].
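A short sketch of the path-loss model used above; the reference loss PL_0 is not given in the text, so the value below is a placeholder.

```python
import numpy as np

def path_loss_db(d, d0=100.0, pl0=30.0, v=2.3):
    """Log-distance path loss [PL]_dB = PL0 + 10*v*log10(d/d0).

    pl0 (the loss at the reference distance d0) is not stated in the
    paper, so its value here is an assumption.
    """
    return pl0 + 10.0 * v * np.log10(np.asarray(d) / d0)

# Example: linear large-scale coefficient for a user 250 m from the BS
beta = 10 ** (-path_loss_db(250.0) / 10.0)
```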
In Fig. 3, the average achievable user rate at the BS in the finite antenna regime is plotted against the jamming power for different numbers of active jammers (N). The analytical uplink rate curves are plotted by using (15) and are compared to Monte-Carlo simulations of the uplink rate expression in (12). It can be readily seen that the achievable rate gradually increases when the jammers transmit at higher power levels because the more jamming power, the greater the energy harvested by the user nodes. Moreover, the existence of a fairly large number of jammers in the proximity of the served area contributes to harvesting more energy and eventually achieving a higher data rate. Our analysis is compared against the Monte-Carlo simulations to validate that the achievable rate derived by using the worst-case Gaussian technique provides a tight lower bound on the achievable rate.

Next, the relationship between the achievable sum rate at the BS and the harvested energy at the user nodes is investigated in the finite BS antenna regime and depicted in Fig. 4. We used the derived rate-energy trade-off expression in (19) to plot the analytical curves for different N. Clearly, the harvested energy increases with N, which leads to increased achievable sum rates due to the higher transmission powers of the user nodes. This observation validates the insights summarized in Remark 1, and the Monte-Carlo simulations validate our analysis in (19).

In Fig. 5, the achievable uplink sum rate at the BS is plotted versus the time-switching factor (α) for varying M. The asymptotic sum rate curves are plotted by using (17) and compared against Monte-Carlo simulations. It can be seen that the sum rate approaches its maximum at the optimal α and decreases after that. When α is small, the energy harvested by the user nodes is not sufficient to achieve a high sum rate. However, a too large α means more harvested energy but less residual time for the users' transmissions, which also deteriorates the achievable sum rate. Fig. 5 reveals that the theoretical/asymptotic sum rate limits in (17) can be achieved when the number of BS antennas grows large.

V. CONCLUSION

The feasibility of exploiting the jamming transmissions of active attackers for energy harvesting in massive MIMO systems has been investigated. Towards this end, a wireless-powered multi-user massive MIMO system has been considered, where user nodes harvest energy from the jammers and utilize it for information transmission. The system performance has been analyzed in a training-based massive MIMO setting with imperfectly estimated CSI at the BS and employing the time-switching protocol. The harvested energy, SINR, and sum rate expressions have been derived for finitely many BS antennas, where a tight sum rate expression is obtained in closed form. The asymptotic performance metrics are also provided when the number of BS antennas grows without bound. The optimization of the time-switching factor has been formulated to jointly guarantee user-fairness in terms of both the harvested energy and the achievable rate of the spatially distributed users. Our analysis concludes that adopting the proposed scheme in massive MIMO uplink transmissions for boosting the energy harvesting is practically feasible, and to validate this claim, the performance metrics have been analytically and numerically evaluated over different antenna configurations at the BS.

APPENDIX A
DERIVATION OF (15)

By using (5), the term ŵ_k g_k can be simplified as follows:

ŵ_k g_k = ŵ_k (ĝ_k + E_{G_k}) = 1 + ŵ_k E_{G_k}, (21)

where E_{G_k} is the k-th column of the estimation error matrix E_G. Since ŵ_k and E_{G_k} are uncorrelated and E_{G_k} is a zero-mean random variable, E[ŵ_k E_{G_k}] = 0, and therefore E[ŵ_k g_k] = 1. Next, the term accounting for the beamforming gain uncertainty BU_k can be derived by using (21), where X is a K × K central Wishart matrix with M degrees of freedom and covariance matrix I_K, for which E[Tr(X^(-1))] = K/(M - K) [22]. The next term, the inter-user interference UI_{kk'}, can be computed from (21): ŵ_k g_{k'} = ŵ_k E_{G_{k'}} for k' ≠ k. Since ŵ_k and E_{G_{k'}} are uncorrelated, the corresponding expectation can be evaluated accordingly. Similarly, the noise term is obtained in the same manner. Then, by substituting (22), (23), (24), and (25) into (13), the achievable rate of the k-th user at the BS can be derived as (15).

APPENDIX B
DERIVATION OF (17)

The obtained asymptotic SINR in (16) can be rewritten as γ_k^∞ = A_k Z, where A_k = η α ζ_{G_k} / ((1 - α) σ_n^2) and Z = ||h_k P_E^(1/2)||^2 = Σ_{j=1}^{N} Z_j is a sum of N independent exponentially distributed random variables, each element having a probability density function (PDF) of the form

f_{Z_j}(x) = (1 / (β_{H_j} P_{E_j})) exp(-x / (β_{H_j} P_{E_j})), x ≥ 0. (26)

However, when all the Z_j are independent but not identically distributed, with distinct average powers, the PDF of Z is given by [23]

f_Z(z) = Σ_{j=1}^{N} (Ω_j / (β_{H_j} P_{E_j})) exp(-z / (β_{H_j} P_{E_j})), z ≥ 0. (27)

Thus, the average sum rate can be written as the expectation of log2(1 + A_k z) over f_Z(z); by evaluating this integral using [21], the achievable sum rate can be expressed as shown in (17).
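The mixture PDF in (27) can be checked by Monte-Carlo simulation, as in the sketch below. The weights Ω_j here use the standard partial-fraction form for a sum of independent, non-identically distributed exponentials; since (28) is not reproduced in the text, this particular expression for Ω_j is an assumption.

```python
import numpy as np

# Monte-Carlo check of the mixture-of-exponentials PDF in (27) for
# Z = sum_j Z_j with distinct means b_j = beta_Hj * P_Ej.
rng = np.random.default_rng(1)
b = np.array([0.5, 1.0, 2.0])                      # example b_j values
Z = rng.exponential(b, size=(200_000, b.size)).sum(axis=1)

# Partial-fraction weights (assumed form of Omega_j, not quoted from (28))
Omega = np.array([np.prod([bj / (bj - bi) for bi in b if bi != bj]) for bj in b])
z = np.linspace(0.01, 15, 200)
pdf = sum(Om / bj * np.exp(-z / bj) for Om, bj in zip(Omega, b))

hist, edges = np.histogram(Z, bins=100, range=(0, 15), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(np.interp(centers, z, pdf) - hist)))  # small if (27) holds
```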
ATSFCNN: a novel attention-based triple-stream fused CNN model for hyperspectral image classification

Recently, the convolutional neural network (CNN) has gained increasing importance in hyperspectral image (HSI) classification thanks to its superior performance. However, most of the previous research has mainly focused on 2D-CNN, and the limited applications of 3D-CNN have been attributed to its complexity, despite its potential to enhance information extraction between adjacent channels of the image. Moreover, 1D-CNN is typically restricted to the field of signal processing as it ignores the spatial information of HSIs. In this paper, we propose a novel CNN model named attention-based triple-stream fused CNN (ATSFCNN) that fuses the features of 1D-CNN, 2D-CNN, and 3D-CNN to consider all the relevant information of the hyperspectral dataset. Our contributions are twofold: first, we propose a strategy to extract and homogenize features from 1D, 2D, and 3D CNNs. Second, we propose a way to efficiently fuse these features. This attention-based methodology adeptly integrates features from the triple streams, thereby transcending the former limitations of singular stream utilization. Consequently, it becomes capable of attaining elevated outcomes in the context of hyperspectral classification, marked by increased levels of both accuracy and stability. We compared the results of ATSFCNN with those of other deep learning models, including 1D-CNN, 2D-CNN, 2D-CNN+PCA, 3D-CNN, and 3D-CNN+PCA, and demonstrated its superior performance and robustness. Quantitative assessments, predicated on the metrics of overall accuracy (OA), average accuracy (AA), and kappa coefficient (κ), emphatically corroborate the preeminence of ATSFCNN. Notably, spanning the three remote sensing datasets, ATSFCNN consistently achieves peak levels of OA, quantified at 98.38%, 97.09%, and 96.93%, respectively. This prowess is further accentuated by concomitant AA scores of 98.47%, 95.80%, and 95.80%, as well as kappa coefficient values amounting to 97.41%, 96.14%, and 95.21%.

Introduction

Over the past few decades, hyperspectral imaging has gained increasing popularity owing to advancements in imaging technologies [1]. Hyperspectral imaging involves the acquisition of a series of images, where each pixel contains reflectance spectra ranging from visible and near-infrared (VNIR: 400-1000 nm) to short-wavelength infrared (SWIR: 1000-1700 nm), typically consisting of dozens or hundreds of channels. Because they contain both spectral and spatial information, hyperspectral images (HSIs) are ideal for applications in various fields, including remote sensing [2], cancer detection [3], agricultural crop classification [4], and cultural heritage preservation [5].
Efficiently exploiting the comprehensive information contained in HSIs always poses a challenge for researchers. Two common approaches for utilizing this information are clustering and classification. Clustering is the process of identifying natural groupings or clusters within multidimensional data based on some similarity measures [6]. On the other hand, classification is a technique used to predict group membership for data instances [7]. In real-world applications of HSIs, the datasets are often labeled or partially labeled. For instance, labeled datasets are used in tasks such as brain tumor detection [8], skin cancer classification [9], artwork authentication [10], pigment classification [11], plant disease recognition [12], and fruit and vegetable classification [13]. Considering that classification is more suitable for labeled datasets, it has garnered increasing attention in recent years for exploiting the information in HSIs. The classification of HSIs can be broadly categorized into two groups. The first group focuses on spectral aspects, utilizing the spectral information within the images. The second group primarily emphasizes the spatial aspect, utilizing the spatial patterns and relationships within the images.

The initial classification of HSIs from a spectral aspect was accomplished through decision trees [14], which employed a series of rules for determining the elements in the classification of hyperspectral imaging. In 1997, one study proposed Block-Based Maximum Likelihood Classification and conducted a comparative analysis of this method against conventional statistical methods, demonstrating its superiority [15]. In 2000, the research conducted in [16] first utilized the support vector machine (SVM) for HSI classification. Later, in 2005, the research in [17] demonstrated that a framework combining the Random Forest method was able to improve the performance of the classification. In 2008, the work in [18] proposed a sparse multinomial logistic regression for feature selection in the classification of hyperspectral data, and their results proved its effectiveness. Although these methods have improved the accuracy of HSI classification, they primarily focus on the spectral aspect of the data without considering spatial information.

During the same period, some other researchers explored the spatial aspect of HSI classification by incorporating feature extraction techniques to analyze spatial information. In 2005, researchers employed the leading principal components of the hyperspectral imagery as base images for an extended morphological profile. Their proposed approach outperformed a Gaussian classifier with several different feature extraction and statistical estimation methods [19]. In 2009, the research in [20] presented a novel feature-selection approach to the classification of HSIs. The experimental results emphasized the robustness and generalization properties of the classification system in comparison to standard techniques. However, it should be noted that these studies primarily focused on the spatial aspect of classification while neglecting the valuable spectral information in the HSIs.
Besides, in recent years, there has been a growing focus on the integration of spectral and spatial information for the classification of HSIs. One notable study conducted in [21] introduced a novel approach that addressed the challenge of handling unordered pixels' spectra while incorporating spatial information. They proposed a neighboring filtering kernel to achieve a spatial-spectral kernel sparse representation, which led to enhanced classification results. Building upon this progress, one research work in [22] proposed a matrix-based spatial-spectral framework that aimed to capture both the local spatial contextual information and the spectral characteristics of all bands simultaneously for each pixel. Their approach showed improvements compared to previous methods, suggesting the importance of incorporating spatial information alongside spectral data in HSI classification tasks.

In the past decade, significant advancements have been witnessed in the domain of deep learning, with its application and development reaching remarkable milestones. Deep learning methodologies have demonstrated their ability to achieve groundbreaking outcomes across diverse domains, including but not limited to speech recognition [23], financial loan default prediction [24], and recommendation systems [25]. In comparison to conventional methods employed in the past, deep learning approaches exhibit distinct advantages characterized by their higher precision and automatic extraction of features. These unique features render them exceptionally well-suited for handling HSIs, which encompass a greater degree of complexity and comprehensiveness than other forms of imagery. Prominent deep learning models encompass the multi-layer perceptron (MLP) [26], convolutional neural networks (CNNs) [27], recurrent neural networks [28], generative adversarial networks [29], and autoencoders [30].

Among the various deep learning methodologies, CNNs stand out in terms of image processing and analysis. Notably, CNNs offer automatic feature learning [31], allowing them to extract relevant features from hyperspectral data without explicit feature engineering. Furthermore, CNNs demonstrate flexibility in structure, enabling the incorporation of different layers and architectures tailored to specific classification tasks [32]. Additionally, CNNs benefit from parameter sharing and parallel computing, resulting in efficient processing and improved computational performance [33].

Given these advantages, CNNs are the most suitable choice for hyperspectral imaging classification. Several studies have already showcased the applicability and effectiveness of CNNs in this domain. For instance, in 2015, some researchers in [34] proposed a variant 1D-CNN method specifically designed for HSI classification. Through a comparative evaluation against SVM-based and conventional DNN-based classifiers, the proposed method achieved superior accuracy across all experimental datasets. Furthermore, the study in [35] introduced a 2D-CNN model for HSI classification and conducted a comprehensive comparison with various SVM-based classifiers. Remarkably, their model outperformed all other approaches across all four datasets considered.
A potential concern in the realm of hyperspectral imaging classification is that many CNN applications primarily rely on 1D or 2D CNNs, which may not effectively capture all the crucial information embedded within HSIs. Consequently, researchers have endeavored to address this limitation by proposing alternative methodologies. For instance, in 2017, the study in [36] proposed a novel multi-scale 3D deep CNN (M3D-DCNN). This method offers an end-to-end solution by fusing the spatial and spectral features extracted from HSI data. Comparative evaluations demonstrated that the performance of the M3D-DCNN approach was either superior to or on par with other state-of-the-art techniques on standard datasets. Later, in 2019, researchers utilized adjacent-depth feature combination modules to extract multi-level refined features for each single-modal input image, taking into account the local information and visual details captured at different depths [37]. Nevertheless, their methods may not be suitable for some cases of HSI classification where the number of convolutional layers is no more than two. In 2020, the research in [38] proposed a multi-layer CNN fusion model for classification. This model incorporates pooling-layer features from shallow and deep CNN models and employs an autoencoder for reconstructing discriminative features. The experimental results showcased the superior performance of their model compared to previous CNN applications using only 1D or 2D CNNs.

The attention mechanism offers two notable advantages. Firstly, it enables the amplification of valuable features while minimizing the influence of factors that contribute insignificantly to the results. Secondly, it automatically assigns weights to different features, allowing for adaptive feature selection. These advantages render the attention mechanism highly suitable for integration into CNN structures. Building upon this notion, in 2020, some researchers in [39] proposed an attention-aided CNN model for the spectral-spatial classification of HSIs. Specifically, a spectral attention sub-network and a spatial attention sub-network are proposed for spectral and spatial classification, respectively. The results obtained from their proposed model exhibit superior performance when compared to several state-of-the-art CNN-related models. However, it is important to note that their approach solely employs a 2D-CNN architecture, which fails to fully utilize the wealth of information present in HSIs. Subsequently, in 2021, the study in [40] proposed a spatial-spectral dense CNN framework with a feedback attention mechanism, called FADCNN, for hyperspectral imaging classification. The proposed architecture combines spectral and spatial features in a compact connection style, thereby facilitating the extraction of comprehensive information through two separate dense CNN networks. Although their experiments and analysis demonstrate that their model achieves excellent accuracy, it is worth noting that their approach overlooks the information exchange among adjacent channels in HSIs.
Despite attempts by certain researchers to incorporate the attention mechanism into CNN structures, two prevailing limitations remain. Primarily, their utilization has been restricted to 2D-CNN methods, thereby disregarding the potential benefits of 3D-CNN and the exploitation of information among adjacent channels in hyperspectral imaging. Additionally, the attention mechanism has not been applied to feature fusion in their investigations, which focus solely on 2D-CNNs. However, the attention mechanism has demonstrated its advantageous role in feature fusion, as observed in rumor detection on social media [41] and visual question answering in natural language processing [42].

Overall, the existing methods for hyperspectral imaging classification suffer from four primary limitations. Firstly, they overlook the inclusion of 3D information from HSIs, which holds the potential for capturing information among adjacent channels. Secondly, they prioritize feature selection optimization while neglecting the crucial process of feature fusion across different streams. Thirdly, the prevailing focus remains on statistical models or feature extraction, with neural networks, a more promising approach in recent years, rarely employed. Lastly, despite the emergence of the attention mechanism as a novel concept in the past decade, its integration with hyperspectral imaging has been scarce, particularly in the context of feature fusion across multiple streams.

Motivated by the aforementioned works, the present manuscript undertakes the task of addressing the intricacies inherent in HSI classification. This is accomplished through a holistic methodology that encompasses the entirety of the information facets intrinsic to the hyperspectral dataset. At its core, this methodology involves the extraction and feature fusion of all aspects of the information, thereby fostering heightened accuracy and stability in the domain of hyperspectral classification. The principal contributions of our study are delineated as follows: (1) Integration of unified information from triple-stream CNNs: In this article, we embark on the utilization and feature fusion of the information derived from 1D CNNs (1DCNN), 2D CNNs (2DCNN), and 3D CNNs (3DCNN) for the purpose of hyperspectral classification. Notably, prior research undertakings have predominantly focused on singular streams of the information present within the hyperspectral dataset. While some scholars have concentrated their endeavors on spectral attributes via 1DCNN, others have accentuated spatial characteristics through 2DCNN. Therefore, the utilization of 3DCNN has been a relatively underexplored domain. Nevertheless, the exploration of 3DCNN holds intrinsic significance due to the fact that it uniquely encapsulates the information among the adjacent channels in the hyperspectral dataset. Within this article, we not only extract the information from the triple streams but also propose a novel model to address the intricate challenge of feature fusion. (2) Pioneering application of attention mechanisms: The concept of attention mechanisms, originating and gaining popularity within the domain of Natural Language Processing, has garnered limited application in the realm of CNN models for hyperspectral classification, commencing around 2020. Notably, their integration has predominantly gravitated toward the confines of 2DCNN architectures. Our innovation lies in the pioneering utilization of attention mechanisms for the purpose of feature fusion among the information extracted from the 1DCNN, 2DCNN,
and 3DCNN modalities. This endeavor marks one of the initial instances wherein attention mechanisms are harnessed for feature fusion, thereby enhancing the accuracy of hyperspectral classification methodologies.

A typical CNN ends with fully connected layers [43]. The last fully connected layer serves as the loss layer, which computes the loss or error, a penalty for the discrepancy between the desired and actual output [45]. In the loss layer, various activation functions are theoretically available, including Sigmoid [46], Linear [47], Tanh [48], and ReLU [49].

1D-CNN

As shown in figure 1, the theoretical architecture of a 1D-CNN model typically consists of several parts: the Input Layer, Convolutional Layer C1, Convolutional Layer C2, Max-Pool Layer M1, Dense Layer D1, and Output Layer.

In a 1D-CNN model, the input layer consists of a 1D array with dimensions of N_1 × 1. The subsequent convolutional layer C1 has F different feature maps, where each feature map is an array with dimensions of N_2 × 1. If the kernel size is S_1 × 1, then their relationship satisfies equation (1):

N_2 = N_1 - S_1 + 1. (1)

The convolutional layer C2 has a similar structure to the convolutional layer C1, with the difference being that N_3 is smaller than N_2. When the kernel size is S_2 × 1, it satisfies a similar equation:

N_3 = N_2 - S_2 + 1. (2)

In the max-pooling layer, if the kernel size is S_3 × 1, the relation N_4 = N_3 / S_3 holds. Subsequently, we need to flatten the results into a dense layer, whose size satisfies N_5 = F × N_4. Finally, for the output layer, its dimension N_6 will be the same as the number of different classes in the dataset. The convolution itself, together with its activation, can be written as

X_{a,b}^{m} = Σ_{c=1..F} Σ_{i=-α..α} ω_{a,b,c}^{i} Y_{a-1,c}^{m+i} + B_{a,b}, Y_{a,b}^{m} = σ(X_{a,b}^{m}). (3)

These equations denote that, in the a-th layer and in the b-th feature map, the result is obtained at point m along the 1D array. F represents the total number of feature maps in the (a-1)-th layer. α represents the extended width of the kernel; thus, the width of the kernel is (1 + 2α). Y_{a-1,c}^{m+i} represents the activation values in the kernel area of the (a-1)-th layer in each of the feature maps, while ω_{a,b,c}^{i} represents the corresponding weights of this specific kernel. B_{a,b} stands for the bias of the a-th layer and the b-th feature map. Once X_{a,b}^{m}, the sum of the weighted values and the bias, is obtained, the activation function σ (as equation (3) suggests) is applied, so that this result is converted into Y_{a,b}^{m}, which represents the activation value in the a-th layer and the b-th feature map.

2D-CNN

The 2D-CNN network begins with an input layer that takes in a 2D image with dimensions of N_1 × N_1. The first convolutional layer, C1, consists of F kernels with a size of S_1 × S_1, resulting in F feature maps with dimensions of N_2 × N_2, where N_2 = N_1 - S_1 + 1. Similarly, the second convolutional layer, C2, also has F feature maps but with dimensions of N_3 × N_3, where N_3 = N_2 - S_2 + 1, when using a kernel size of S_2 × S_2. For the max-pooling layer, with a kernel size of S_3 × S_3, the output size is calculated using N_4 = N_3 / S_3. The dense layer, D1, has a size of N_5 = F × N_4 × N_4, where F is the number of filters and N_4 is the output size of the max-pooling layer. Finally, the output layer, which corresponds to the number of classification types, has a size of N_6.
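The layer-size bookkeeping above reduces to simple arithmetic; the following sketch covers the 1D chain (the 2D case is analogous, with squared spatial factors), and the sample values are only an example.

```python
def conv1d_sizes(n1, s1, s2, s3, f):
    """Layer sizes for the 1D-CNN chain above (valid padding).

    n1: input length; s1, s2: conv kernel sizes; s3: max-pool size;
    f: number of feature maps in the last conv layer.
    """
    n2 = n1 - s1 + 1          # after Conv C1, eq. (1)
    n3 = n2 - s2 + 1          # after Conv C2, eq. (2)
    n4 = n3 // s3             # after Max-Pool M1
    n5 = f * n4               # Flatten -> Dense D1 input size
    return n2, n3, n4, n5

print(conv1d_sizes(n1=156, s1=3, s2=3, s3=2, f=32))  # e.g., 156 channels
```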
The convolution in the 2D case is computed analogously as

X_{a,b}^{m,n} = Σ_{c=1..F} Σ_{i=-α..α} Σ_{j=-β..β} ω_{a,b,c}^{i,j} Y_{a-1,c}^{m+i,n+j} + B_{a,b}, (4)

Y_{a,b}^{m,n} = σ(X_{a,b}^{m,n}). (5)

The above equations describe how the result at a specific point (m, n) in the b-th feature map of the a-th layer is calculated. For the a-th layer, F represents the total number of feature maps in the previous, (a-1)-th, layer. α and β represent the extended width and height of the kernel, respectively; therefore, the size of the kernel is (1 + 2α) × (1 + 2β). Y_{a-1,c}^{m+i,n+j} is the activation value in the kernel area of the (a-1)-th layer for the c-th feature map. ω_{a,b,c}^{i,j} is the corresponding weight of this specific kernel. B_{a,b} represents the bias for the b-th feature map in the a-th layer. After obtaining the value of X_{a,b}^{m,n}, which is the sum of the weighted values and the bias, the activation function σ (as in equation (5)) is applied to obtain the activated value Y_{a,b}^{m,n} in the b-th feature map of the a-th layer.

3D-CNN

The input to a 3D-CNN model typically consists of a 3D dataset with the dimensions N_1 × N_1 × F_1. When the kernel used for the convolutional process is of size S_1 × S_1 × S_1, the following two equations must be satisfied:

N_2 = N_1 - S_1 + 1, F_2 = F_1 - S_1 + 1. (6)

The convolutional layer C1 thus has dimensions of N_2 × N_2 × F_2; with a kernel size of S_2 × S_2 × S_2 between C1 and C2, the layer C2 requires satisfying the equations

N_3 = N_2 - S_2 + 1, F_3 = F_2 - S_2 + 1. (7)

During the max-pooling process, with a kernel size of S_3 × S_3 × S_3, the equations N_3 / S_3 = N_4 and F_3 / S_3 = F_4 are used to calculate the resulting dimensions. The value of N_5 in the dense layer D1 is determined by N_5 = F × N_4 × N_4 × F_4. Finally, the number of classification types determines the value of N_6 in the output layer. The 3D convolution and its activation are computed as

X_{a,b}^{m,n,p} = Σ_{c=1..F} Σ_{i=-α..α} Σ_{j=-β..β} Σ_{k=-γ..γ} ω_{a,b,c}^{i,j,k} Y_{a-1,c}^{m+i,n+j,p+k} + B_{a,b}, (8)

Y_{a,b}^{m,n,p} = σ(X_{a,b}^{m,n,p}). (9)

The equations above show the result at point (m, n, p) in the 3D coordinate system in the a-th layer and the b-th feature map. F represents the total number of feature maps in the (a-1)-th layer. α represents the extended width of the kernel; thus, the width of the kernel is (1 + 2α). β represents the extended height of the kernel; thus, the height of the kernel is (1 + 2β). γ represents the extended depth of the kernel; thus, the depth of the kernel is (1 + 2γ). Y_{a-1,c}^{m+i,n+j,p+k} represents the activation values in the kernel area of the (a-1)-th layer in each feature map, while ω_{a,b,c}^{i,j,k} corresponds to the weights of this specific kernel. B_{a,b} represents the bias of the a-th layer and the b-th feature map. Once we obtain X_{a,b}^{m,n,p}, which is the sum of the weighted values and the bias, we apply the activation function σ (as suggested by equation (9)) to convert this result into Y_{a,b}^{m,n,p}, the activated value in the a-th layer and the b-th feature map.
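To make the convolution equations concrete, the following naive sketch evaluates a single output activation exactly as written in (4)-(5), with ReLU standing in for σ; the function and array layout are illustrative, not the implementation used in the paper.

```python
import numpy as np

def conv2d_point(Y_prev, W, B, m, n):
    """One output activation X[m, n] of the 2D convolution in (4)-(5).

    Y_prev: previous layer's feature maps, shape (F, height, width).
    W: weights for one output feature map, shape (F, 1+2a, 1+2b).
    B: bias of this feature map. (m, n) must be an interior point.
    """
    F, ka, kb = W.shape
    a, b = (ka - 1) // 2, (kb - 1) // 2
    x = B
    for c in range(F):
        for i in range(-a, a + 1):
            for j in range(-b, b + 1):
                x += W[c, i + a, j + b] * Y_prev[c, m + i, n + j]
    return max(x, 0.0)  # ReLU standing in for the activation sigma
```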
Motivation

Hyperspectral datasets contain both spectral and spatial information. However, 1D-CNNs only extract information from the spectral aspect while ignoring spatial information and information among adjacent slices. On the other hand, 2D-CNNs consider spatial information but fail to extract both the spectral information and the potential information among adjacent slices. To compensate for these disadvantages, we need to use 3D-CNNs, as they are better suited for capturing information among adjacent slices in hyperspectral datasets. In summary, the imperative lies in obtaining more comprehensive results in the process of feature extraction, which is significant in yielding superior and more robust outcomes during subsequent feature fusion and classification. Therefore, it remains essential to conduct a thorough examination of hyperspectral data from all three pertinent aspects to attain this objective.

In addition, even the former research efforts that try to combine information from the 1D, 2D, and 3D aspects fail to explore effective methods for such feature fusion.

This article proposes a new model, called attention-based triple-stream fused CNN (ATSFCNN), to address these issues. Firstly, we extract features from HSIs using multi-scale 1D-CNN, 2D-CNN, and 3D-CNN techniques. Secondly, we propose a new attention-based feature fusion method to combine the features obtained from the 1D-CNN, 2D-CNN, and 3D-CNN. Compared with other naive fusions of the features, our fusion method can stress the features that have a higher significance for the results. Furthermore, our model is scalable, as features generated by other neural networks can replace those generated by the 1D, 2D, and 3D networks. In that case, the new feature fusion method proposed in this article for the 1D, 2D, and 3D networks remains applicable.

The holistic architecture of ATSFCNN

The architecture of ATSFCNN is shown in figure 4. It consists of four different modules: the Data Preprocessing Module, the Feature Extraction Module, the Feature Fusion Module, and the Output Module.

Data preprocessing module

The data preprocessing module involves conducting principal component analysis (PCA) on the original hyperspectral dataset to preprocess it. This results in a reduction of the dataset's dimensionality from M×N×P to M×N×L, which serves as the input for the 2D stream. In addition, the hyperspectral datacube is transformed into a series of 1D arrays. Each pixel in the 2D (M×N) image is extended in the depth direction, creating M×N 1D arrays. Each array has a dimension of 1×P, which is the input for the 1D stream. The 3D stream uses the original hyperspectral datacube with the size of M×N×P directly.

The choice to use the processed M×N×L datacube for the 2D analysis and the original M×N×P hyperspectral datacube for the 3D analysis is based on several factors. The 2D approach primarily focuses on the 2D spatial information between adjacent pixels, while the 3D approach extracts information primarily from adjacent spectral channels. Therefore, in the case of the 3D analysis, it is crucial to retain as many spectral channels as possible, whereas this is not necessary for the 2D analysis. In fact, performing PCA on the data before using the 2D-CNN can reduce the dimensionality, decrease the computational workload, and improve the classification accuracy. This is demonstrated in later sections by comparing the results of 2D-CNN and 2D-CNN+PCA. As shown in the work of [50], the use of PCA in 2D-CNN can significantly improve HSI classification accuracy.
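A minimal sketch of the PCA step of the data preprocessing module, reshaping the M×N×P cube into pixels-by-channels form before reducing it to L components; scikit-learn is assumed here purely for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_cube(cube, L):
    """PCA-reduce an MxNxP hyperspectral cube to MxNxL for the 2D stream."""
    M, N, P = cube.shape
    flat = cube.reshape(-1, P)                 # pixels as rows, channels as features
    reduced = PCA(n_components=L).fit_transform(flat)
    return reduced.reshape(M, N, L)
```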
Feature extraction module

As depicted in figure 5, the feature extraction module employs the M×N×P dataset directly as the input for the 3D-CNN. Additionally, the processed M×N×L datacube serves as the input for the 2D-CNN, while the M×N 1D arrays are used as the input for the 1D-CNN.

In contrast to a conventional CNN model, our approach uses multi-scale feature extraction. The conventional CNN streams comprise Input_iD, ConviD_1, and ConviD_2 (i = 1, 2, 3). In our method, Multi-scale Feature_iD_1, Multi-scale Feature_iD_2, and Multi-scale Feature_iD_3 are used for multi-scale feature extraction. Multi-scale Feature_iD_1 represents the convolutional result of Input_iD using a kernel different from that in ConviD_1. Multi-scale Feature_iD_2 is the concatenation of ConviD_1 and the

Feature fusion module

The feature fusion module, as depicted in figure 6, incorporates the multi-scale features extracted from each stream. Equation (13) outlines the fusion process, which employs direct concatenation among the multi-scale features. This approach has been shown to be effective for multi-scale feature fusion, as demonstrated by the explications and experiments conducted in 2022 [51]; their findings support the efficiency of concatenation for this purpose.

Deep fusion of features extracted from different dimensions presents a challenge due to the differences in their dimensions. To address this, a flattening process is applied to each fused multi-scale feature, which is then passed through the same dense layer. This approach standardizes the extracted features from each stream into a 1D feature array of the same size. Subsequent to the dense layer, each FF_i (where i assumes the values 1, 2, and 3) uniformly assumes dimensions of R × 1, wherein R signifies the extent along the array dimension originating from the dense layer:

FF_i = Dense(Flatten(Fc_i)). (14)

Following the aforementioned standardization process, the fused multi-scale features undergo two subsequent sub-modules: the concatenation permutation module and the attention module.

Concatenation permutation module

The concatenation permutation module is illustrated in figure 7. Given the three streams of extracted features, there are six possible permutations of their concatenation: Fa_1 (1D-2D-3D), Fa_2 (1D-3D-2D), Fa_3 (2D-3D-1D), Fa_4 (2D-1D-3D), Fa_5 (3D-2D-1D), and Fa_6 (3D-1D-2D); a sketch enumerating these orders is given after this paragraph. The module considers all of these possibilities for concatenation. Because the sequence of concatenation can impact the accuracy of the feature fusion and classification, all six possible combinations of concatenation must be considered. In the experiments, one of these combinations is used at a time, with all six combinations ultimately being utilized. FF_all is the outcome of the concatenation permutation module. Within this module, the three multi-scale fused features FF_i (R × 1) are concatenated, consequently leading to dimensions of (3R) × 1.
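The six concatenation orders Fa_1-Fa_6 can be enumerated programmatically, e.g. as in the following TensorFlow sketch; tensor shapes of (batch, R) per stream are an assumption.

```python
from itertools import permutations
import tensorflow as tf

def concat_orders(ff1, ff2, ff3):
    """All six concatenation orders of the fused features FF_1 (1D),
    FF_2 (2D), FF_3 (3D), each of shape (batch, R); outputs are (batch, 3R)."""
    streams = {"1D": ff1, "2D": ff2, "3D": ff3}
    return {"-".join(order): tf.concat([streams[s] for s in order], axis=-1)
            for order in permutations(("1D", "2D", "3D"))}
```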
Attention module

In 2017, researchers proposed the Transformer model, which marked a significant milestone in the application of attention mechanisms to the domain of images [52]. Their experiments demonstrated the model's ability to enhance significant information and remove redundancy. However, the Transformer model presented in their work was complex, requiring multiple hidden layers. Inspired by this, we propose a lightweight attention-based module in this article that achieves similar results. As shown in figure 8, the concatenated fused feature (FF_all) is inputted into the Channel Attention and Spatial Attention modules. Our model is inspired by the work of [53], who proposed a similar model of Channel Attention and Spatial Attention. However, there are two key differences between our work and theirs. Firstly, while their model operates solely on 2D images, we have adapted our model to the 1D case for feature fusion among the extracted features of the three streams of hyperspectral datasets. Secondly, their models were directly tested on image detection, whereas in this article, we propose a novel and unique application of using the model to improve the classification performance of the fused features.

Channel attention

Channel Attention focuses on finding the inter-channel information of the dataset. When constructing our Channel Attention model, we utilize Average Pooling as well as Max Pooling.

Initially, during the development of Channel Attention models, researchers commonly employed Average Pooling as the primary method for feature extraction, as demonstrated in prior works such as [54, 55], which focused on object detection and demonstrated the effectiveness of this approach. However, subsequent studies, including [56, 57], have shown that incorporating Max Pooling in addition to Average Pooling can significantly improve model accuracy. Therefore, in the present study, we have opted to utilize both Average Pooling and Max Pooling in our Channel Attention models.

In Channel Attention, the fused feature undergoes both Max Pooling and Average Pooling operations. The outputs of these operations are then passed through a shared network, which generates the respective channel attention maps. To accomplish this, an MLP with a single hidden layer is utilized.

Compared with a conventional MLP, a shared MLP is a type of neural network architecture that extracts pertinent features from various inputs, generating intermediate representations that subsequently interface with other facets of the model, such as classifiers or decision-making components. This architectural paradigm embodies the principle of sharing neural network components among diverse data streams, leveraging their collective prowess to amplify the model's overall effectiveness and predictive capacity. The genesis of the shared MLP can be traced back to pioneering work in the domain of Natural Language Processing in 2017 [58], where its merits were first underscored. Subsequently, this paradigm found extensions in the realm of attention mechanisms [53], augmenting its relevance and utility.
Based on the idea of the shared MLP, the process of Channel Attention can be expressed mathematically as shown in equations (16) and (17), where ⊗ denotes matrix multiplication. CA_map materializes through an automated process that accentuates dominant features while simultaneously reducing the weights of less impactful attributes. Consequently, CA_map corresponds to the features in FF_all, meticulously representing each feature in a one-to-one correspondence. Thus, the dimensions of CA_map mirror those of FF_all, spanning (3R) × 1.

Spatial attention

Spatial Attention focuses on exploiting the inter-spatial information of the dataset. As explained for Channel Attention, in Spatial Attention we still use Max Pooling and Average Pooling at the same time. The main difference is that we concatenate the results of the pooling operations and add a convolutional layer after them for extracting the spatial map. SA_map emerges as a consequence of an automated procedure that enhances salient features while concurrently diminishing the influence of less impactful attributes. As a result, SA_map aligns with the individual features present within FF_all, ensuring a one-to-one representation of each feature. Thus, the dimensions of SA_map mirror those of FF_all, encompassing a spatial span of (3R) × 1.

Channel Attention extracts the features more from the aspect of the channel, while Spatial Attention extracts the features more from the aspect of the spatial relationship. Even though, for the 1D case, their difference may not be as distinguishing as in the 2D case, it is still useful to consider both aspects in order to comprise more comprehensive information.

After extracting the features from the three streams, their fusion is carried out using equation (20). The resulting fused feature is subsequently utilized in the Output Module for further processing and analysis.

Output module

The Output Module is where the fused feature FF_Merge, obtained from the Feature Fusion Module as described in section 3.5, is processed using a dense layer whose size is determined by the number of endmembers present in the dataset being used. This step is critical, as the size of the dense layer must match the number of endmembers in order to facilitate accurate identification and classification.
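Since equations (16)-(20) are not displayed above, the following schematic pulls the channel-attention, spatial-attention, and merge steps together in a CBAM-style 1D adaptation [53]; the tensor shapes, the reduction ratio r, and the 7-tap convolution are illustrative assumptions rather than the exact ATSFCNN configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def attention_1d(ff_all, r=8):
    """Schematic channel + spatial attention on FF_all of shape (batch, 3R, C).

    Layers are created on the fly for illustration; in a real model they
    would be built once and reused.
    """
    c = ff_all.shape[-1]
    # Channel attention: shared one-hidden-layer MLP on avg- and max-pooled stats
    shared = tf.keras.Sequential([layers.Dense(max(c // r, 1), activation="relu"),
                                  layers.Dense(c)])
    avg = shared(tf.reduce_mean(ff_all, axis=1))
    mx = shared(tf.reduce_max(ff_all, axis=1))
    ca_map = tf.sigmoid(avg + mx)[:, None, :]          # (batch, 1, C)
    x = ff_all * ca_map
    # Spatial attention: concat pooled maps along channels, then one conv layer
    pooled = tf.concat([tf.reduce_mean(x, -1, keepdims=True),
                        tf.reduce_max(x, -1, keepdims=True)], axis=-1)
    sa_map = layers.Conv1D(1, 7, padding="same", activation="sigmoid")(pooled)
    return x * sa_map                                  # FF_Merge, same shape as FF_all
```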
Hyperparameters of models

In the previous sections, we discussed the scalability of our ATSFCNN. Specifically, the neural networks used for extracting the 1D, 2D, and 3D features can be replaced with other models, and the methods in the Feature Fusion section will still function. However, we must provide further details regarding the structures of the neural networks used in this article.

1D-CNN

In our conducted experiments, the architecture of the 1D-CNN comprises a total of four hidden layers, as visually delineated in figure 9. The selection of the hyperparameters governing this structure entails a multitude of potential permutations. To foster a stringent basis for performance assessment and result comparability, we opted to standardize the architectural configuration across the 1D-CNN, 2D-CNN, and 3D-CNN models.

Commencing with the foremost convolutional layer, we introduced 64 filters, each characterized by a kernel size of 3×1. The input to this layer is designated as T_11. Subsequently, the second convolutional layer integrates 32 filters of identical kernel dimensions, with T_12 as its designated input. A subsequent Flatten operation is executed to transform the output into a 1D array, with T_13 denoting the resultant array. Consequent to this, a fully-connected layer interfaces with the third layer, incorporating a ReLU activation function to exclude non-positive outcomes. This layer assumes dimensions of R×1.

Pertaining to the original hyperspectral dataset with dimensions expressed as M×N×P, as illustrated in figure 4, the dataset's inherent three-dimensional structure is reconfigured into a sequence of 1D arrays. Each individual pixel within the two-dimensional (M×N) image is projected along the depth dimension, yielding M×N distinct 1D arrays. Each of these arrays is characterized by a dimensionality of P×1, which, in turn, serves as the input for the 1D stream. This configuration thus imparts T_11 with dimensions of P×1. Given that the kernel dimensions are established at 3×1, in accordance with equation (1), T_12 assumes dimensions of (P-2)×1. Similarly, T_13 adopts dimensions of (P-4)×1. Within the confines of the Flatten layer, a compression process is initiated, whereby the entirety of the 32 filters within the second convolutional layer are compacted. The outcome of this operation, quantified as (P-4)×32, translates to (32P-128) in numerical terms. Consequently, the resultant output dimensions of the Flatten layer can be succinctly expressed as (32P-128)×1.

The architecture of our 1D-CNN model is primarily inspired by the works of [34, 59], and [60]. However, there is one fundamental difference between our model's architecture and theirs. In their respective studies, they have included a Max-Pooling layer: [59, 60] added the Max-Pooling layer after all the convolutional layers, while [34] inserted the Max-Pooling layer between the first and second convolutional layers.

In our study, we decided not to include the Max-Pooling layer owing to the following consideration. In our 3D-CNN models, the hyperspectral dataset is divided into a series of small cubes measuring 5×5×P, and the kernel is 3×3×3. Without Max-Pooling, the 1D-CNN, 2D-CNN, and 3D-CNN can have very similar structures, allowing for a better comparison of their individual results. Adding the Max-Pooling stage would require much larger small cubes as inputs for the 2D and 3D models. However, this step is not necessary for our study because our primary contribution is proposing a novel method for fusing features from different streams of HSIs for classification purposes.
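Under the hyperparameters just described (two Conv1D layers with 64 and 32 filters, kernel 3×1, no pooling, Flatten to (32P-128)×1, then a Dense layer of size R with ReLU), the 1D stream can be sketched in Keras as follows. The activations on the convolutional layers are an assumption, since only the Dense layer's ReLU is stated.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_1d_stream(P, R):
    """The 1D stream: P-channel spectrum in, R-dimensional feature out."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(P, 1)),
        layers.Conv1D(64, 3, activation="relu"),   # T_12: (P-2, 64)
        layers.Conv1D(32, 3, activation="relu"),   # T_13: (P-4, 32)
        layers.Flatten(),                          # (P-4) * 32 = 32P - 128
        layers.Dense(R, activation="relu"),        # fused-feature length R
    ])
```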
2D-CNN

In our conducted experiments, the architecture of the 2D-CNN encompasses a total of four hidden layers, as elucidated in figure 10. Commencing with the foremost convolutional layer, we introduced 64 filters, each characterized by a kernel size of 3×3. The input to this layer is designated as T_21. Subsequently, the second convolutional layer integrates 32 filters of identical kernel dimensions, with T_22 as its designated input. Subsequently, a Flatten operation is executed, resulting in the transformation of the output into a 1D array, with the resultant array designated as T_23. Consequent to this, a fully-connected layer interfaces with the third layer, incorporating a ReLU activation function to exclude non-positive outcomes. This layer assumes dimensions of R×1. With regard to the original hyperspectral dataset characterized by dimensions M×N×P, as portrayed in figure 4, via PCA, the inherent three-dimensional structure of the dataset undergoes a transformation into a datacube configuration of M×N×L.

In the field of 2D-CNNs, it is practical to choose small image patches and use them to represent the class of the pixel in the center of the patches, as discussed by [35]. Specifically, to classify a pixel p_{x,y} at location (x, y) on the image plane, we use a square patch of size s×s centered at pixel p_{x,y}. To avoid the issue of void values for image patches near the borders, we need to add padding. In this study, we choose to add zero values in the padding parts near the borders of the dataset.

In light of the M×N image being methodically encapsulated within the s×s image-patch paradigm as elucidated earlier, it ensues that the input dimensionality pertinent to the 2DCNN, represented by T_21, precisely equates to s×s. An intricate facet to underscore is the existence of M×N such minuscule patches that embody the comprehensive representation of the entire dataset. Harmonizing this with the kernel dimensions, as per the precepts of equation (1), culminates in the derivation s-3+1 = (s-2). It follows that T_22 attains dimensions denoted as (s-2)×(s-2), with s signifying the size of the image patch. Similarly, T_23 assumes dimensions of (s-4)×(s-4), adhering to a parallel rationale.

Within the confines of the Flatten layer, a compression process is initiated, whereby the entirety of the 32 filters within the second convolutional layer are compacted. The outcome of this operation, quantified as (s-4)×(s-4)×32, translates to (32s^2 - 256s + 512) in numerical terms. Consequently, the resultant output dimensions of the Flatten layer can be succinctly expressed as (32s^2 - 256s + 512)×1.

3D-CNN

In our conducted experiments, the architecture of the 3D-CNN encompasses a total of four hidden layers, as elucidated in figure 11. Commencing with the foremost convolutional layer, we introduced 64 filters, each characterized by a kernel size of 3×3×3. The input to this layer is designated as T_31. Subsequently, the second convolutional layer integrates 32 filters of identical kernel dimensions, with T_32 as its designated input. Subsequently, a Flatten operation is executed, resulting in the transformation of the output into a 1D array, with the resultant array designated as T_33. Consequent to this, a fully-connected layer interfaces with the third layer, incorporating a ReLU activation function to exclude non-positive outcomes. This layer assumes dimensions of R×1. Similar to the 2D case, T_31 has the size of s×s×P, T_32 has the size of (s-2)×(s-2)×(P-2), and T_33 has the size of (s-4)×(s-4)×(P-4).
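A minimal sketch of the zero-padded s×s patch extraction described above (patches represent the class of their center pixel); the helper name is illustrative.

```python
import numpy as np

def extract_patch(cube, x, y, s=5):
    """s x s patch centered at pixel (x, y), zero-padded at the borders.

    cube: MxNxC datacube (C = L for the 2D stream, C = P for the 3D stream).
    """
    r = s // 2
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)))  # zero padding, as described
    return padded[x:x + s, y:y + s, :]
```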
Experimental results and discussions

This section contains the following parts: 4.1 dataset details, 4.2 experimental setup, 4.3 metrics for evaluation, 4.4 baseline results and discussions, and 4.5 results for different training/validation/testing proportions.

Dataset details

The study utilizes three hyperspectral datasets: Samson, Urban, and PaviaU. All three datasets belong to the remote sensing field.

Samson contains 95×95 pixels with 156 channels, covering a wavelength range of 401 to 889 nm with a resolution of 3.13 nm. It includes three different endmembers: Soil, Tree, and Water. Urban comprises 307×307 pixels with 162 channels, with a resolution of 10 nm. It consists of six endmembers: Asphalt, Grass, Tree, Roof, Metal, and Dirt. PaviaU has 610×340 pixels with 103 channels. Among all these pixels, only 42 776 pixels belong to the 9 endmembers: Asphalt, Meadows, Gravel, Trees, Painted metal sheets, Bare Soil, Bitumen, Self-Blocking Bricks, and Shadows. These three datasets are all labeled, with ground-truth classification information provided for each pixel. In these three datasets, 70%, 5%, and 25% of the pixels are randomly assigned to the training, validation, and testing datasets, respectively. The training dataset assumes a pivotal role as the part for model training. Subsequent to this, the validation dataset operates as the crucible for refining the model hyperparameters. Ultimately, the testing dataset is enlisted to conduct a comprehensive evaluation of model performance. The specific dataset partitions are demonstrated in tables 1-3.

Experimental setup

The experiments were conducted using the Anaconda 22.9.0 environment and the Python language, along with the TensorFlow library toolkit. The results were generated on a PC located in the Imaging Department of C2RMF, featuring an Intel(R) Xeon(R) W-2275 CPU at 3.30 GHz and an Nvidia GeForce RTX 3080 graphics card, with 128 GB of memory.

During the validation procedure, we need to determine the most suitable hyperparameters for our model. The two most predominant factors are the learning rate and the value of the reduced dimension after PCA. Within this context, for each hyperparameter configuration, the models exhibiting maximal classification performance on the validation subset are selected. Subsequently, during the assessment conducted on the testing dataset, the chosen model characterized by this specific constellation of hyperparameters is engaged. During training, the batch size was set to 32, and the Binary Cross Entropy loss function was employed. All experiments utilized the Adam optimizer, a choice underpinned by Adam's favorable attributes in terms of operational simplicity, invariance to the magnitudes of the parameters [61], and excellent performance [62]. The number of epochs was set to 10, and the results of each model were repeated 10 times. The image patch size was fixed at 5×5 for all experiments. The experiments were conducted using the same hyperparameters, with a training/validation/testing ratio of 0.70:0.05:0.25.
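The random 70/5/25 split can be sketched as follows; this is a simple index-level split, since any per-class stratification is not specified in the text.

```python
import numpy as np

def split_indices(n_labeled, train=0.70, val=0.05, seed=0):
    """Random split of labeled pixel indices (70/5/25 by default)."""
    idx = np.random.default_rng(seed).permutation(n_labeled)
    n_tr, n_va = int(train * n_labeled), int(val * n_labeled)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
```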
First, we consider the calibration of the learning rate, a particularly important hyperparameter. Choosing an appropriate value is instrumental in expediting convergence during training: an excessively low value can protract the convergence process significantly, whereas an overly high value can trigger divergence in the loss function dynamics [63]. To determine the most fitting learning rate, a comprehensive array of options was explored: {0.01, 0.003, 0.001, 0.0003, 0.0001, 0.00003}. Across the three datasets, the experiments were uniformly executed over 10 epochs. Drawing on the classification outcomes on the validation dataset, we identified the optimal learning rate values: for the Samson dataset, a learning rate of 0.001 emerges as the most suitable, whereas the Urban and PaviaU datasets have an optimal learning rate of 0.0003. Secondly, within our proposed methodology, PCA is first applied to reduce the dimension of the data in the 2D channel, so identifying the most appropriate reduced dimension is important. As shown in table 4, optimal performance for the Samson dataset materializes at a reduced dimension of 15, the Urban dataset attains peak performance at a reduced dimension of 20, and the PaviaU dataset yields superior results with a reduced dimension of 10. Notably, the tabulated results in table 4 show a consistent trend whereby models without PCA preprocessing consistently underperform their PCA-incorporated counterparts. This underscores the necessity of integrating PCA as an indispensable part of the data preprocessing pipeline.

Metrics for evaluation

The metrics used to assess the classification results in this section are OA, AA, and κ (kappa coefficient).

The OA is calculated as follows:

$$\mathrm{OA} = \frac{\sum_{i=1}^{K} A_i}{N}$$

Here, A_i represents the number of pixels that belong to the ith class and were correctly classified as the ith class by the model, K denotes the number of classes, and N represents the total number of pixels in the testing dataset.

The AA is calculated as follows:

$$\mathrm{AA} = \frac{1}{K} \sum_{i=1}^{K} \frac{A_i}{N_i}$$

Here, A_i represents the number of pixels in the ith class that were correctly classified, and N_i represents the total number of pixels in the ith class. The variable N denotes the total number of pixels in the testing dataset.

κ (kappa coefficient) is a widely used statistical metric that has been utilized in various fields, including medical research [64], thematic map classification [65], and voice recognition [66]. It is a powerful statistical tool because it weights the measured accuracies by chance agreement, yielding a robust measure of the degree of agreement. According to [67], κ can be briefly expressed as follows:

$$\kappa = \frac{\Pr(a) - \Pr(e)}{1 - \Pr(e)}$$

where Pr(a) represents the actual observed agreement and Pr(e) represents chance agreement.

Baseline results and discussions

In this part, we evaluated the performance of various CNN models for HSI classification on the three datasets. The models evaluated included 1D-CNN, 2D-CNN, 2D-CNN + PCA, 3D-CNN, 3D-CNN + PCA, and the proposed ATSFCNN, all implemented in the Python language with the TensorFlow library. We used OA, AA, and κ (kappa coefficient) as the evaluation metrics.
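Before turning to the results, here is a compact numpy sketch of the three metrics as defined above, computed from a confusion matrix. The guard against empty classes is our addition; the function name and signature are hypothetical.

```python
import numpy as np

def classification_metrics(y_true, y_pred, num_classes):
    """OA, AA, and Cohen's kappa from integer class labels,
    following the definitions in section 4.3."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)  # confusion matrix
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()                       # total pixels N
    a_i = np.diag(cm)                  # correctly classified pixels per class
    n_i = cm.sum(axis=1)               # true pixels per class
    oa = a_i.sum() / n
    aa = np.mean(a_i / np.maximum(n_i, 1))  # guard against empty classes
    pr_a = oa                                # observed agreement Pr(a)
    pr_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement Pr(e)
    kappa = (pr_a - pr_e) / (1 - pr_e)
    return oa, aa, kappa

# Example: oa, aa, kappa = classification_metrics([0, 1, 2, 1], [0, 1, 1, 1], 3)
```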
The results on the Samson dataset, shown in table 5, indicate that the proposed ATSFCNN model outperformed the individual CNN models that use only one stream of the hyperspectral dataset. ATSFCNN had the highest OA, AA, and kappa coefficient of classification, as well as the smallest standard deviation, compared with the other methods. This indicates that the results of ATSFCNN are more stable and consistent, with better prediction ability even for individual classes.

In table 6, on the Urban dataset, ATSFCNN achieved the highest OA and the smallest standard deviation for the OA metric, and it likewise outperformed the other models in terms of AA and κ. The results in table 7 for the PaviaU dataset also show that ATSFCNN's performance in OA, AA, and κ was much better than that of the other models, with the highest averages and the smallest standard deviations.

Figures 12-14 show the ground-truth results and the corresponding classification maps derived from the various methodologies, including 1D-CNN, 2D-CNN, 2D-CNN + PCA, 3D-CNN, 3D-CNN + PCA, and ATSFCNN. The visual representations in these figures are consistent with the numerical values documented in tables 5-7. Remarkably, even with a testing dataset that encompasses merely a quarter of the entire dataset, the excellence of the proposed ATSFCNN model remains distinctly evident, as shown in table 7.

Our study's findings reveal that ATSFCNN, which fuses features from 1D-CNN, 2D-CNN, and 3D-CNN, achieved more accurate and stable predictions than models that use only one stream or part of the hyperspectral dataset. The superior OA of ATSFCNN can be attributed to its architecture, which considers spectral, spatial, and hidden information among adjacent channels in the hyperspectral data. In contrast, individual models such as 1D-CNN, 2D-CNN, and 3D-CNN capture only partial aspects of the data, which can lead to less accurate predictions.

These results demonstrate that, by fusing features from different streams using the attention mechanism, ATSFCNN incorporates information from all relevant aspects, resulting in more stable and accurate predictions with the smallest standard deviation, and they suggest the importance of using ATSFCNN in HSI classification.

Results for different training/validation/testing proportions

Real-world applications often involve situations where labels are available for only a small proportion of the entire dataset. Therefore, it is crucial to examine the performance of our proposed model, ATSFCNN, under such conditions. To evaluate the robustness of the model, we conducted experiments with different proportions of training, validation, and testing data: 70/5/25, 45/5/50, 20/5/75, 10/5/85, and 5/5/90. The notation T1/V/T2 indicates the percentage of the entire dataset used for the training, validation, and testing sets, respectively.
The results presented in table 8 demonstrate that the OA of all methods decreases slightly as the proportion of the testing set increases, as expected, since a larger training set typically leads to better model training. However, ATSFCNN consistently outperforms the other methods across the various training/validation/testing proportions, highlighting its potential for real-world applications with limited labeled data. Notably, even when the training dataset accounts for only 5% of the entire dataset, ATSFCNN achieves a high classification accuracy of 96.65%. Furthermore, the relative performance of the different methods remains consistent across proportions, with ATSFCNN consistently achieving the highest OA and the smallest standard deviation. This consistency suggests that ATSFCNN is a robust model for hyperspectral imaging classification, even in situations where labeled data are scarce. Similarly, table 9 presents ATSFCNN's performance in predicting the Urban hyperspectral dataset across different proportions of training, validation, and testing data: as the proportion of the training set decreases from 70/5/25 to 5/5/90, ATSFCNN consistently achieves the best OA among all models, with standard deviations that are almost always the smallest. Finally, table 10 confirms that ATSFCNN outperforms all other models in predicting the Pavia University hyperspectral dataset, irrespective of the proportion of training, validation, and testing data used. These results emphasize the robustness of ATSFCNN in predicting hyperspectral datasets with limited labeled data.

From tables 8-10, two notable trends can be observed regarding the performance of the different models on the different datasets.

Firstly, the results of 2D-CNN and 3D-CNN fall short of 1D-CNN on the Samson and Urban datasets, while on the PaviaU dataset this trend is reversed. This can be explained by the fact that 1D-CNN extracts only spectral information, requires fewer trainable parameters, and is better suited to situations where the number of endmembers is small. The Samson dataset has three different endmembers and the Urban dataset has six, whereas the number of endmembers in the PaviaU dataset is much larger. Therefore, the simplicity of the 1D-CNN model may provide an advantage over the more complex 2D-CNN and 3D-CNN models on datasets with fewer endmembers.

Secondly, our proposed ATSFCNN consistently outperforms the other models on all three datasets. This can be attributed to the fact that ATSFCNN incorporates and fuses features extracted from all possible streams, giving it an advantage over models that utilize information from only one aspect. Additionally, the classification accuracy of ATSFCNN could be improved further by utilizing more complex CNN models with more hidden layers and various scales. However, we emphasize that the main contribution of our research lies in the development of a novel approach for integrating features from all relevant streams, considering all pertinent information in the hyperspectral data, and fusing the features for prediction. The specific architectures used for the 1D, 2D, and 3D streams can be substituted with other CNN models, demonstrating the scalability of our proposed ATSFCNN approach; a generic sketch of such a fusion step follows.
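Since the stream architectures are interchangeable, the fusion step is the essential part. The following is a minimal sketch of one plausible attention-weighted fusion of three stream feature vectors, assuming a common embedding width of 128; this is a generic illustration, not the paper's exact attention module.

```python
import tensorflow as tf

def attention_fuse(features, width=128):
    """Project each stream's feature vector to a common width, score each
    stream with a small attention head, softmax the scores across streams,
    and sum the weighted projections."""
    projected = [tf.keras.layers.Dense(width, activation="relu")(f) for f in features]
    stacked = tf.stack(projected, axis=1)        # (batch, n_streams, width)
    scores = tf.keras.layers.Dense(1)(stacked)   # one scalar score per stream
    weights = tf.nn.softmax(scores, axis=1)      # attention over the streams
    return tf.reduce_sum(weights * stacked, axis=1)  # (batch, width) fused feature

# Toy feature vectors from hypothetical 1D, 2D, and 3D streams.
f1 = tf.random.normal([4, 96])
f2 = tf.random.normal([4, 32])
f3 = tf.random.normal([4, 32])
fused = attention_fuse([f1, f2, f3])
print(fused.shape)  # (4, 128)
```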
Conclusions

This study addresses two critical limitations of existing research on the classification of HSIs, namely the absence of consideration of 3D information and of studies on feature fusion. To overcome these limitations, we present a novel convolutional network model, ATSFCNN, which utilizes a novel attention-based feature fusion method to fuse features extracted from hyperspectral datasets in the 1D, 2D, and 3D dimensions.

We conducted experiments on three remote sensing datasets to compare the performance of ATSFCNN against CNN models that use only one stream, including 1D-CNN, 2D-CNN, 2D-CNN+PCA, 3D-CNN, and 3D-CNN+PCA. Our results indicate that ATSFCNN always outperforms all other models across the different training/validation/testing proportions in terms of average OA, and it obtains the smallest standard deviation of OA in almost all cases. Even when it does not achieve the smallest standard deviation of OA, its value remains comparable to that of the best-performing models. Significantly, our proposed ATSFCNN model demonstrates superior stability and robustness, essential attributes for real-world applications.

Our experiments demonstrate that ATSFCNN's performance is superior not only in terms of accuracy but also in terms of stability, which is a crucial consideration for developing reliable models applicable to various scenarios and datasets. The consistently small standard deviation of its results further supports its reliability.

Overall, our study establishes that ATSFCNN is an effective approach for HSI classification. By integrating features from various CNN models, ATSFCNN captures and utilizes information from different aspects of hyperspectral data, leading to enhanced accuracy and stability. This approach holds potential for further extensions and adaptations to other datasets and applications in the future.

Figure 1. The theoretical structure of the 1D-CNN model.
Figure 2. The theoretical structure of the 2D-CNN model. Similar to the 1D-CNN model, it consists of several layers, including the Input Layer, Convolutional Layer C1, Convolutional Layer C2, Max-Pool Layer M1, Dense Layer D1, and Output Layer.
Figure 3. The theoretical structure of the 3D-CNN model.
Figure 9. The 1D-CNN model in the experiments.
Figure 10. The 2D-CNN model in the experiments.
Figure 11. The structure of the 3D-CNN model in the experiments.
Figures 15-17 illustrate the classification maps, which align cohesively with the outcomes reported in tables 8-10; abundance maps corresponding to the authentic ground truths are provided for reference. Notably, the classification maps produced by the proposed ATSFCNN exhibit markedly reduced levels of noise in various regions, in contrast to the outcomes generated by methodologies relying exclusively on single streams (1D-CNN, 2D-CNN, 2D-CNN + PCA, 3D-CNN, 3D-CNN + PCA). A specific case in point, illustrated in figure 17, distinctly underscores the superior classification precision of ATSFCNN in distinguishing between Bare Soil and Self-Blocking Bricks, outperforming all alternative methodologies.
Table 1. Classes and number of pixels of the Samson dataset.
Table 2. Classes and number of pixels of the Urban dataset.
Table 3. Classes and number of pixels of the PaviaU dataset.
Table 4. OA (%) for different values of the reduced dimension after PCA on the different datasets.
Table 8. Class-specific accuracy (%), overall accuracy (OA), and their standard deviations (%) for the different techniques on the Samson dataset under different sample proportions.
Table 9. Class-specific accuracy (%), overall accuracy (OA), and their standard deviations (%) for the different techniques on the Urban dataset under different sample proportions.
Table 10. Class-specific accuracy (%), overall accuracy (OA), and their standard deviations (%) for the different techniques on the PaviaU dataset under different sample proportions.
2024-01-12T16:09:54.113Z
2024-01-10T00:00:00.000
{ "year": 2024, "sha1": "59ebae5ed217e9931d132ae551a998f1250fd068", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/2632-2153/ad1d05/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "3639eb544b3cb0012d32c3478b8bb4d2fe8f0806", "s2fieldsofstudy": [ "Environmental Science", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
247505239
pes2o/s2orc
v3-fos-license
PROBLEMS OF ISLAMIC PRIMARY AND SECONDARY EDUCATION IN ERA 4.0 IN INDONESIA

This article aims to identify the policies and problems of Islamic primary and secondary education in Indonesia in Era 4.0. The research uses library research with a qualitative approach. The government grants schools the authority to manage themselves so that they can develop their potential; accordingly, several matters remain the government's responsibility in education, one of which is education management, as stated in Law No. 23 of 2014, under which the district or city government manages basic education. The national education policy for Islamic basic education covers Madrasah Ibtidaiyah and Madrasah Tsanawiyah, which are regulated in Minister of Religion Regulation No. 60 of 2015 concerning the Implementation of Madrasah Education, paragraphs 4 and 5. The national education policy for secondary education covers Madrasah Aliyah and Vocational Madrasah Aliyah, regulated in the same regulation, paragraphs 6 and 7. The educational levels of MI, MTs, MA, and MAK are all under the auspices of the Minister of Religion. The problems of Islamic primary and secondary education in Indonesia in Era 4.0 include: first, the lack of public interest in Islamic education; second, the low quality of teachers; third, government discrimination in the allocation of Islamic education funds; fourth, certificate orientation; and fifth, the low quality of learning-process management in each educational unit.

INTRODUCTION

Education is a conscious effort by the government to shape students into qualified and moral individuals who will be useful in their lives, their society, and their nation. 1 Every citizen has the right to education, as stated in the 1945 Constitution, article 31 paragraph 1: "every citizen has the right to education." The goal is to educate the life of the nation, as set out in the preamble of the 1945 Constitution. In education we often hear about the education system, management, and education policy. 2 All of these are important aspects of education through which educational goals can be realized to the fullest. Education is divided into several types, one of which is formal education, comprising basic education, secondary education, and higher education. 3 The education system in Indonesia is regulated in Law No. 20 of 2003 concerning the National Education System. Based on the education policy contained in the National Education System, various problems have arisen in Islamic primary and secondary education. Education in Indonesia at this time is still far from adequate and remains a concern. This is evidenced by UNESCO data (2000) on the Human Development Index ranking, a composite of educational attainment, health, and income per head, which shows that the human development index in Indonesia is decreasing. 4 The quality of education in Indonesia thus remains very low. Therefore, this paper discusses how primary and secondary education is managed, Islamic primary and secondary education policies in Indonesia, and the problems of Islamic primary and secondary education in Era 4.0 in Indonesia.

RESEARCH METHOD

This research uses library research.
The main data sources in this study are books, journals, and several regulations related to Islamic primary and secondary education policies in Indonesia, in the form of the Constitution, laws, and ministerial regulations, as well as accounts of the problems occurring in Islamic primary and secondary education in Indonesia. The study uses a qualitative approach to obtain an overview of the policies and problems of Islamic primary and secondary education in Indonesia.

Primary and Secondary Education Management

The word management may already be familiar to our ears. Management comes from Latin: manus means hand, and agere means to do. These words combine into a verb meaning to handle; in English the verb is to manage, with the noun management, and manager denotes a person who carries out management activities. According to Jones and George, management is planning, organizing, directing, and controlling human resources and other resources to achieve organizational goals effectively and efficiently. The scope of management extends widely, including into the world of education. 5

Education in Indonesia is regulated in Law of the Republic of Indonesia No. 20 of 2003 concerning the National Education System. According to this law, education is a conscious and planned effort to create a learning atmosphere and learning process in which students actively develop their potential to acquire religious-spiritual strength, self-control, personality, intelligence, noble character, and the skills needed by themselves, society, the nation, and the state. 6

Article 17 of the law states that basic education is the level of education that underlies the secondary level. Basic education takes the form of Elementary School (SD) and Madrasah Ibtidaiyah (MI), or other equivalent forms, and Junior High School (SMP) and Madrasah Tsanawiyah (MTs), or other equivalent forms. Article 18 states, in paragraph 1, that secondary education is a continuation of basic education; in paragraph 2, that secondary education consists of general and vocational secondary education; and, in paragraph 3, that secondary education takes the form of Senior High School (SMA), Madrasah Aliyah (MA), Vocational High School (SMK), Vocational Madrasah Aliyah (MAK), and other equivalent forms. 7

The explanation above has covered the concepts of management and education. Management and education form a unit in which education requires management, so management in education is a process of planning, organizing, leading, and controlling educational personnel and educational resources, such as human resources, learning resources and facilities, and funding resources. 8

The government gives schools the authority to manage themselves (decentralization) so that each school can develop its own potential. Law No. 23 of 2014 concerns the division of concurrent government affairs among the central, provincial, and district or city governments. The division of government affairs in the education sector covers six matters, including education management, curriculum, accreditation, educators and education personnel, and education licensing. In education management, the central government has the authority to set national education standards and to manage higher education.
Basic education is managed by the district or city government, which also manages early childhood education and nonformal education. Secondary education and special education are managed by the provincial government, in accordance with Law No. 23 of 2014. 9

One of the government's efforts to increase schools' independence in self-management is the School-Based Management (SBM) model. As stated in Law No. 20 of 2003 concerning the National Education System, article 51 paragraph 1, the management of early childhood, basic, and secondary education units is carried out based on minimum service standards with school/madrasah-based management principles. These efforts also require good cooperation among schools, the government, and the community. In basic education, for example, the community plays an important role in realizing transparent, democratic, and independent education, so that improving the quality of education can be achieved with all parties involved in managing the school. 10

Based on the explanation above, it can be concluded that the government gives schools the authority to manage themselves in order to develop their potential, and that several matters remain the government's responsibility in education, one of which is education management. Under Law No. 23 of 2014, the district or city government manages basic education, while the provincial government manages secondary education. The government's effort to increase schools' independence in self-management is carried out through School-Based Management, which requires cooperation among the school, the government, and the community to advance the school together.

National Education Policy for Islamic Primary and Secondary Education in Indonesia

The Indonesian government inherited two education systems: the education and teaching system of secular public schools, and the Islamic education and teaching system that grew and developed within the Islamic community, both traditional-isolated and synthetic. Under the Regulation of the Minister of Religion No. 60 of 2015 concerning the Implementation of Madrasah Education, paragraph 4, Madrasah Ibtidaiyah, hereinafter abbreviated as MI, is a form of formal education unit that organizes general education with an Islamic religious character across 6 grades at the basic education level. Paragraph 5 states that Madrasah Tsanawiyah, hereinafter abbreviated as MTs, is a form of formal education unit that organizes general education with an Islamic religious character across 3 grades at the basic education level, as a continuation of Elementary School, MI, or another equivalent form recognized as equal or equivalent to Elementary School or MI. 13

For secondary education, paragraph 6 of the same regulation states that Madrasah Aliyah, hereinafter abbreviated as MA, is a formal education unit that organizes general education with an Islamic religious character at the secondary level, as a continuation of Junior High School, MTs, or another equivalent form recognized as the same as or equivalent to Junior High School or MTs.
Paragraph 7 states that the Vocational Madrasah Aliyah, hereinafter abbreviated as MAK, is a formal education unit that organizes vocational education with an Islamic religious character at the secondary level, as a continuation of Junior High School, MTs, or another equivalent form recognized as equal or equivalent to Junior High School or MTs. 14

The basic framework and structure of the primary and secondary education curriculum are set by the government. The curriculum is developed according to its relevance by each educational group or unit and by the madrasah or school committee, under the coordination and supervision of the education office or the district/city religious department office for basic education, and of the province for secondary education. The primary and secondary education curriculum must contain religious education, citizenship education, language, mathematics, natural sciences, social sciences, arts and culture, physical education and sports, skills/vocational subjects, and local content.

The curriculum for Islamic basic and secondary education, namely Madrasah Ibtidaiyah, Madrasah Tsanawiyah, Madrasah Aliyah, and Vocational Madrasah Aliyah, is regulated in the Decree of the Minister of Religion No. 184 of 2019 concerning Guidelines for Curriculum Implementation in Madrasahs. Implementation of the curriculum at the Madrasah Ibtidaiyah covers, first, Group A subjects, whose content and references are developed centrally: Islamic Religious Education (Al-Qur'an Hadith, Akidah Akhlak, Fiqh, and Islamic Cultural History), Pancasila and Citizenship Education, Indonesian Language, Arabic, Mathematics, Natural Sciences, and Social Sciences. Second are the Group B subjects, whose content and references are developed centrally and can be supplemented with local content: Cultural Arts and Crafts, Physical Education, Sports and Health, and Local Content.

The implementation of the MTs curriculum is the same as for MI, except that English is added to the Group A subjects. These comprise, first, Group A subjects developed centrally: Islamic Religious Education (Al-Qur'an Hadith, Akidah Akhlak, Fiqh, and Islamic Cultural History), Pancasila and Citizenship Education, Indonesian Language, Arabic, Mathematics, Natural Sciences, Social Sciences, and English; and, second, Group B subjects developed centrally and supplementable with local content: Cultural Arts and Crafts, Physical Education, Sports and Health, and Local Content.

The implementation of the MA and MAK curriculum is adjusted to the specialization or vocational track. The subjects comprise, first, Group A subjects developed centrally: Islamic Religious Education (Al-Qur'an Hadith, Akidah Akhlak, Fiqh, and History of Islamic Culture), Pancasila and Citizenship Education, Indonesian Language, Arabic, Mathematics, Indonesian History, and English; second, Group B subjects developed centrally and supplementable with local content: Cultural Arts and Crafts, Physical Education, Sports and Health, and Local Content; third, Specialization Subjects; and fourth, Elective Subjects. 15
The Problems of Islamic Primary and Secondary Education in Indonesia

Basic and secondary education must be completed by Indonesian citizens, as compulsory education in Indonesia lasts twelve years. In fact, however, both primary and secondary education still face many problems that remain a major concern for the government. 16 The problems of education in Indonesia are very complex, especially considering that the current condition is the industrial era 4.0, an era of rapid advances in science and technology, and yet education still has many problems. The various problems of Islamic primary and secondary education in Indonesia include the following.

First is the lack of public interest in Islamic education. The fact that the majority of Indonesian people still treat Islamic educational institutions as secondary is a problem. They consider education in Islamic educational institutions to be of lower quality than that in general education institutions. Islamic educational institutions are even used as a last alternative after applicants are not accepted in public educational institutions, so the majority of people compete to enter their favorite general institutions. 17 People think that with a general education their future will be more secure, because they consider religious education unimportant for their career-related future and believe it cannot compete with general education in the outside world. This is a challenge for Islamic educational institutions, both primary and secondary, to be up to date in technology, 18 so that the assumption that Islamic education covers only the religious field and is not up to date in technology is shown to be wrong. Islamic education in the 4.0 era in Indonesia should be able to engage with technological advances. What is very concerning is that Indonesia is known as a country with a Muslim majority, yet Islamic educational institutions attract minimal public demand. 19 The solution to this problem is for Islamic educational institutions to properly show the public that religious education is also important, so that students are qualified not only in general fields but also in the religious field, and in the future are strong in their faith and able to compete outside. 20

Second is the low quality of teachers. The teacher is an important component of education: a teacher provides knowledge to students so that they are able to understand and to compete outside. 21 In fact, however, we still encounter teachers, in both primary and secondary education, who are less than professional in carrying out their duties, and whose qualifications do not match their fields, with the result that the learning process does not achieve maximal results. 22 This problem is a major concern for the government, which must further improve the quality of teachers, especially in Islamic educational institutions. Teachers in Islamic educational institutions are required to be more creative and innovative and to follow technological developments in the learning process.
In addition, teachers in Islamic educational institutions are expected to improve their professionalism by participating in various training programs that develop their knowledge and skills. The Ministry of Religion also provides scholarships for madrasa teachers to improve their quality, in the hope that teachers of a quality matched to the current state of their students will improve the learning process and produce quality output. 23

The role of the teacher is indeed very important for education: however sophisticated the technology, the presence of teachers or educators will still play an important role in providing direction so that wrong steps are not taken, and the government therefore needs to pay attention to the quality of teachers in order to achieve educational goals. Accordingly, formal educational institutions, in both primary and secondary Islamic education, are expected to increase the effectiveness and efficiency of teachers by regulating the management of human resources, from planning, recruitment and selection, coaching, and job assessment through to competence development, so as to improve teachers' cognitive, affective, and psychomotor skills and abilities. These can be developed through various training programs held by madrasas and external parties; although such efforts require considerable time and money, they will have a good impact on teacher professionalism.

Third, government discrimination in the allocation of Islamic education funds. There is discrimination in the allocation of funds given by the government to religious education institutions compared with those under the Ministry of National Education. 26 Islamic education institutions, namely madrasas, operate under the auspices of the Ministry of Religion, an agency that is not decentralized to the local government and DPRD, so there are differences in welfare between school teachers and madrasa teachers. School teachers receive welfare support from the local government, while madrasa teachers receive no such support from the government at all because they are under the guidance of the Ministry of Religion. 27 This becomes a problem that can eventually affect other things, for example educational facilities and infrastructure. The government should treat these matters equally, because they are fundamentally a matter of rights, whether the rights of students, educators, or others.
If the funds the government provides to religious education institutions are small, it will be difficult for those institutions to keep pace with general education institutions, for lack of capital; indeed, if religious education institutions want to catch up, they must struggle with independent capital.

Fourth, certificate orientation. Islam commands people always to seek knowledge, a command recorded in various hadiths. At the beginning of the heyday of Islam, a person studied genuinely out of inner desire and produced works, and a person who produced many works received various awards as appreciation for producing work and developing science. The virtue of studying at the beginning of the heyday of Islam was thus purely knowledge-oriented, but the situation now has shifted to being certificate-oriented. In studying, people often want only a diploma from their education, and the spirit and quality of knowledge become a secondary priority. 28 Yet in learning, the most important thing is to seek knowledge, because as Muslims we are indeed required to continue seeking knowledge. The priority should be the knowledge, not the diploma; a diploma is a bonus upon completing one's education. This thinking needs to change. It is therefore hoped that Islamic education in Indonesia, especially in the 4.0 era, can change the mindset so that the orientation of studying is not merely pursuing a diploma, but truly mastering the subject, so that the knowledge gained is useful for the future and the human resources produced are truly qualified for the future. 29 They will then be able to compete in the outside world, and the knowledge gained will not only be valuable but will yield quality results. 31

Fifth, the low quality of learning-process management in each educational unit. This relates to several things. First is the lack of effectiveness of the learning process. Learning activity can be said to be effective if the lesson plans that have been prepared can be implemented; a teacher must be able to design a learning process so that it runs effectively and efficiently and the learning objectives are achieved properly.
But the fact is that, in Islamic primary and secondary educational institutions, teaching is still monotonous, using only one method, the lecture method, in which there is no interaction between students and teachers: in principle, the teacher delivers the material in full to the students, so the learning process under the lecture method is centered only on the teacher (teacher-centered). Such a learning process is contrary to the provisions of the 2013 curriculum, which requires students to play an active role in learning, so teachers must design various student-centered strategies and learning methods in which students play an active and creative role in the learning process. 32

Second is facilities and infrastructure. One condition for the success of education is the existence of adequate facilities and infrastructure, whether learning media, sports venues, libraries, laboratories, or other supports for the learning process. But the fact is that in Islamic educational institutions we still see that school buildings, facilities, and infrastructure are very minimal and do not support the educational process, so the learning process remains less effective. For example, in the 4.0 era all schools are required to master technology, and some examinations even use an online system, so schools should provide the necessary facilities and infrastructure, namely computers, for students taking online-based exams. 33 In fact, however, not all schools, especially Islamic educational institutions with very limited funds, are able to provide these facilities and infrastructure. This should draw the attention of the government, and of the Ministry of Religion, which oversees Islamic educational institutions: at a minimum, every school should be given assistance for facilities and infrastructure, because when facilities and infrastructure are fulfilled to the maximum and utilized properly, the learning process runs smoothly and learning objectives can be achieved.

Seeing that the quality of Islamic education in Indonesia is still very lacking, it is a major concern for the Ministry of Religion to improve the management of Islamic education: very few institutions are able to develop properly, owing to poor management, lack of funding, and lack of interest and quality, so improvements in Islamic education management still need to be addressed. In this regard, the government has recently issued a policy on BOS funding under which madrasas and Islamic educational institutions are prohibited from collecting fees from students of compulsory education age. Before this policy, all funds for madrasa or Islamic education were obtained from various donors, student families, foundations, and the community; although the BOS program has been implemented, the results have not been maximal. 34
Figure 2. The problems of Islamic primary and secondary education in Indonesia: lack of public interest in Islamic education; low quality of teachers; government discrimination in the allocation of Islamic education funds; certificate orientation; and the low quality of each process in each educational unit.

The decline of Islamic education, in both primary and secondary education, is due not only to the problems described previously; there are several further factors that cause Islamic education often to receive criticism, namely: 1) A cultural gap, in which there is an imbalance between the development of science and technology and the speed of educational development. Islamic education has not tried to adapt to the social changes occurring in society, so it remains stagnant, fails to respond to these social changes, and lags behind. 2) The stigma of two classes. A result of the first factor is that the delay of Islamic education in responding to the development of science and technology and to social change creates a second-class stigma that ultimately persists. 3) The dichotomization of science. The separation of Islamic science from general science is still a problem in Islamic education that has not yet found a significant resolution. 4) Political dualism. The policy differences between the Ministry of Religion and the Ministry of Education and Culture remain a source of conflict; the two institutions still pull in different directions on education policies, whether related to salaries, education certification incentives, or other issues. 35 5) Solution-oriented thinking about problems. 6) Not being taken aback by change. Changes will always occur in life and need not become an obstacle to education: an Islamic educational institution should be able to adapt to the changes that occur, including developments in technology and education; if it cannot manage the institution in line with these changes, it will be left behind by better-managed institutions. 7) Thinking and strategy. Islamic educational institutions must have clear steps in designing things so that everything is directed and arranged systematically, whether the semester program, the curriculum, or other matters.

Of the factors above that constitute the problems of Islamic primary and secondary education, those described are only some of the problems that occur; there are still many others. The government needs to pay attention to existing educational needs, reviewing the curriculum, the system, and education management, and it is hoped that, aware of these problems, the government will immediately fix them so that in the future education in Indonesia, especially Islamic education, will be sound: primary and secondary education can then be completed properly, and the goals of Indonesian education realized, namely educating the nation and creating a creative, innovative, faithful, and moral generation able to compete with the outside world and to improve the quality of education, especially Islamic education, which is so important to human survival. Progressive Islamic education is education that is superior not only in the field of religion but in all fields.
When there is a balance between religious and general knowledge in one's studies, any problem that occurs can be handled well.

CONCLUSION

The government gives schools the authority to manage themselves in order to develop their potential; one aspect of this is education management, regulated through Law No. 23 of 2014, under which basic education is managed by the district or city government and secondary education by the provincial government. As for national education policy, Islamic basic education covers Madrasah Ibtidaiyah and Madrasah Tsanawiyah, regulated in Minister of Religion Regulation No. 60 of 2015 concerning the Implementation of Madrasah Education, paragraphs 4 and 5, while national education policy on secondary education covers Madrasah Aliyah and Vocational Madrasah Aliyah, regulated in the same regulation, paragraphs 6 and 7. The educational levels of MI, MTs, MA, and MAK are all under the auspices of the Minister of Religion.

Every policy has its problems, and Islamic primary and secondary education in Indonesia faces very complex ones, including: first, a lack of public interest in Islamic education; second, the low quality of teachers; third, government discrimination in the allocation of Islamic education funds; fourth, certificate orientation; and fifth, the low quality of learning-process management in each educational unit. The problems in Islamic primary and secondary education are also caused by various factors: first, a cultural gap, in which there is an imbalance between the development of science and technology and the speed of educational development; second, the stigma of two classes, whereby the delay of Islamic education in responding to scientific and technological development and to social change creates a second-class stigma that ultimately persists; third, the dichotomization of knowledge; fourth, the continuing policy conflict between the Ministry of Religion and the Ministry of Education and Culture; fifth, solution-oriented thinking about problems; sixth, not being taken aback by change; and seventh, thinking and strategy.

These problems and factors constitute major homework for the government, which must immediately improve the system of Islamic education, since they affect the continuity and quality of education and the achievement of the educational goals to be reached. It is hoped that Islamic education will be able to adapt to the social changes that occur, in both technological and educational development, so that it will no longer be underestimated by the community. We must change the public mindset so that Islamic education is seen as a milestone of educational success, able to produce a creative, innovative, faithful, and moral generation in accordance with Islamic law.
2022-03-18T15:22:26.429Z
2022-02-11T00:00:00.000
{ "year": 2022, "sha1": "823f65ec76f28831ba94031b9e19a62d9b301e27", "oa_license": "CCBYSA", "oa_url": "https://e-journal.ikhac.ac.id/index.php/NAZHRUNA/article/download/1909/835", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "84fc813e93890990bf54f3bb7221ed5084c91040", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
252622835
pes2o/s2orc
v3-fos-license
Gaps in hepatocellular carcinoma surveillance among insured patients with hepatitis B infection without cirrhosis in the United States

Abstract

Suboptimal adherence to guidelines for hepatocellular carcinoma (HCC) surveillance among high-risk patients is a persistent problem with substantial detriment to patient outcomes. While patients cite cost as a barrier to surveillance receipt, the financial burden they experience due to surveillance has not been examined. We conducted a retrospective administrative claims study to assess HCC surveillance use and associated costs in a US cohort of insured patients without cirrhosis but with hepatitis B virus (HBV) infection, monitored in routine clinical practice. Of 6831 patients (1122 on antiviral treatment, 5709 untreated), only 39.3% and 51.3% had received any abdominal imaging after 6 and 12 months, respectively, and patients were up to date with HCC surveillance guidelines for only 28% of the follow-up time. Completion of surveillance was substantially higher at 6 and 12 months among treated patients (51.7% and 69.6%, respectively) compared with untreated patients (36.9% and 47.6%, respectively) (p < 0.001). In adjusted models, treated patients were more likely than untreated patients to receive surveillance (hazard ratio [HR] 1.75, 95% confidence interval [CI] 1.53-2.01, p < 0.001), and the proportion of those up to date with surveillance was 9.7% higher (95% CI 6.26-13.07, p < 0.001). Mean total and patient-paid daily surveillance-related costs ranged from $99 (ultrasound) to $334 (magnetic resonance imaging), and mean annual patient costs due to lost productivity for surveillance-related outpatient visits ranged from $93 (using the federal minimum wage) to $321 (using the Bureau of Labor Statistics wage). Conclusion: Use of current HCC surveillance strategies was low across patients with HBV infection, and surveillance was associated with substantial patient financial burden. These data highlight an urgent need for accessible and easy-to-implement surveillance strategies with sufficient sensitivity and specificity for early HCC detection.

INTRODUCTION

Hepatocellular carcinoma (HCC) accounts for approximately 75% of primary liver cancer and is the third leading cause of cancer deaths worldwide. [1,2] The most common risk factor for HCC globally is chronic hepatitis B virus (HBV) infection, [2] which can increase HCC risk even in the absence of cirrhosis. [2,3] A large randomized controlled trial among patients with HBV demonstrated that HCC surveillance significantly increased early HCC detection and reduced HCC-related mortality. [4] These data have informed practice guidelines such as those issued by the American Association for the Study of Liver Diseases (AASLD), which recommends HCC surveillance using abdominal ultrasound with or without alpha-fetoprotein (AFP) every 6 months for patients with HBV infection with cirrhosis, as well as those without cirrhosis at higher risk for HCC, including Asian or Black men over 40 years of age, Asian women over 50 years of age, and patients with hepatitis delta virus coinfection or a first-degree family history of HCC. [5,6] Although HCC has an overall 5-year survival rate of only 19.6%, [7] patients who are diagnosed at an early stage may be eligible for curative treatment such as resection, ablation, or liver transplantation, which increases 5-year survival to 50%-80%.
[8] Unfortunately, HCC surveillance is widely underused, [9-16] despite evidence that it can promote early detection and potentially improve survival among patients with chronic HBV. [4,17-20] In a recent systematic review and meta-analysis of 22 studies including 19,511 patients with cirrhosis or chronic viral hepatitis, the adherence rate to AASLD surveillance guidelines was only 52% overall and was 39% when limited to retrospective analyses, which may be a better reflection of real-world practice. [21] Due in part to such low surveillance rates, most individuals with HCC are diagnosed at an intermediate or advanced stage, when the prognosis is much poorer. [22]

Given that suboptimal adherence to HCC surveillance guidelines is a persistent problem with substantial detriment to patient outcomes, the elucidation of potential barriers to HCC surveillance in routine practice is an important research goal. Previous assessments conducted among US patients with HBV infection have indicated that those who were not under specialist care were less likely to receive guideline-based HCC surveillance. [13,15,23,24] However, the patient-side financial burden of surveillance, which many patients cite as a significant barrier to surveillance receipt, [25,26] has not been previously examined. The present study was conducted to assess HCC surveillance use and associated costs in a US cohort of insured patients without cirrhosis but with HBV infection, monitored in routine clinical practice.

Study design and data source

This was a retrospective observational study conducted using administrative claims data from the Optum Research Database (ORD) from January 1, 2013, through December 31, 2018 (study period; Figure 1). The ORD is geographically diverse across the United States and contains deidentified medical and pharmacy claims data and linked enrollment information for individuals enrolled in US health plans. Medical claims include diagnosis and procedure codes from the International Classification of Diseases, 9th and 10th Revisions, Clinical Modification; Current Procedural Terminology or Healthcare Common Procedure Coding System codes; site of service codes; paid amounts; and other information. Pharmacy claims include drug name, national drug code, dosage form, drug strength, fill date, and financial information for health plan-provided outpatient pharmacy services. Because no identifiable protected health information was accessed in the conduct of this study, institutional review board approval or waiver of approval was not required.

Patient selection

The study included commercial insurance enrollees and Medicare Advantage with Part D (MAPD) beneficiaries with two or more claims for HBV and no claims for cirrhosis (Table S1) from January 1, 2014, through December 31, 2018 (patient identification period; Figure 1). The date of the first qualifying claim for HBV was designated as the index date. The 12 months before the index date were designated as the baseline period.

Figure 1. Study design schematic. The 12-month baseline period was designed to capture previous hepatocellular carcinoma (HCC) screening. A minimum 6-month follow-up period was chosen to allow sufficient opportunity for guideline-recommended HCC screening to occur. HBV, hepatitis B virus.
Included patients were also required to be aged 40 years or older for men, or 50 years or older for women, on the index date, and to have continuous health plan enrollment with medical and pharmacy benefits during the baseline and follow-up periods. Those with claims evidence of liver cancer (two or more nondiagnostic claims with diagnosis codes for liver cancer ≥ 30 days apart within a 365-day period) or liver transplantation during the baseline period or on the index date were excluded from the study (Table S1). Patients were observed for at least 6 months, beginning on the index date and ending at the earlier of disenrollment from the health plan or the end of the study period. Patients with medical or pharmacy claims for HBV treatments (adefovir dipivoxil, entecavir, interferon alfa-2b, lamivudine, peginterferon alfa-2a, telbivudine, tenofovir disoproxil, or tenofovir alafenamide fumarate) any time between the start of the baseline period and the end of the follow-up period were categorized as treated, whereas others were categorized as untreated. Follow-up for study outcomes was truncated at liver cancer diagnosis or liver transplantation.

Study variables

Patient demographic and clinical characteristics measured during the baseline period included age, sex, US census region, insurance type, baseline Quan-Charlson comorbidity score, [27] baseline comorbidities identified using Clinical Classifications Software from the Agency for Healthcare Research and Quality, [28] and prior HCC surveillance. Health care provider specialty was captured from claims with diagnosis codes for HBV during the follow-up period.

HCC surveillance events

HCC surveillance events (abdominal ultrasounds, magnetic resonance imaging [MRI] scans, computed tomography [CT] scans, and AFP tests) were identified from claims data. AFP tests occurring within 14 days of an ultrasound were considered to accompany the ultrasound. As a sensitivity analysis, AFP tests occurring within 60 days of an ultrasound were also captured. Surveillance events that included any abdominal imaging were considered complete, while events that included only AFP were considered incomplete.

Proportion of days covered

The proportion of follow-up time during which patients were up to date with recommended HCC surveillance was assessed using the proportion of days covered (PDC), calculated as (days covered) / (days of follow-up). Any abdominal imaging was considered to provide 6 months of days covered. PDC was analyzed separately for all patients and for patients with evidence of any surveillance during the follow-up period.

Cost outcomes

Cost outcomes were analyzed during the first surveillance episode among patients with no inpatient admission or emergency room visit during the follow-up period. The first surveillance episode was defined as the first outpatient surveillance event during follow-up plus outpatient surveillance events within the following 60 days. For each surveillance mechanism, the mean and median daily costs during the first surveillance episode were calculated as health plan-paid and patient-paid amounts. For patients with ultrasound plus AFP testing, costs on the day of the AFP test were added to the costs on the day of the ultrasound if the tests occurred on different days.
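As a concrete illustration of the PDC rule defined above, the following is a minimal Python sketch. It assumes that "6 months of days covered" is implemented as 183 calendar days and that overlapping coverage windows are merged so no day is counted twice; the study's actual operationalization may differ.

```python
from datetime import date, timedelta

def pdc(imaging_dates, followup_start, followup_end):
    """PDC = (days covered) / (days of follow-up); each abdominal imaging
    event contributes up to 183 days of coverage within follow-up."""
    covered = set()
    for d in imaging_dates:
        day = d
        while day <= min(d + timedelta(days=182), followup_end):
            if day >= followup_start:
                covered.add(day)  # set membership merges overlapping windows
            day += timedelta(days=1)
    followup_days = (followup_end - followup_start).days + 1
    return len(covered) / followup_days

# Example: one ultrasound at the start of a 12-month follow-up covers
# roughly half of the follow-up time.
start, end = date(2015, 1, 1), date(2015, 12, 31)
print(round(pdc([date(2015, 1, 1)], start, end), 2))  # -> 0.5
```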
All combinations of surveillance types that occurred on the same day were assessed; however, data are shown only for ultrasound plus AFP, as very few surveillance days included any other combinations (nine other combinations totaling only 1.1% of surveillance days, with no single combination exceeding 0.3%). Yearly patient productivity costs due to surveillance-related outpatient health care encounters were estimated by assuming 4 working hours lost per outpatient visit, multiplied by the patient's estimated average wage derived from US Bureau of Labor Statistics (BLS) data [29] and the federal minimum wage [30] (illustrated in the sketch below). Costs were adjusted to 2018 USD using the annual medical care component of the Consumer Price Index. [31]

Statistical analysis

Study variables were analyzed descriptively. Numbers and percentages were provided for categorical variables; means, medians, and SDs were provided for continuous variables. Time to follow-up surveillance events and the censoring-adjusted proportion of patients receiving surveillance during the follow-up period were evaluated using Kaplan-Meier analysis. Proportional hazards models were used to evaluate the effect of baseline provider specialty on receipt of surveillance. An ordinary least squares model was used to evaluate the effect of baseline provider specialty on PDC among patients with at least one follow-up surveillance event. All multivariable models were adjusted for treatment status, age group, sex, geographic region, presence of high-deductible health plan, baseline Charlson comorbidity score category, and select comorbidities; the ordinary least squares model was also adjusted for follow-up length. To examine follow-up surveillance from a similar starting point, Kaplan-Meier and multivariable analyses were performed among patients without surveillance during the baseline period. All results were stratified by treated versus untreated patients. Statistical analyses were performed using SAS software version 9.4 (SAS Institute). Statistical significance was defined as p ≤ 0.05.

RESULTS

Only 43.3% of patients had evidence of HCC surveillance during the baseline period, with ultrasound being the most common modality (33.2%) followed by AFP (29.4%). The proportion of patients with prior surveillance was almost twice as high among treated versus untreated patients.

[Figure 2. Patient identification and attrition. (a) At least two nondiagnostic claims for HBV in any position on different dates during the identification period and age ≥ 40 years on the claim if male or age ≥ 50 years on the claim if female (only required on the second of the two claims; the first claim that meets the age criteria is the index date). (b) At least two nondiagnostic claims ≥ 30 days apart in positions 1 or 2 on the claim. (c) At least one claim in any position. (d) Medical or pharmacy claims for HBV treatments (adefovir dipivoxil, entecavir, interferon alfa-2b, lamivudine, peginterferon alfa-2a, telbivudine, tenofovir disoproxil, or tenofovir alafenamide fumarate) any time between the start of the baseline period and the end of the follow-up period. MAPD, Medicare Advantage with Part D.]

HCC surveillance events

The proportions of patients who received any abdominal imaging (ultrasound, CT, or MRI regardless of AFP) during follow-up were 39.3% and 51.3% at 6 and 12 months, respectively (Figure 3), with completion of abdominal imaging being substantially higher at 6 and 12 months among treated patients (51.7% and 69.6%, respectively) compared with untreated patients (36.9% and 47.6%, respectively) (p < 0.001).
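Tying together the productivity-cost and inflation-adjustment steps described in the Methods above, a minimal sketch follows. The wage and CPI values are placeholders chosen for illustration, not the actual BLS or Consumer Price Index figures used in the study.

# Hypothetical inputs; actual values came from BLS wage data and the
# medical care component of the Consumer Price Index.
visits_per_year = 2            # surveillance-related outpatient encounters
hours_lost_per_visit = 4       # assumption applied to all surveillance types
hourly_wage = 25.00            # placeholder average wage, USD
cpi_medical_2018 = 484.7       # placeholder index values for illustration
cpi_medical_base = 462.1

productivity_cost = visits_per_year * hours_lost_per_visit * hourly_wage
productivity_cost_2018 = productivity_cost * (cpi_medical_2018 / cpi_medical_base)
print(f"Yearly productivity cost (2018 USD): {productivity_cost_2018:.2f}")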
Results were similar when considering only ultrasound ± AFP, with 36.0% and 48.1% of patients overall completing any abdominal ultrasound at 6 and 12 months, respectively, and higher receipt among treated patients (45.7% and 65.2%, respectively) compared with untreated patients (34.1% and 44.7%, respectively) (p < 0.001) (Figure 4A). The proportions of patients who received ultrasound with AFP were even lower overall (13.9% and 19.6% at 6 and 12 months, respectively), although receipt was still higher among treated versus untreated patients at both time points (p < 0.001) (Figure 4B). Notably, a relatively large proportion of patients received AFP alone: 24.2% at 6 months and 32.5% at 12 months (Figure 4C). In a sensitivity analysis that increased the time permitted between ultrasounds and AFP tests from 14 days to 60 days, the overall proportion of patients receiving AFP alone remained substantial: 22.0% and 29.3% at 6 and 12 months, respectively.

Proportion of days covered

Overall, patients' PDC with imaging-based HCC surveillance was only 0.28 (SD 0.30) during the follow-up period (Figure 5A). PDC was higher for treated versus untreated patients (0.43 vs. 0.25; p < 0.001). In the subset of individuals with at least one surveillance event during follow-up (n = 4250), PDC was 0.45 (SD 0.26) and was higher for treated versus untreated patients (PDC 0.53 vs. 0.43; p < 0.001) (Figure 5B).

Factors associated with HCC surveillance

In a proportional hazards model adjusted for treatment status, patient demographics, presence of high-deductible health plan, baseline Charlson comorbidity score category, and select comorbidities, patients with treated HBV were more likely to receive HCC surveillance during follow-up compared with untreated patients (hazard ratio [HR] 1.75, 95% confidence interval [CI] 1.53-2.01, p < 0.001) (Table 2). Younger age and Northeast or West/Other geographic region (vs. South) were associated with increased follow-up surveillance, whereas higher baseline comorbidity burden was associated with lower surveillance receipt. The effect of baseline gastroenterology care was not significant (95% CI 0.99-1.24; p = 0.068) (Table 2). An ordinary least squares model adjusted for treatment status, patient demographics, presence of high-deductible health plan, baseline Charlson comorbidity score category, select comorbidities, and follow-up length was also fitted.

[Figure 3. Completed follow-up surveillance events. Surveillance events that included any abdominal imaging were considered to be complete. p < 0.001 for difference among survival curves.]

Cost outcomes

Total and patient-paid mean daily costs of outpatient surveillance were highest for MRI only ($1717 and $334, respectively) and lowest for ultrasound only ($415 and $99, respectively) (Figure 6A). Total median daily costs were lower than mean daily costs due to a skewed distribution, but remained highest for MRI only ($1261) and lowest for ultrasound only ($234) (Figure 6A). Daily surveillance costs were not appreciably different between treated and untreated patients (Table S2).

DISCUSSION

Routine surveillance is essential for patients with chronic HBV infection, including those without cirrhosis, who generally have well-preserved hepatic function and are therefore more likely to be eligible for curative treatments if diagnosed with HCC at an early stage.
[5,6,32] However, in this study we found that after 6 months of follow-up, only 36% of individuals without cirrhosis but with HBV infection had received an abdominal ultrasound (the primary recommended HCC surveillance modality), and close to half of patients had received no abdominal imaging at all. Although surveillance was significantly higher among those with evidence of HBV treatment versus untreated individuals (45.7% vs. 34.1% at 6 months and 65.2% vs. 44.7% at 12 months), it was still notably underused even in the former group, which would presumably include the highest-risk patients. Moreover, patients who underwent surveillance experienced a substantial financial burden, with mean out-of-pocket costs ranging from $99 to $334 on the day of surveillance, depending on modality, and sizeable annual productivity costs. Survey data indicate that many patients perceive cost as a significant barrier to HCC surveillance receipt. [25,26] The present study quantitatively assesses the patient financial burden associated with HCC surveillance among individuals with HBV without cirrhosis in the United States. [33] As in our previous analysis conducted among patients with cirrhosis, [34] health plans paid the majority of costs for surveillance-related visits, but patients' out-of-pocket expenses remained high, particularly for MRI and CT surveillance. The estimated yearly patient productivity costs of $321 (using BLS wage data) were markedly lower than the $1471 we observed previously for patients with cirrhosis and HBV infection, [34] likely because patients with cirrhosis are sicker than those without and require more testing, higher-intensity care, and more frequent outpatient visits; [35] they would nevertheless constitute a substantial burden for many Americans. We also found that a substantial proportion of patients with HBV infection received only AFP testing during follow-up. AFP testing in the absence of abdominal imaging is not a guideline-recommended mechanism for HCC surveillance; however, its frequent use may suggest broad acceptance of blood-based screening tests on the part of providers and patients alike. Taken together with our cost findings, these results point to the development of blood-based biomarkers as a potential avenue for reducing HCC surveillance underuse and increasing test effectiveness in this population. [36] Compared with imaging, blood tests are generally more accessible and require minimal time commitment. [37] Furthermore, as they are a familiar feature of routine primary care visits for many patients, inclusion of another test on the panel would require no additional effort or productivity loss, potentially decreasing barriers to surveillance. This may be particularly relevant for patients with HBV infection, who typically undergo regular blood-based assessments to monitor HBV status. The development of novel biomarkers may also represent a cost-effective way to expand HCC screening to other groups that are not included in current HCC surveillance recommendations but have been found to have increased risk, such as men under age 40 or women under age 50 with chronic HBV infection but not cirrhosis. [38] The findings of this study also augment a large body of existing evidence that surveillance is underused among multiple subgroups of patients at high risk for HCC.
[9,12,14,21,33] Interestingly, adherence to surveillance guidelines in the present study was similar to that observed in our previous analysis conducted among patients with cirrhosis, in which 34% had received an abdominal ultrasound at 6 months. [33,34] This outcome was somewhat surprising, as patients with HBV infection without cirrhosis have previously been reported to have lower adherence to HCC surveillance guidelines, despite having high HCC risk. [12,14] As these earlier studies were conducted in 2009 and 2014, respectively, this could suggest that some progress has been made in reducing surveillance underuse among patients with HBV in the past decade. We found that receipt of gastroenterology care during the baseline period did not have a significant effect on adherence to recommended surveillance in the present analysis. This was in contrast to our previous study and others, which have found specialist care to be associated with improved adherence. [12,13,15,34,39] Given that HBV is often managed by gastroenterologists and we found that treated patients had significantly higher follow-up surveillance than untreated patients, we hypothesize that HBV treatment status may essentially have functioned as a surrogate for guideline-concordant provider behavior due to collinearity between HBV treatment status and provider specialty. Our findings may also reflect providers' assessment of patient risk, as providers may have been more likely to recommend surveillance for patients perceived to be at high risk for liver-related outcomes (such as those whose HBV was sufficiently progressed to warrant antiviral treatment). However, it should be noted that substantial underuse of surveillance was observed even among HBV-treated patients, who are presumably at high risk for HCC. Geographic region also had a significant effect on HCC surveillance adherence in the present study, with patients located in the Northeast or West being more likely to have surveillance during follow-up than those in the South. We speculate that these findings are due to regional differences in the distribution of both patients and providers. Although information on patient race and ethnicity was not available in this analysis, the burden of HBV in the United States is known to fall disproportionately on foreign-born individuals, who constitute an estimated 60%-70% of those living with HBV and are primarily of Asian or African origin. [40-44] As these high-risk populations are concentrated in the northeastern and western United States, [45] it is plausible that HBV awareness and/or the availability of care providers with knowledge of HBV management and HCC surveillance guidelines would be higher in these regions than in the South. In addition, localities with sizeable foreign-born populations have been targeted for community-based HBV outreach programs that have been shown to increase awareness of HBV and facilitate linkage to care for infected individuals. [46-49] Notably, older age and higher baseline comorbidity burden were significantly associated with lower HCC surveillance. These findings suggest that the challenges involved in managing multiple conditions for patients who are in poorer health may increase the likelihood that surveillance recommendations will be overlooked, a concerning possibility given that increased age is a risk factor for HCC. [5] Conversely, this finding may reflect appropriate provider decisions regarding the lower value of HCC surveillance in patients with a high competing risk of mortality.
[50]

Study limitations

The results of this study should be considered in light of several limitations. First, surveillance estimates were modeled in a population that was screening-naïve during the baseline period; however, surveillance receipt may be higher among patients with prior surveillance before the index date. Second, the presence of a diagnosis code on a claim is not proof of disease, as codes may have been entered incorrectly or included as rule-out diagnoses. Patient misidentification was minimized by requiring at least two nondiagnostic claims for HBV during the identification period; however, this may have caused surveillance to be overestimated, as patients with only one HBV code were excluded. Third, information on factors that contribute to HCC risk and may affect screening recommendations for patients with HBV infection (e.g., patient race/ethnicity, hepatitis delta virus infection) was not available for this study; and while diagnosis codes for family history of HCC exist, they were not included in this analysis, as it is unclear to what extent they would have been captured in the 12-month baseline period. Without these data, it is possible that some patients who did not meet HCC surveillance criteria were inadvertently included in the study population. This may be particularly true of untreated patients, which may have contributed to the lower estimates of surveillance receipt observed for this group. Fourth, while blood tests may generally require less time than imaging-based surveillance, a standard estimate of 4 work hours lost per encounter was used for all surveillance methods in the patient productivity cost calculations to help account for factors such as travel time and work hours lost by individuals providing transportation assistance; this may underestimate the cost differential between imaging-based and blood test-based surveillance. In addition, it was not possible for this study to distinguish the ancillary costs of services that occurred on the same day as the AFP laboratory test, including phlebotomy and other charges related to testing or office visits. Together, these factors may have led to overestimation of costs for AFP testing. Finally, because this analysis was conducted in a US population with commercial or MAPD insurance, study results may not be generalizable to populations such as patients who are uninsured, enrolled in Medicaid, or outside the United States. However, uninsured or underinsured populations may have more barriers to medical care overall, potentially resulting in even poorer adherence to HCC surveillance.

CONCLUSIONS

Patients with HBV infection experienced substantial economic burden due to health care encounters related to HCC surveillance. Furthermore, use of HCC surveillance was low in this patient population, potentially limiting surveillance effectiveness in clinical practice. The development of accessible and easy-to-implement biomarkers with sufficient accuracy for effective early-stage HCC detection could help reduce barriers to patient adherence and thereby improve implementation of surveillance programs.
2022-10-01T06:16:13.599Z
2022-09-30T00:00:00.000
{ "year": 2022, "sha1": "f47e4af2a3fec688c34b02bb54a40ae39afdc07e", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1002/hep4.2087", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "8ac5f74b36a1310d3c7bd554446fc54138fbc1a7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
251852743
pes2o/s2orc
v3-fos-license
Mahuang Decoction Attenuates Airway Inflammation and Remodeling in Asthma via Suppression of the SP1/FGFR3/PI3K/AKT Axis

Background/Purpose

Mahuang decoction (MHD) is a classic and famous traditional Chinese medicine with various pharmacological effects, including anti-inflammatory and anti-asthmatic activity. In this study, we aimed to investigate the potential protective effect of MHD against asthma and to elucidate the underlying mechanism.

Materials and Methods

A mouse model of asthma was induced by ovalbumin (OVA) treatment and then treated with MHD to evaluate its effect on asthma. Gain- or loss-of-function approaches were performed for SP1 and FGFR3 to study their roles in asthma via measurement of airway inflammation, airway remodeling, and airway smooth muscle cell (ASMC) proliferation-related factors.

Results

MHD reduced airway inflammation and remodeling. Additionally, MHD diminished the expression of SP1, the silencing of which was shown to repress airway inflammation and remodeling. Furthermore, SP1 bound to the FGFR3 promoter, promoting FGFR3 transcription and ASMC proliferation. Conversely, FGFR3 knockdown abolished airway inflammation and remodeling, a mechanism related to suppression of the PI3K/AKT signaling pathway. Meanwhile, MHD hindered airway inflammation and remodeling following asthma by suppressing the SP1/FGFR3/PI3K/AKT axis.

Conclusion

Taken together, MHD may retard airway inflammation and remodeling by suppressing the SP1/FGFR3/PI3K/AKT axis, which contributes to an extensive understanding of asthma and may provide novel therapeutic options for this disease.

Introduction

Asthma is a highly heterogeneous disease, encompassing both atopic and non-atopic phenotypes, 1 and frequently manifests with dyspnea, wheeze, chest tightness, and cough. 2 Pediatric asthma, the most common chronic disease of childhood, places a significant burden on the health care system. 3 Airway inflammation and remodeling are among the most important pathological features of asthma. 4 Therefore, identification of the specific molecular mechanisms underlying airway inflammation and remodeling facilitates the development of treatment approaches for the management of asthma. MHD is a traditional Chinese medicine composed of four different herbs: Ephedrae herba, Armeniacae semen, Cinnamomi ramulus, and Glycyrrhizae radix, and has been widely used as a prescription for allergic reactions and inflammation for many years. 5,6 MHD has been reported to be an effective treatment option for asthma because of its role in mitigating airway inflammation. 7,8 A recent study has highlighted that modified MHD is capable of inhibiting inflammatory responses caused by cigarette smoke in human airway epithelial cells. 6 SP1 exerts critical functions in human diseases through its modulation of genes associated with cellular processes in mammalian cells. 9 A recent work revealed SP1 as an intensively connected hub gene in the integrated network of DNA methylation and gene expression influencing asthma development. 10 Additionally, the linkage of SP1 to the airway remodeling induced by WNT-5A has been documented. 11 As previously confirmed, fibroblast growth factor receptor 3 (FGFR3) is an Sp-regulated gene in bladder cancer cells. 12 Suppressed FGFR3 contributes to ameliorated airway inflammation and remodeling in an ovalbumin (OVA)-induced mouse model of chronic asthma.
13 Meanwhile, the PI3K/AKT pathway is a well-established downstream signaling cascade of FGFR3, which positively regulates its activity. 14 Inhibition of the PI3K/AKT signaling pathway is critical for suppression of airway inflammation and remodeling in asthmatic mice. 15 Therefore, we hypothesized that MHD might play a therapeutic role in preventing the development and progression of asthma via the SP1/FGFR3/PI3K/AKT axis. To address this hypothesis, we studied the effect of MHD on airway inflammation and remodeling during asthma and elucidated the underlying mechanism, so as to provide guidance for the treatment of asthma with MHD. The experimental results suggested that MHD intervention exerted a strong inhibitory action on asthma by suppressing the SP1/FGFR3/PI3K/AKT axis.

Ethics Statement

Animal experiments were approved by the Ethics Committee of Changchun University of Traditional Chinese Medicine and conducted on the basis of the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health. Every effort was made to limit the animals' pain.

Bioinformatics Analysis

The GeneCards, CTD, and DisGeNET databases were used to retrieve asthma-related genes, which were then intersected using the jvenn tool to identify the candidate targets (a schematic of this intersection step is sketched after this section). The SymMap database was used to retrieve the targets of MHD. The obtained targets of MHD were intersected with the asthma-related candidate targets to obtain the candidate targets of MHD in the treatment of asthma. The drug-target network was visualized with Cytoscape 3.5.1 software. Functional enrichment analysis of the targets was then performed using the Metascape database, and the hTFtarget database was used to predict downstream targets. The asthma-related gene expression dataset GSE27876 was downloaded from the GEO database. The dataset includes 5 normal samples and 5 asthma samples, annotated with the GPL6480 platform file. The "limma" package in R was applied for differential gene expression analysis, with logFC > 1.5 and p value < 0.05 as the thresholds to screen significantly highly expressed genes.

High-Performance Liquid Chromatography (HPLC) Analysis of MHD Extract Components

Nine marker components of MHD (ephedrine HCl, amygdalin, 6-gingerol, glycyrrhizin, liquiritin apioside, liquiritin, cinnamaldehyde, cinnamic acid, and coumarin) were analyzed by HPLC (Shimadzu Corp., Kyoto, Japan). Chromatographic data were analyzed using LabSolutions software (Version 5.54 SP3, Shimadzu). Next, the nine components were subjected to chromatographic separation on a SunFire C18 column. For simultaneous determination, 100 mg of the freeze-dried modified MHD powder dissolved in 20 mL of distilled water was extracted for 20 min at ambient temperature with the help of an ultrasonicator (Branson 8510, Danbury, CT) and filtered through a 0.2-μm filter (PALL Life Sciences, MI). 6

Establishment of OVA-Induced Asthma Mouse Models

Ninety healthy 6-8-week-old C57BL/6 wild-type (WT) female mice (weighing 20 ± 2 g; Beijing Vital River Laboratory Animal Technology Co., Ltd., Beijing, China) were housed in the animal experiment center of Shanghai Jiading Traditional Chinese Medicine Hospital at 40-60% humidity and 21-27°C under a 12-h light/dark cycle, with free access to food and water. FGFR3 gene knockout (FGFR3−/−) mice were purchased from Cyagen Bioscience Inc. (Suzhou, Jiangsu, China). All mice were acclimatized for one week before the experiment.
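As a schematic of the target-intersection step described under Bioinformatics Analysis above, the following is a minimal sketch using Python sets in place of the jvenn tool. The gene lists and the differential-expression table are toy placeholders, not the actual GeneCards/CTD/DisGeNET retrievals or GSE27876 results.

# Toy gene lists standing in for database query results.
genecards = {"SP1", "RELA", "NFKB1", "JUN", "STAT1", "IL6"}
ctd = {"SP1", "RELA", "NFKB1", "TNF", "STAT1"}
disgenet = {"SP1", "RELA", "JUN", "NFKB1", "STAT1"}
mhd_targets = {"SP1", "RELA", "STAT1", "PTGS2"}

# Candidate asthma genes: present in all three disease databases.
asthma_candidates = genecards & ctd & disgenet

# Candidate targets of MHD in asthma: intersection with the drug's targets.
mhd_in_asthma = asthma_candidates & mhd_targets
print(sorted(mhd_in_asthma))

# Differential-expression screen analogous to the limma thresholding
# (logFC > 1.5 and p < 0.05) applied to GSE27876; tuples are invented.
de_table = [("FGFR3", 2.1, 0.01), ("ERBB3", 1.8, 0.03), ("ACTB", 0.2, 0.6)]
upregulated = [gene for gene, logfc, p in de_table if logfc > 1.5 and p < 0.05]
print(upregulated)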
The WT mice were randomized into the control group (treated with saline, n = 10) and the OVA group (treated with OVA, n = 80). Twenty FGFR3−/− mice were randomized into the Con + FGFR3−/− group (n = 10) and the OVA + FGFR3−/− group (n = 10). The mice in each group were numbered for noninvasive lung function testing and for the preparation of lung tissue specimens and bronchoalveolar lavage fluid (BALF). OVA-treated mice were sensitized by intraperitoneal injection of 0.2 mL of sensitization solution containing 100 μg OVA (Sigma-Aldrich Chemical Company, St Louis, MO) and 2 mg aluminum hydroxide (Pierce Biotechnology Inc., Rockford, IL) on day 1 and day 14. Control mice were intraperitoneally injected with 0.2 mL of normal saline. On day 21, mice were anesthetized with 1% pentobarbital sodium followed by intranasal injection of 30 μg OVA. As shown in Figure S1, the 80 OVA-induced asthma model mice were randomized into eight groups (10 mice in each group): the OVA group (intraperitoneal injection of OVA), the OVA + MHD group (intraperitoneal injection of OVA and oral administration of 20 mg/g MHD), the OVA + DEX group (intraperitoneal injection of OVA and oral administration of 3 mg/kg DEX [15032521, Anhui Jintaiyang Pharmaceutical Co., Ltd., Anhui, China]), the OVA + sh-NC group (intraperitoneal injection of OVA and intranasal instillation of adenovirus carrying the negative control), the OVA + sh-SP1 group (intraperitoneal injection of OVA and intranasal instillation of adenovirus carrying sh-SP1), the OVA + MHD + oe-NC group (intraperitoneal injection of OVA, oral administration of 20 mg/g MHD, and intranasal instillation of adenovirus carrying the negative control), the OVA + MHD + oe-SP1 group (intraperitoneal injection of OVA, oral administration of 20 mg/g MHD, and intranasal instillation of adenovirus carrying oe-SP1), and the OVA + LY294002 group (intraperitoneal injection of OVA and intraperitoneal injection of 1.5 mg/kg LY294002 [Calbiochem, France Biochem, Meudon, France], a PI3K/AKT signaling pathway inhibitor). From the 21st day, 30 min before the intranasal OVA treatment, the mice in the OVA + MHD group were given oral MHD at a dose of 20 mg/g, the mice in the OVA + DEX group were intraperitoneally injected with 2 mg/kg DEX, and the mice in the OVA + LY294002 group were intraperitoneally injected with 1.5 mg/kg LY294002; all of these treatments lasted for 3 consecutive days, until the 23rd day. The mice in the OVA + sh-NC, OVA + sh-SP1, OVA + MHD + oe-NC, and OVA + MHD + oe-SP1 groups received intranasal instillation of 10 μL of 3.5 × 10⁹ PFU adenovirus (Sangon Biotech, Shanghai, China) 30 min before the intranasal OVA treatment on the 21st day. All adenoviruses were procured from Shanghai Sangon Bioengineering Co., Ltd. (Shanghai, China).

Measurement of Airway Hyperresponsiveness (AHR)

Using the AniRes 2005 animal pulmonary function system, AHR was tested by methacholine (MeCh; Sigma) challenge within 24 h after the final OVA exposure. The respiratory rate was pre-set at 90/min, and the expiration/inspiration time ratio was set at 20:10. AHR was then assessed using expiratory resistance (Re), inspiratory resistance (Ri), and the minimum value of dynamic compliance (Cdyn). The R-areas of Ri and Re (the graph area between the peak value and baseline) and the trough of Cdyn were obtained for further analysis.

Preparation of Lung Tissue Specimens and BALF

Within 24 h after the final OVA exposure, the mice were anesthetized. BALF was collected by instilling 0.8 mL of cold PBS into the lung through a self-made tracheal cannula (Figure S2) and withdrawing the liquid.
This process was repeated three times to collect a total of 2-5 mL of fluid from each mouse. After that, the obtained BALF sample was centrifuged at 200 g and 4°C for 10 min, and the supernatant was harvested and stored at −80°C for subsequent enzyme-linked immunosorbent assay (ELISA). The cell sediment was suspended in 200 μL of PBS, centrifuged, and subjected to Wright-Giemsa staining. The percentage of eosinophils in the BALF was calculated by counting 100 cells in randomly selected areas using an optical microscope (Olympus, Japan).

Hematoxylin-Eosin (HE) Staining

After BALF collection, the lungs were inflation-fixed with 10% neutral formalin at 25 cm water pressure for 5 min. Next, the left upper lobe lung tissue was harvested, fixed in 4% paraformaldehyde for 24 h, paraffin-embedded, and cut into 5-μm-thick sections. Ten bronchial cross-sections with diameters of 100-200 μm were randomly selected from each section under a light microscope (× 200). HE staining images were obtained using a microscope (Leica-DM2500, Germany) and analyzed using Image Pro Plus 7.1 software (Media Cybernetics, Silver Spring, MD).

Periodic Acid-Schiff (PAS) Staining

This assay was conducted as previously reported. 16 Paraffin sections were observed under a microscope to assess the pathological changes in lung tissues.

Masson's Trichrome Staining

Paraffin sections were stained with hematoxylin and with ponceau-acid fuchsin solution, and hydrolyzed with 1% molybdophosphoric acid aqueous solution. Next, the sections were stained with 1% aniline blue or light green solution for 5 min and treated with 1% glacial acetic acid aqueous solution for 5 s. The sections were then dehydrated in an ascending alcohol series and examined for collagen deposition in the airway wall under a microscope (collagen fibers: blue; muscle-fiber cytoplasm: red; nuclei: blue-brown).

Immunohistochemistry (IHC)

Lung tissues of mice were paraffin-embedded and cut into 4-μm-thick sections, which were then submitted to antigen retrieval. The sections were immunostained with primary antibodies against SP1 (ab227383, 1:100, Abcam Inc., Cambridge, UK) and FGFR3 (MA5-32620, Thermo Fisher Scientific Inc., Waltham, MA) at 4°C overnight. The following day, the sections were incubated with a biotinylated goat anti-rabbit IgG secondary antibody (1:1000, ab6721, Abcam) for 20 min, followed by additional incubation with HRP-streptavidin (Innova Biosciences) for 20 min. Following development with DAB, the sections were counterstained with hematoxylin and observed under a microscope (Leica-DM2500, Leica, Germany). Image-Pro Plus (version 7.1, Media Cybernetics) was adopted for quantitative analysis.

RT-qPCR

Total RNA was extracted from airway smooth muscle cells (ASMCs) or mouse lung tissues using TRIzol reagent (Invitrogen, Carlsbad, California), with the quantity and concentration determined by an ultraviolet-visible spectrophotometer (ND-1000, Nanodrop). The RNA was then reverse-transcribed into cDNA employing a reverse transcription kit (RR047A, Takara Bio Inc., Otsu, Shiga, Japan). RT-qPCR was performed employing the SYBR® Premix Ex Taq™ II (Perfect Real Time) kit (DRR081, Takara) on an ABI 7500 instrument (Applied Biosystems, Foster City, CA). The primers are shown in Table S1. GAPDH was used as the normalizer, and fold changes were calculated by the 2^−ΔΔCt method.
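To illustrate the 2^−ΔΔCt fold-change calculation named above, a minimal sketch follows. The Ct values are invented for illustration, with GAPDH as the normalizer as in the study.

def fold_change(ct_target_sample, ct_gapdh_sample,
                ct_target_control, ct_gapdh_control):
    """Relative expression by the 2^-ddCt method."""
    d_ct_sample = ct_target_sample - ct_gapdh_sample     # normalize sample
    d_ct_control = ct_target_control - ct_gapdh_control  # normalize control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Invented Ct values: a target gene in OVA lung vs. control lung,
# each normalized to GAPDH.
print(fold_change(24.0, 18.0, 26.5, 18.5))  # 2^-(6.0 - 8.0) = 4.0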
Isolation, Culture, and Identification of ASMCs

Mouse lung tissues were digested with collagenase (Beijing Huamaike Biotechnology Inc., Beijing, China) by incubation with Dulbecco's modified Eagle's medium (DMEM) (SH30021, Beijing North TZ-Biotech Develop Inc., Beijing, China) containing 0.15% collagenase in a constant-temperature electromagnetic stirrer (1008011909, Tianjin Shidanda Trade Inc., Tianjin, China) at 37°C for 30 min, followed by centrifugation at 100 g for 5 min. Next, the obtained precipitate was mixed with low-glucose DMEM supplemented with 20% fetal bovine serum (FBS) to prepare a cell suspension. The cell suspension was then treated with 0.25% trypsin containing 0.02% EDTA in a 37°C incubator, followed by reaction termination. Thereafter, the cells were centrifuged at 200 g for 5 min, resuspended in DMEM supplemented with 20% FBS, and cultured at 37°C in a 5% CO2 incubator. The medium was renewed once every two to three days. At 80% confluence, cells were trypsinized and observed under a microscope. The cells were purified using a differential adherence method, with a cell purity of 100%. ASMCs at passages 5-8 were used for the subsequent experiments. Finally, the primary ASMCs were identified by immunocytochemistry staining.

Immunocytochemistry Staining

ASMCs were seeded onto sterile cover glasses to reach 70% confluence, fixed in 4% paraformaldehyde for 20 min, and immunolabeled with an anti-α-SMA antibody (MA5-11544, Thermo Fisher Scientific) at 4°C overnight. The cells were then incubated with a biotin-labeled rabbit anti-mouse IgG secondary antibody. The signal was detected by adding fast red, and the cells were counterstained with modified Mayer's hematoxylin. The cover glass was mounted and visualized with AxioVision software (Carl Zeiss, Inc., Thornwood, NY).

Cell Grouping and Transfection

ASMCs were isolated from control and OVA-treated mice. ASMCs from OVA-treated mice were then transfected with 50 ng/mL of sh-NC, sh-SP1, or sh-FGFR3 plasmids (Sangon), or treated with 1% DMSO (Calbiochem) or 25 μM LY294002. ASMCs from control mice were used as the controls.

Chromatin Immunoprecipitation-Polymerase Chain Reaction (ChIP-PCR)

A ChIP kit (Thermo Fisher Scientific) was used for this assay. Cells were collected after the indicated treatments and fixed with 1% formaldehyde to produce DNA-protein cross-links. The cells were then sonicated to shear the chromatin into fragments. Next, the cell lysate was incubated with the antibody against SP1 (rabbit, ab227383, 1:50, Abcam) to immunoprecipitate the complex. RT-qPCR was performed to quantify the ChIP products. The primer sequences for the FGFR3 promoter were as follows: forward, 5'-AGGCCCCATCAACAAAGGAG-3'; reverse, 5'-GTGACCAACCCTCAGACCAGG-3'.

Dual-Luciferase Reporter Assay

The JASPAR database was applied to predict the binding sites between SP1 and the FGFR3 promoter sequence (about 1000 bp upstream of the FGFR3 transcription initiation site).

Cell Counting Kit-8 (CCK-8) Assay

A CCK-8 kit (K1018, Apexbio) was used for assessing cell proliferation. ASMCs in each group were seeded into 96-well plates with 100 μL of medium containing 10% FBS at a density of 1 × 10³ cells/well. After culture for 12, 24, 36, and 48 h, each well was supplemented with 10 μL of CCK-8 solution and incubated at 37°C for 2 h. Subsequently, the optical density values at 450 nm were determined with a microplate reader (51119080, Thermo Fisher Scientific). Each experiment was set up with 5 parallel wells and repeated three times independently.
ELISA

Following the instructions of the ELISA kits (Dakewe Biotech Company), the levels of the inflammatory factors interleukin (IL)-4, IL-6, and tumor necrosis factor-α (TNF-α) in the collected mouse BALF and in the supernatant of ASMCs were measured.

Statistical Analysis

SPSS 21.0 statistical software (IBM Corp., Armonk, NY) was used for data analysis. Measurement data are described as mean ± standard deviation. Data between two groups were analyzed using the unpaired t-test, while data among multiple groups were assessed using one-way analysis of variance (ANOVA), followed by Tukey's post hoc tests (an illustrative sketch of these group comparisons is given after the first results subsection below). Repeated-measures ANOVA was used for data comparison at different concentrations, and two-way ANOVA for comparison of cell viability at different time points. p < 0.05 indicated statistical significance.

Results

MHD Inhibits Airway Inflammation and Remodeling of Asthmatic Mice

In order to verify the role of MHD in asthma, we first constructed a mouse asthma model induced by OVA. The model mice were then treated with MHD and subjected to a MeCh-stimulated AHR test. Ri and Re in the OVA-treated mice were found to increase progressively with the MeCh dose, while the trough value of Cdyn decreased. OVA exposure at each time point had a significant impact on Ri, Re, and Cdyn, indicating the successful establishment of the mouse asthma model. In addition, MHD treatment markedly reduced the changes in lung function, with an effect similar to that of DEX treatment (Figure 1A-C). The number of total cells and the percentage of eosinophils were increased in the BALF of OVA-treated mice, while treatment with MHD or DEX reversed this result (Figure 1D and E). Moreover, ELISA data showed an increase in the levels of IL-4, IL-6, and TNF-α in the BALF of OVA-treated mice, while treatment with MHD or DEX decreased their levels (Figure 1F-H). Analysis of the lung tissue using HE, PAS, and Masson's trichrome staining revealed severe peribronchial inflammatory infiltration, increased mucus secretion and collagen deposition, and obviously thickened airway walls in the lung tissues of OVA-treated mice; conversely, treatment with MHD or DEX reversed these changes (Figure 1I and J). These results suggested that MHD could inhibit airway inflammation in asthmatic mice. We then moved to explore the potential molecular targets of MHD in relieving airway inflammation and remodeling following asthma. The GeneCards, CTD, and DisGeNET databases were searched to retrieve asthma-related genes, which were then intersected using the jvenn tool (Figure 2A). Following intersection analysis with the targets of MHD, 63 genes were obtained (Figure 2B). Cytoscape 3.5.1 was adopted to construct a drug-target regulatory network (Figure 2C). Enrichment analysis using the Metascape database revealed that the 63 genes may be related to reperfusion injury, inflammation, and asthma (Figure 2D), of which RELA, NFKB1, JUN, SP1, and STAT1 may be the key genes (Figure 2E). The results of RT-qPCR and IHC demonstrated that the expression of SP1 was increased in the lung tissues of OVA-treated mice, whereas treatment with MHD or DEX led to the opposite trend (Figure 2F and G). Collectively, SP1 was highly expressed in the lung tissue of asthmatic mice, and MHD could inhibit the expression of SP1. Therefore, we chose SP1 as the target gene for subsequent experiments.
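Looking back at the Statistical Analysis subsection, the following is a minimal sketch of a one-way ANOVA followed by Tukey's post hoc test on invented measurements. It assumes scipy and statsmodels are available and is not the study's actual SPSS analysis.

from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented IL-6 levels (pg/mL) for three groups of mice.
control = [12, 14, 11, 13, 12]
ova = [45, 50, 48, 52, 47]
ova_mhd = [28, 30, 26, 31, 27]

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(control, ova, ova_mhd)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's post hoc test for pairwise group comparisons.
values = control + ova + ova_mhd
groups = ["control"] * 5 + ["OVA"] * 5 + ["OVA+MHD"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))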
Silencing of SP1 Suppresses Airway Inflammation and Remodeling of Asthmatic Mice

Next, we probed into the role of SP1 in airway inflammation and remodeling following asthma. Initial results identified reduced expression of SP1 in the lung tissues of OVA-treated mice treated with sh-SP1 (Figure 3A and B), indicating the efficiency of SP1 knockdown in mice. As shown in Figure 3C-G, AHR was abated and the number of total cells and the percentage of eosinophils were decreased in OVA-treated mice following SP1 knockdown. Meanwhile, ELISA data showed declines in the levels of IL-4, IL-6, and TNF-α in the BALF of OVA-treated mice treated with sh-SP1 (Figure 3H-J). Further analysis of the lung tissues using HE, PAS, and Masson's trichrome staining suggested that SP1 knockdown decreased peribronchial inflammatory infiltration, mucus secretion, collagen deposition, and the thickness of the airway wall in lung tissues of OVA-treated mice (Figure 3K and L). Cumulatively, silencing of SP1 could reduce airway inflammation and remodeling in asthmatic mice.

SP1 Promotes ASMC Proliferation Through Transcriptional Activation of FGFR3

To elucidate the mechanism by which SP1 affects airway inflammation and remodeling during asthma, we first employed the hTFtarget database to predict the downstream target genes of SP1, which were then subjected to Venn diagram analysis with the significantly highly expressed genes obtained from the asthma-related dataset GSE27876. Seven genes were found at the intersection: SEMA3F, ASB4, TOMM34, PTPRN, SEZ6, ERBB3, and FGFR3 (Figure 4A and B). RT-qPCR results further confirmed that FGFR3 showed the highest expression in the lung tissues of OVA-treated mice among these genes (Figure 4C). It has been reported that inhibiting the expression of FGFR3 can reduce airway inflammation, and that SP1 can bind to the FGFR3 promoter region. 17,18 Therefore, we speculated that SP1 may promote airway inflammation and remodeling during asthma by modulating the transcription of FGFR3. In addition, the expression of FGFR3 was found to be elevated in the lung tissues of OVA-treated mice, whereas SP1 knockdown inhibited FGFR3 expression (Figure 4D and E). The JASPAR database predicted the binding sites of SP1 in the FGFR3 promoter. Dual-luciferase reporter assay further showed that SP1 promoted the luciferase activity of FGFR3-WT while inhibiting that of FGFR3-MUT-1, with no significant effects on the luciferase activity of FGFR3-MUT-2, FGFR3-MUT-3, or FGFR3-MUT-4 (Figure 4F), indicating that SP1 could target FGFR3 promoter Site 1 and activate its transcription. Immunocytochemistry staining confirmed the successful isolation of primary ASMCs from mouse lung tissues (Figure S3). ChIP-PCR results showed an increase in the enrichment of SP1 at the FGFR3 promoter region in ASMCs from OVA-treated mice (Figure 4G). Moreover, elevated expression of SP1, FGFR3, COL1A2, and COL3A1 was observed in ASMCs from OVA-treated mice, which was reversed following treatment with sh-SP1, whereas combined treatment with sh-SP1 and oe-FGFR3 restored FGFR3, COL1A2, and COL3A1 expression (Figure 4H and I). CCK-8 assay results demonstrated an upward trend in the proliferation of ASMCs from OVA-treated mice, but SP1 knockdown impaired this proliferation. In the presence of both SP1 knockdown and FGFR3 overexpression, cell proliferation was increased (Figure 4J).
Therefore, SP1 could bind to the FGFR3 promoter and activate its transcription to stimulate ASMC proliferation and thus aggravate airway inflammation and remodeling during asthma.

Downregulation of FGFR3 Expression Inhibits ASMC Proliferation and Airway Inflammation and Remodeling of Asthmatic Mice

We then elucidated the role of FGFR3 transcription in airway inflammation and remodeling following asthma. AHR was abated and the number of total cells and the percentage of eosinophils were decreased in OVA-treated mice following FGFR3 knockout (Figure 5A-E). In addition, the results of ELISA showed no difference in the levels of IL-4, IL-6, and TNF-α in the BALF between WT and FGFR3−/− mice after normal saline treatment. However, following OVA treatment, FGFR3−/− mice had decreased IL-4, IL-6, and TNF-α levels compared with WT mice (Figure 5F-H). HE staining results showed no significant changes in the airway wall thickness of WT and FGFR3−/− mice after normal saline treatment; however, in the presence of OVA treatment, the airway wall thickness of FGFR3−/− mice was decreased in comparison with WT mice (Figure 5I). Next, we knocked down the FGFR3 gene in primary ASMCs. We found reductions in the expression of FGFR3, COL1A2, and COL3A1 in ASMCs from OVA-treated mice treated with sh-FGFR3 (Figure 5J and K). CCK-8 assay results demonstrated a downward trend in the proliferation of ASMCs in the absence of FGFR3 (Figure 5L). These results highlighted the ability of FGFR3 knockdown to reduce ASMC proliferation and airway inflammation and remodeling.

Downregulation of FGFR3 Expression Inhibits ASMC Proliferation and Airway Inflammation and Remodeling of Asthmatic Mice by Inhibiting the PI3K/AKT Signaling Pathway

Previous literature has reported that FGFR3 activates the PI3K/AKT signaling pathway, which is involved in airway inflammation and remodeling in asthmatic mice. 19,20 We then proceeded to examine the mechanism of FGFR3 in airway inflammation and remodeling via regulation of the PI3K/AKT signaling pathway. We identified downregulated phosphorylation levels of PI3K and AKT in the lung tissues of FGFR3−/− mice, but no significant changes were observed in the total protein of PI3K and AKT (Figure 6A). In addition, the phosphorylation levels of PI3K and AKT were elevated in ASMCs from OVA-treated mice, while FGFR3 knockdown produced the opposite results (Figure 6B). Treatment with LY294002 reduced AHR, the number of total cells, and the percentage of eosinophils in OVA-treated mice (Figure 6C-G). Meanwhile, ELISA data showed lower levels of IL-4, IL-6, and TNF-α in the BALF of mice treated with OVA and LY294002 compared to mice treated with OVA alone (Figure 6H-J). Furthermore, the thickness of the airway wall was decreased in mice treated with OVA and LY294002 (Figure 6K). In addition, in ASMCs, LY294002 diminished the AKT phosphorylation level (Figure 6L). Collagen deposition was noted to be decreased in ASMCs of LY294002-treated OVA-sensitized mice (Figure 6M), and the proliferation of ASMCs was suppressed in response to LY294002 treatment (Figure 6N). Thus, downregulation of FGFR3 could suppress the PI3K/AKT signaling pathway to repress ASMC proliferation and the resultant airway inflammation and remodeling.

MHD Inhibits Airway Inflammation and Remodeling of Asthmatic Mice by Inhibiting the SP1/FGFR3/PI3K/AKT Axis

Finally, we further validated the effect of MHD on airway inflammation and remodeling during asthma through the SP1-mediated FGFR3/PI3K/AKT axis.
The results of Western blot analysis showed an enhancement of the PI3K and AKT phosphorylation levels in lung tissues of OVA-treated mice, while MHD treatment reversed these results, similar to DEX treatment (Figure 7A). Additionally, SP1 silencing led to reduced PI3K and AKT phosphorylation levels in lung tissues of OVA-treated mice (Figure 7B). These results suggested that either MHD or SP1 silencing suppressed activation of the PI3K/AKT signaling pathway in lung tissues of asthmatic mice. Further data from RT-qPCR and IHC demonstrated upregulated SP1 and FGFR3 expression in the presence of both MHD and oe-SP1 (Figure 7C and D). Meanwhile, dual treatment with MHD and oe-SP1 enhanced AHR (Figure 7E-G) and increased the number of total cells and the percentage of eosinophils in OVA-treated mice (Figure 7H and I). ELISA results also showed augmented levels of IL-4, IL-6, and TNF-α in response to both MHD and SP1 overexpression (Figure 7J-L). Further analysis of the lung tissues using HE, PAS, and Masson's trichrome staining suggested that combined treatment with MHD and oe-SP1 augmented peribronchial inflammatory infiltration, mucus secretion, and collagen deposition in lung tissues of OVA-treated mice (Figure 7M). Moreover, Western blot analysis revealed an elevation of the PI3K and AKT phosphorylation levels in lung tissues of OVA-treated mice co-treated with MHD and oe-SP1 (Figure 7N). These lines of evidence suggested that MHD could inhibit airway inflammation and remodeling in asthmatic mice by downregulating SP1 expression and inhibiting the FGFR3/PI3K/AKT signaling pathway.

Discussion

Asthma represents a chronic airway inflammatory disease mainly associated with heterogeneity. 21 MHD, a well-known traditional Chinese medicine prescription, has been extensively applied for relieving the common cold, influenza, cough, acute bronchitis, asthma, and other pulmonary diseases. 22,23 Here, this study suggested that MHD could decrease SP1 expression and disrupt FGFR3-dependent activation of the PI3K/AKT signaling pathway, consequently preventing airway inflammation and remodeling and thereby delaying asthma progression (Figure 8). It was noted in our work that MHD could alleviate airway inflammation and remodeling in mice with asthma. Shegan-MHD is capable of reducing AHR and attenuating the pulmonary infiltration of CD3 and CD4 T cells. 24 MHD can also inhibit pulmonary inflammation, evidenced by a marked inhibition of IL-1β, IL-6, and TNF-α in the BALF of cigarette smoke- and lipopolysaccharide-exposed mouse models through suppression of Erk phosphorylation. 6 Additionally, the ameliorative role of MHD in bronchial asthma symptoms has also been demonstrated, 25 which is partly in line with our findings. Moreover, MHD inhibits the release of inflammatory cytokines, and thus the occurrence of asthma, by downregulating the phosphorylation level of STAT1. 26 Airway inflammation is a main pathological feature of asthma, and current therapeutic interventions focus primarily on resolving inflammation. 27 Airway remodeling can be defined as changes in the type, quantity, and nature of airway wall components and their organization, and it is also considered one of the most important pathological features of asthma. 28 Airway remodeling is a consequence of repeated inflammatory injury and repair of the respiratory tract.
7 Meanwhile, many traditional Chinese medicines, including Shegan-MHD, have a regulatory role in airway remodeling (https://pesquisa.bvsalud.org/portal/resource/pt/wpr-468274). Ephedra, one of the components of MHD, is an effective treatment for asthma owing to its multi-target and multi-pathway functions. 29 These findings demonstrate the protective effect of MHD against asthma by mitigating airway inflammation and remodeling. The present study further identified that downregulated expression of SP1 was involved in the therapeutic effect of MHD on asthma. A number of traditional Chinese medicines can act as Sp1 antagonists to suppress its expression, such as licochalcone A, Xiaoji decoction, and Guifu Dihuang pill. 30-32 This study represents the first report to reveal the inhibitory role of MHD in the expression of SP1. In line with our results, SP1 expression was previously detected to be abundant in primary bronchial epithelial cells from subjects with severe asthma and potentiated the airway remodeling associated with H1N1 infection. 33 In addition, siRNA-mediated silencing of SP1 can prevent MMP1 expression, 34 the activation of which contributes to ASMC proliferation and subsequent asthma severity, 35 suggesting an inhibitory effect of SP1 knockdown on asthma progression. Owing to the lack of available literature, the regulation of SP1 expression by MHD requires further investigation. Furthermore, our study revealed that SP1 bound to the FGFR3 promoter and induced a resultant increase in the transcriptional activity of the FGFR3 promoter. Consistently, SP1 has been identified to bind to the FGFR3 promoter and cause a significant increase in the transcriptional activity of the FGFR3 promoter. 36 Additionally, transfection of a small inhibitory RNA cocktail (iSp) containing small inhibitory RNAs targeted to Sp1 (iSp1), Sp3 (iSp3), and Sp4 (iSp4) has been shown to decrease FGFR3 protein levels. 12 Inhibited FGFR3 expression contributes to lowered AHR and reduced expression of factors engaged in airway inflammation and remodeling in a mouse model of asthma. 17 These findings offer evidence validating the inhibitory effect of SP1 knockdown on airway inflammation and remodeling via suppression of the transcriptional activity of the FGFR3 promoter. Furthermore, siRNA-mediated silencing of FGFR3 diminishes PI3K and AKT phosphorylation levels in the context of subarachnoid hemorrhage. 37 Meanwhile, suppression of the PI3K/AKT signaling pathway through miR-221 inhibition in a murine asthma model has been shown to reduce AHR, mucus metaplasia, and airway inflammation and remodeling. 38 An increase in ASMCs is a hallmark of airway remodeling in asthma, and notably, inhibition of the PI3K/AKT signaling pathway has demonstrated an inhibitory role in ASMC proliferation. 39 Therefore, it might be plausible to suggest that FGFR3 knockdown reduced ASMC proliferation and airway inflammation and remodeling via PI3K/AKT signaling pathway inactivation.

Conclusion

Collectively, the key findings of this study provided evidence that MHD could downregulate the expression of SP1 and inhibit the transcription of FGFR3, thereby resulting in inactivation of the PI3K/AKT signaling pathway. By this mechanism, MHD inhibited ASMC proliferation, resulting in alleviation of the resultant airway inflammation and remodeling in mice with asthma.
These findings emphasize a novel aspect of the role of MHD in asthma and inspire us to develop a novel strategy based on this agent. However, further studies focusing on clinical samples collected from patients with asthma are needed for further validation of the therapeutic effects of MHD against asthma.

Data Sharing Statement

The datasets analyzed during the current study are available.

Disclosure

The authors declare no competing interests in this study.
2022-08-27T15:22:01.676Z
2022-08-01T00:00:00.000
{ "year": 2022, "sha1": "567c5bd3847ce89f7dc77544cfe46f8a4a27e3e0", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "be904445d9356f920ae8e6f44069cdd2ac6e7e2c", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
257019124
pes2o/s2orc
v3-fos-license
Iodine Deficiency and Iodine Prophylaxis: An Overview and Update

The thyroid gland requires iodine to synthesize thyroid hormones, and iodine deficiency results in the inadequate production of thyroxine and related thyroid, metabolic, developmental, and reproductive disorders. Iodine requirements are higher in infants, children, and during pregnancy and lactation than in adult men and non-pregnant women. Iodine is available in a wide range of foods and in water and is subject to almost complete gastric and duodenal absorption as the iodide ion. A healthy diet usually provides a daily iodine consumption not exceeding 50% of the recommended intake. Iodine supplementation is usually necessary to prevent iodine deficiency disorders (IDDs), especially in endemic areas. The community-based strategy of iodine fortification of salt eradicated IDDs, such as endemic goiter and cretinism, in countries providing adequate measures of iodine prophylaxis over several decades of the 20th century. Iodized salt is the cornerstone of iodine prophylaxis in endemic areas, and continuous monitoring of community iodine intake and its related clinical outcomes is essential. Despite the relevant improvement in clinical outcomes, subclinical iodine deficiency persists even in Western Europe, especially among girls and women, remaining an issue in certain physiological conditions, such as pregnancy and lactation, and in people consuming unbalanced vegetable-based or salt-restricted diets. Detailed strategies to implement iodine intake (supplementation) could be considered for specific population groups when iodized salt alone is insufficient to meet requirements.

Introduction

The thyroid gland requires iodine for the synthesis of thyroid hormones. Thyroxine (T4) is the main thyroid hormone directly synthesized by the thyroid gland. In contrast, triiodothyronine (T3), the physiologically active thyroid hormone, is produced either directly by the thyroid or after the peripheral deiodination of circulating T4 via selenium-containing deiodinases. Thyroid hormones regulate several physiologic processes, including growth, development, metabolism, and reproductive function [1]. Thyroid hormone synthesis is enhanced by the pituitary-derived thyroid-stimulating hormone (TSH), which, in turn, stimulates iodine trapping and oxidation by thyrocytes, thyroglobulin synthesis, iodothyronine coupling, and thyroid hormone release by the gland [2]. Thyroid avidity for and trapping of iodine are upregulated in iodine deficiency and suppressed in cases of overexposure. Iodine deficiency results in the inadequate production of T4. In response to decreased blood T4 levels, the pituitary gland increases TSH secretion to restore the circulating levels of T4.

Overview of Thyroid Hormone Synthesis

Thyroid hormone synthesis requires two proteins: thyroglobulin (Tg) and thyroid peroxidase (TPO). The synthesis of both occurs under TSH control. Tg is a 660-kDa glycoprotein secreted into the follicular lumen, whose tyrosyl residues serve as a substrate for iodination and hormone synthesis. TPO is a heme-containing enzyme expressed at the apical membrane of thyrocytes. TPO reduces the H₂O₂ generated by NADPH oxidase to create iodinating species and catalyzes the iodination of tyrosyl residues of growing Tg molecules [2]. Oxidized iodine is incorporated into tyrosyl residues to form monoiodotyrosine (MIT) and diiodotyrosine (DIT), which then couple to generate T3 and T4.
Iodothyronines remain part of Tg and are stored in the colloid of the follicular lumen for weeks or months, according to the individual requirement for thyroid hormones. The first step in thyroid hormone release is the endocytosis of colloidal droplets from the follicular lumen into the cytoplasm. Endocytic vesicles merge with lysosomes, where Tg is proteolyzed by endo- and exopeptidases. After proteolysis, thyroid hormones are released into the cytoplasm of thyrocytes, where specific carriers mediate the release of T4 into the circulation [2]. Iodine deficiency decreases the DIT-to-MIT and T4-to-T3 ratios, while iodine replacement increases them.

Iodine Metabolism and Function

Iodine is a non-metallic trace element essential for animals and humans. Iodine accounts for around two-thirds of the molecular weight of thyroid hormones. According to the official nomenclature system, the term iodide refers to the natural form of the free element (inorganic) in its ionic state (I−), while iodine includes both inorganic iodide and iodine covalently bound to tyrosine [2]. Iodine is ingested as an inorganic ion or an organically bound compound, but it is absorbed in the form of iodide after the reduction of iodine compounds in the stomach. The enteric absorption of iodide takes place in the stomach and duodenum, where the enteric isoform of the sodium iodide symporter (NIS) is largely expressed [3]. Iodine accumulates in the thyroid gland, which consequently contains the largest pool of intracellular iodine in the human body. However, the most significant amount of iodide is held in the extracellular fluid, where its concentration is around 10-15 µg/L. Circulating iodide undergoes renal clearance, while a small part is lost through the skin, intestinal secretions, or expired air. The mammary gland can also accumulate and secrete iodide, thus constituting an additional route of iodine clearance in lactating women [2]. The renal clearance of iodide is 30-50 mL per minute [4] but largely depends on the individual glomerular filtration rate, without any evidence of tubular secretion or active transport [5]. Reabsorption is partial and passive, and the renal clearance of iodide is influenced by the overall iodide status [6]. The thyroid clearance of iodine is around 10-20 mL/min, but it depends on chronic iodine consumption, ranging from 3 mL/min in cases of chronic overexposure to large amounts of iodine (more than 500 µg/day) to 100 mL/min in cases of severe iodine deficiency [7]. Iodide uptake in the thyroid gland occurs through a specific carrier, the NIS, which is expressed at the basolateral plasma membrane of thyrocytes. The NIS belongs to the so-called secondary active transporter family, as it uses the electrochemical gradient generated by the sodium-potassium ATPase to actively transport iodide against its gradient [8]. This mechanism is essential to maintain the intrathyroidal concentration of free iodide at 20-50 times the plasma concentration [9]. The expression of the NIS is enhanced by TSH [10]. There is also an intrinsic autoregulatory mechanism through which iodide transport and intrathyroidal metabolism fluctuate inversely with the glandular content of organic iodine. This mechanism, also known as the Wolff-Chaikoff effect, depends on the iodine saturation of the carriers and enzymes involved in iodine organification and thyroid hormone synthesis.
It is an intrathyroidal defensive mechanism that protects against thyroid hormone overproduction in cases of acute or intermittent iodine overexposure [11]. Once iodide accumulates in thyrocytes, the iodine transition from the cytoplasm to the follicular lumen is facilitated by the apical iodide transporter (AIT) [12] and by pendrin [13]. Iodide is also generated via the intrathyroidal deiodination of iodothyronines after thyroglobulin hydrolysis. Part of the circulating iodide pool undergoes re-organification into de novo synthesized iodothyronines, while the remnant spreads into the systemic circulation (iodide leakage). Iodine also originates from the peripheral degradation of thyroid hormones and enters the circulation, where it can be either recycled after subsequent thyroid uptake or finally excreted in the urine.

Natural and Artificial Sources Iodine occurs naturally as iodide and iodate in igneous rocks and soils. Iodine can be mobilized from the superficial layers of soil and rock, as iodide and iodate are highly soluble in the aqueous phase; they thus drain with rainwater into surface waters, seas, and oceans, eventually becoming available for animal and human consumption [14]. Free elemental iodine also sublimates into the atmosphere directly from soils and rocks because of its high volatility. When rainfall occurs, iodine precipitates onto the land surface, drains into the ground and rocks, and can then be assimilated by plants. Vegetables do not provide an adequate dietary iodine supply, and vegans are exposed to iodine deficiency even in iodine-sufficient areas [15]. Meat, milk, eggs, fish, and other animal-derived foods are the most important dietary sources of iodine in human nutrition. The estimated mean concentration of iodine in animal tissues other than the thyroid (i.e., skeletal muscle) is approximately 0.1 mg/kg [16]. However, the iodine content of animal tissues depends on the iodine supplementation of the background animal feed [16]. Seafood and saltwater fish are the most relevant iodine sources, as marine fauna and flora accumulate large amounts of soluble iodine from seawater. Freshwater and farmed fish contain less iodine than seawater species; thus, fish from rivers or lakes usually have a lower content of this element [17,18]. Iodine intake varies according to geographical area, but also among individuals within a specific geographic region, and individual consumption indeed differs from day to day. Iodine intake also largely depends on age [19][20][21][22]. In Germany, milk and dairy products provide around 35% of the daily requirement of iodine; the remaining two-thirds are supplied by meat and meat products, bread and cereals, and fish [19]. In Denmark, milk provides more than 30% of daily iodine intake [20], and a similar percentage has been reported in Swiss children [21]. In Dutch schoolchildren, seafood is a negligible source of iodine, as it is consumed only about once a month [22]. Thanks to alimentary policies allowing the addition of iodine to foods, processed foods containing significantly higher levels of iodine have become available in the last few decades and have been used to provide iodine prophylaxis to counteract, in nationally based programs, the clinical consequences of iodine deficiency. The iodization of salt for human food consumption is the worldwide strategy recommended for this purpose.
Iodine may also enter the body through the chronic consumption of, or exposure to, certain medications, such as amiodarone, povidone-iodine, iodine-based radiocontrast media, and multivitamin preparations. For example, 200 mg of amiodarone (the mean daily dose of maintenance treatment) contains 75 mg of iodine, exceeding the recommended daily requirement of the element five-hundred-fold. Iodine-based radiocontrast media contain grams of iodine.

Recommended Intake Daily iodine intake ranges from less than ten micrograms in extreme iodine deficiency areas to several hundred milligrams in patients taking iodine-containing medications. Generally, 150 µg of iodine is the recommended daily intake for adults and the elderly. In pregnant or lactating women, the iodine requirement increases to at least 200-250 µg daily [23]. The iodine requirement per kilogram of body weight is higher in newborns and children than in adults, corresponding to an absolute iodine intake requirement of 70-120 µg in children and 40 µg in newborns [24]. These recommendations are based on the daily thyroid hormone turnover in healthy individuals, the mean iodine intake associated with the lowest TSH values in the normal range, the smallest thyroid volume, and the lowest incidence of transient hypothyroidism in neonatal screening, and the mean levothyroxine requirement to restore euthyroidism in patients with thyroid agenesis or following thyroidectomy [23].

Iodine Deficiency A healthy diet in historically iodine-deficient regions provides around 50% of the daily iodine requirement of adults, which is insufficient to ensure an adequate supply of the micronutrient. This issue is particularly relevant in certain conditions, such as pregnancy and lactation, when the iodine requirement nearly doubles. Several biomarkers have been used to assess daily population iodine intake. As an example, the rate of urinary iodine excretion is a reliable measure of daily iodine intake, as 90% of circulating iodine is excreted in urine [2]. The most useful laboratory markers of iodine exposure in a community-based screening program are the 24 h urinary iodine concentration and the urinary iodine-to-creatinine ratio. However, spot urinary iodine concentration assessment is preferable to 24 h samples for population surveys, as the latter are impractical [25]. In iodine-sufficient regions, the median urinary iodine concentration is equal to or greater than 100 µg/L, corresponding to a daily intake of at least 130 µg (a back-of-the-envelope conversion is sketched below). According to the WHO, iodine deficiency disorders (IDDs), including goiter, hypothyroidism, intellectual impairment, reproductive impairment, decreased child survival, and varying degrees of growth and developmental abnormalities, affect more than one billion people around the world [26]. Iodized salt has significantly reduced the prevalence of iodine deficiency in many iodine-deficient countries worldwide [23,27]. However, almost one-third of the global population still lives in geographic areas where iodine deficiency and related disorders are endemic [28].
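The conversion from measured urinary iodine concentration (UIC) to estimated daily intake can be made concrete with a minimal sketch. It rests on the commonly used epidemiological approximation intake (µg/day) ≈ UIC (µg/L) × 0.0235 × body weight (kg), which presumes that roughly 92% of ingested iodine appears in urine; this formula, the 57 kg reference weight, and the example UIC values are assumptions for illustration, not figures taken from this paper.

```python
def estimated_iodine_intake(uic_ug_per_l: float, weight_kg: float) -> float:
    """Estimated daily iodine intake (µg/day) from spot urinary iodine.

    Uses the approximation intake = UIC * 0.0235 * body weight, which
    assumes ~92% urinary excretion of ingested iodine (an assumption,
    not a value reported in this paper).
    """
    return uic_ug_per_l * 0.0235 * weight_kg

if __name__ == "__main__":
    # A median UIC of 100 µg/L in a 57 kg adult maps to ~134 µg/day,
    # consistent with the ~130 µg/day figure quoted in the text.
    for uic in (50, 100, 150, 300):
        intake = estimated_iodine_intake(uic, weight_kg=57.0)
        status = "adequate" if intake >= 130 else "below recommended"
        print(f"UIC {uic:3d} µg/L -> ~{intake:5.1f} µg/day ({status})")
```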
Diffuse or nodular thyroid enlargement is the first and most common pathophysiological consequence of iodine deficiency. As mentioned above, iodine deficiency reduces the intrathyroidal synthesis of T4, with a consequent adaptive increase in serum TSH concentrations. If undiagnosed, TSH elevation sustained for months or years is sufficient to stimulate thyroid hyperplasia and enlargement. This adaptive response is usually adequate to preserve euthyroidism over several years when subclinical iodine deficiency occurs. "Endemic" goiter refers to an epidemiological condition in which more than 5% of school-aged children in a population are diagnosed with enlarged thyroid glands [29]. Moderate or severe iodine deficiency may result in primary hypothyroidism when TSH stimulation and thyroid enlargement are insufficient to ensure euthyroidism. Besides iodine deficiency, other agents are defined as goitrogenic in humans and may precipitate thyroid disorders when present concomitantly with iodine deficiency. These agents include thiocyanate, isothiocyanates, polyphenols, phthalate esters, polychlorinated and polybrominated biphenyls, organochlorines, polycyclic aromatic hydrocarbons, and lithium [30][31][32] (Table 1). For example, thiocyanate, isothiocyanate, perchlorate, and lithium inhibit iodide transport by the NIS; phenolic compounds and phthalates hamper the oxidation and organification of iodine; and lithium also affects the enzymatic proteolysis of Tg and blunts T4 release. Polybrominated biphenyls increase the rate of thyroid hormone metabolism [33]. Iodine fortification is a therapeutic strategy to prevent thyroid enlargement in patients chronically exposed to goitrogenic substances, particularly when iodine uptake and metabolism are affected (e.g., by perchlorate, lithium, and thiocyanate) [30]. Iodine deficiency in the early stages of life may significantly affect brain development. Thyroid hormones are necessary for the myelination of the central nervous system, which takes place before and shortly after birth. Primary hypothyroidism related to iodine deficiency has been found to negatively affect cognitive function, with potentially irreversible intellectual consequences [34,35]. Adequate maternal exposure to iodine in the early stages of pregnancy is essential for the proper intellectual development of the child, irrespective of hypothyroidism. In a longitudinal study from the UK, verbal intelligence quotient, reading accuracy, and comprehension were significantly lower in children of women with a urinary iodine-to-creatinine ratio of less than 150 µg/g than in children of women with a ratio equal to or greater than 150 µg/g [36]. Iodine deficiency has also been associated with increased miscarriage and stillbirth rates and with congenital disabilities, including congenital hypothyroidism in the offspring [37,38] (Table 2). Iodine-deficiency-related congenital hypothyroidism (endemic cretinism) comprises two classical clinical forms with specific phenotypes: neurological and myxedematous. The first is characterized by intellectual impairment, developmental delays, and various neurological defects, including underdevelopment of the cochlea leading to deafness, defects of the cerebral neocortex with intellectual impairment, and underdevelopment of the corpus striatum with motor disorders [39]. These patients do not exhibit signs of hypothyroidism, and the prevalence of goiter is similar to that observed in the general population. The hypothyroid (myxedematous) phenotype includes dwarfism with delayed bone and sexual maturation, intellectual impairment, and overt hypothyroidism. Thyroid development is critically involved, and patients usually exhibit a low thyroid volume or thyroid atrophy [40]. Neurological cretinism is related to thyroid hormone deficiency in the early stages of embryonal development, resulting from severe maternal iodine deficiency in a phase when thyroid development is still incomplete [41].
Myxedematous cretinism is associated with thyroid insufficiency during late pregnancy or early infancy [42]. Pure forms of myxedematous cretinism predominate in Central Africa, while in other endemic regions, such as New Guinea and parts of South America, only neurological cretinism is described. Mixed forms have been observed in India [43]. The specific geographic distribution of these different phenotypes suggests that factors other than iodine deficiency could be involved, including hereditary factors, a thiocyanate-rich diet [44], and a diet poor in selenium, zinc, copper, manganese, iron, and antioxidants (e.g., vitamin A) [45,46]. The prevalence of endemic goiter and other IDDs is extremely low in most European countries, yet subclinical iodine deficiency remains a widespread medical issue in Western and Central Europe [47], and iodine deficiency is therefore still a public health concern even in Europe. Iodine intake can be quite low in specific subgroups of the population, such as people consuming a vegan diet without iodized salt or supplements containing iodine and other micronutrients (such as selenium and zinc) [48] and those with unsatisfactory adherence to dietary recommendations or with an increased iodine requirement (e.g., during pregnancy and lactation) [49]. A recent systematic review of national surveys and subnational studies confirmed that in Europe some subjects have an iodine intake below recommended levels, especially among girls and women [50].

Iodine Excess In most regions, habitual diets provide a low-normal iodine supply and are more prone to induce iodine deficiency than excess [52]. Nevertheless, people living in some regions can be exposed to extraordinary iodine overload through their diet. Chronic iodine overload is usually well tolerated, as most people exposed to large amounts of iodine do not manifest any thyroid complaints [53]. However, chronic overexposure may increase the risk of subclinical hypothyroidism and possibly goiter due to persistent TSH overstimulation [54]. Acute iodine poisoning is a rare emergency occurring after the ingestion of grams of iodide. Common clinical manifestations include a burning mouth, sore throat, fever, nausea, vomiting, diarrhea, and, in severe cases, coma [23]. Acute iodide excess abruptly inhibits thyroid hormone synthesis through the Wolff-Chaikoff effect described above. The inhibition is usually transient and reversible, but it can be persistent in certain conditions, such as chronic autoimmune thyroiditis [23].

Iodine Prophylaxis A thousand years have passed since the first medical descriptions of a relevant reduction in goiter size in patients consuming significant amounts of seaweed and sea sponges, typical products of Asian coastal regions. Iodine itself was discovered incidentally in 1811 by Courtois and was characterized and described as a new element two years later by Gay-Lussac [55]. Jean-Francois Coindet, a Swiss physician born and practicing in Geneva, was the first to speculate that the historically described decrease in goiter size after the ingestion of seaweed was attributable to its high iodine content [56]. Hence, he created the first "therapeutic" solution of iodine by dissolving 48 grains (3.1 g) of iodine in a volumetric ounce (around 28 mL) of distilled alcohol. Based on empiric and anecdotal case series, Coindet provided the first evidence of the effectiveness of iodine fortification in reducing goiter size in goitrous patients.
News of Coindet's experience rapidly spread through Europe, prompting criticism, especially due to safety concerns about overexposure to iodine; this delayed the widespread use of fortification as a basic treatment of multinodular goiter. Years later, more detailed studies were carried out by David Marine, who performed a clinical trial of an iodine prophylaxis program for schoolgirls in 1917 [56]. He found that iodine prophylaxis prevented goiter development in children with an initially normal thyroid size and induced a considerable decrease in thyroid size in around two-thirds of schoolgirls with an originally enlarged thyroid [56]. In the United States, iodine prophylaxis was started in 1924 in Michigan, which belongs to the so-called goiter belt, a group of states in which endemic goiter was highly prevalent. For the first time, fortified (iodized) salt was employed for administering iodine prophylaxis; the iodine concentration was 100 mg per kg of salt, resulting in an estimated average intake of 500 µg of iodine daily, since, at that time, mean salt consumption was approximately 6.5 g per day (see the sketch at the end of this passage). Iodized salt consumption increased remarkably from the 1950s onward; thereafter, the consumption of iodized salt as the main salt for household use has remained stable at around 50% [57]. The US FDA recommends fortifying iodized salt within a range of 46-76 mg iodide/kg [58]. Iodine prophylaxis programs in Europe began in the 1920s in regions recognized as endemic areas for iodine deficiency, such as Switzerland (1922). Iodized salt consumption was initially voluntary, and the iodine content of fortified salt was usually insufficient to prevent or treat endemic goiter, especially in moderately endemic areas. The iodine content of fortified salt differs considerably across Europe, ranging from 10 mg iodine/kg in Austria to 60 mg iodine/kg in Spain; the difference reflects the severity of iodine deficiency, dietary policies, and information campaigns to promote iodine prophylaxis [59]. Iodized salt manufacturing was formally allowed by law in Italy in 1972. Iodine prophylaxis started selectively in endemic regions and, five years later, was extended to the whole country. The iodine content of fortified salt was 15 mg/kg, and iodized salt consumption was on a voluntary basis. In 1994, epidemiological data were collected and analyzed in Pescopagano, a small village in Basilicata. Analysis of about 1400 citizens living with subclinical iodine deficiency, who had never undergone iodine prophylaxis, showed that iodine deficiency (mean urinary iodine excretion of 55 µg/L) led to a progressive increase in goiter prevalence with aging, a high frequency of autonomously functioning thyroid nodules and other forms of hyperthyroidism, and thyroid autoimmunity [60]. Other epidemiological reports confirmed a direct relationship between the severity of iodine deficiency and the prevalence of anatomic and functional thyroid disorders and intellectual impairment [61]. Ten years of iodine prophylaxis corrected iodine deficiency and lessened the risk of endemic goiter in schoolchildren, suggesting that the widespread use of iodized foods would be desirable to reduce IDDs [62]. At that time, a new Ministry Decree (1991) established that the iodine content of fortified salt should be increased to 30 mg/kg, but iodine fortification remained on a voluntary basis.
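The salt-fortification arithmetic used throughout this section reduces to a one-line formula: since 1 mg of iodine per kg of salt delivers 1 µg of iodine per g of salt consumed, nominal intake (µg/day) = fortification (mg/kg) × salt consumption (g/day). The sketch below applies it to the figures quoted above; the retention factor, accounting for iodine losses during storage and cooking, and the 5 g/day salt scenario are assumptions (the 500 µg estimate for Michigan's 100 mg/kg salt at 6.5 g/day implies roughly 20-25% losses).

```python
def iodine_from_salt(fortification_mg_per_kg: float,
                     salt_g_per_day: float,
                     retention: float = 1.0) -> float:
    """Daily iodine intake (µg/day) from iodized salt.

    1 mg iodine/kg salt == 1 µg iodine/g salt, so the product of the
    fortification level and daily salt consumption gives µg/day.
    `retention` models storage/cooking losses (assumed, not measured).
    """
    return fortification_mg_per_kg * salt_g_per_day * retention

# Michigan, 1924: 100 mg/kg at ~6.5 g salt/day, ~23% losses -> ~500 µg/day
print(iodine_from_salt(100, 6.5, retention=0.77))
# Italy, 1972 (15 mg/kg) and 1991 (30 mg/kg), assuming 5 g salt/day
print(iodine_from_salt(15, 5.0))   # 75 µg/day
print(iodine_from_salt(30, 5.0))   # 150 µg/day
```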
Law 55, promulgated in late March 2005, reorganized and regulated iodine prophylaxis to reduce the risks related to iodine deficiency. Strict monitoring of the effect of iodine prophylaxis and information campaigns to promote iodine consumption were then carried out. In 2009, the Ministry of Health instituted the National Observatory for iodine prophylaxis monitoring at the National Institute of Health to collect and analyze the effect of iodine prophylaxis over time. Salt market reports before the legislation found that iodized salt consumption was significantly lower than 50% of the salt consumed. Iodine sufficiency was found in only three Italian regions (Liguria, Tuscany, and Sicily), while six of nine regions (Liguria, Emilia-Romagna, Marche, Tuscany, Calabria, and Sicily) were areas with endemic goiter [63,64]. In collaboration with the regional observatories, post-law surveillance data were collected by the National Observatory for Monitoring Iodine Prophylaxis and analyzed from 2015 to 2019. Salt market reports showed a marked increase in iodized salt consumption (65% of the whole pool of commercialized salt). National household consumption rose to 63%, ranging from 50% (Sicily) to over 75% (Veneto and Tuscany), while the national percentage of school dining halls using iodized salt was 78%, with regional differences ranging from 65% in Sardinia to 97% in Sicily [64]. The mean urinary iodine concentration was 124 µg/L, indicating the achievement of an adequate iodine intake without any differences between rural and urban areas. A sufficient iodine intake was reached in Veneto, Emilia-Romagna, Umbria, Marche, Lazio, and Calabria, and iodine deficiency was resolved in Tuscany, Liguria, and Sicily [64][65][66][67]. In seven of the nine examined regions (Liguria, Sicily, Tuscany, Emilia-Romagna, Umbria, Marche, and Lazio), the prevalence of goiter diagnosed in schoolchildren was lower than 5%, suggesting a relevant decrease in the number of areas endemic for goiter [63]. The frequency of neonatal TSH > 5 mUI/L, an indicator of insufficient exposure to iodine during pregnancy, decreased from 6.1% in 2010 to 4.9% in 2018. Despite these improvements, the safe threshold of 3% was still far off, suggesting a need for additional supplementation supported by healthcare providers, including obstetricians, gynecologists, and pediatricians [68]. Since 1990, universal iodine fortification programs have driven remarkable progress worldwide, with a growing number of countries adopting mandatory salt iodization at levels ranging from 15 to 40 mg/kg. The number of countries that achieved adequate (median urinary iodine concentration of 100-199 µg/L) or more than adequate (median urinary iodine concentration of 200-299 µg/L) iodine intake increased remarkably in the following decades [69]. It has been estimated that 88% of the global population used iodized salt in 2018, with the highest consumption in East Asia and the Pacific (92%) and the lowest coverage in West and Central Africa (78%) [70]. According to the 2021 Global Scorecard of Iodine Nutrition, based on school-aged children, 146 countries have reached adequate iodine exposure (defined as a median urinary iodine concentration of 100-300 µg/L), while 26 remain endemic for mild-to-moderate iodine deficiency [71].

Risk Related to Iodine Prophylaxis and Potential Iodine Overexposure Iodine fortification programs in iodine-deficient regions recommend a daily iodine intake of 150-200 µg.
Iodine fortification has been associated with an increased incidence of iodine-induced hyperthyroidism, especially in older people with a background multinodular goiter. Subclinical iodine deficiency generates a chronic stimulation leading to follicular hyperplasia, thyroid enlargement, and multinodular goiter. Over the natural history of goiter, one or more hyperplastic nodules may acquire autonomous activity, becoming unresponsive to normal thyroid regulation. When iodine fortification occurs, iodine uptake and thyroid hormone synthesis are significantly enhanced, especially in the autonomous nodules of the thyroid gland, resulting in iodine-induced hyperthyroidism [72]. The effect is transient and usually disappears after a few weeks or months, but it can have adverse consequences in predisposed individuals (e.g., in patients at high risk of atrial fibrillation) [73]. Epidemiological data indicate that a higher incidence of autoimmune thyroid diseases is observed in people with a sufficient dietary iodine intake than in those with subclinical iodine deficiency [74]. On the other hand, chronic exposure to iodine in previously iodine-deficient patients with autoimmune thyroid disease may increase the risk of hypothyroidism and goiter, particularly in the short term [74]. It has been hypothesized that iodine exposure may trigger thyroid autoimmunity by exacerbating the immunogenicity of intrathyroidal iodized proteins, especially thyroglobulin [74]. Although some studies have suggested that iodine prophylaxis may increase the incidence of autoimmune thyroid diseases, other long-term trials have not confirmed an increased risk of hypothyroidism or thyroid autoimmunity with iodine prophylaxis [74]. Iodine deficiency is associated with a higher risk of follicular thyroid cancer, while iodine fortification reduces it. However, in countries previously defined as iodine-deficient regions, iodine prophylaxis has increased the prevalence of papillary thyroid cancer [73]. Moreover, a positive relationship between daily iodine intake and occult papillary thyroid cancer has also been described; data from autopsy registries suggest that the prevalence of occult thyroid cancer was particularly high in Finland (36%), where iodine exposure has been substantially optimal since the 1980s [74]. According to the WHO classification, all follicular thyroid cancers presenting a papillary component are considered papillary thyroid cancers, and this contributed to an increase in the ratio of papillary to follicular thyroid cancer in many countries after the classification change [75]. On the other hand, it should be borne in mind that most occult cancers were microcarcinomas (mostly <5 mm). This epidemiological phenomenon does not raise warnings because, first, iodine deficiency is a risk factor for follicular thyroid cancer, and, second, the prognosis of papillary cancer is usually slightly better than that described for iodine-deficiency-related follicular thyroid cancer.

Conclusions Thyroid hormones play a central role in regulating several functions in the human body, and a sufficient iodine intake is essential to maintain thyroid homeostasis. Iodine deficiency is an epidemiological issue not only in low- or middle-income countries.
Even in high-income countries, where iodine fortification has gained general acceptance and diffusion and where IDD epidemiology has improved significantly over time, dietary habits (such as a vegan diet or a low consumption of iodine-rich foods), together with the lack or discontinuation of population-based monitoring of iodine intake (e.g., screening of iodine exposure), could be responsible for subclinical iodine deficiency and other IDDs. The iodization of salt for human consumption remains the recommended strategy for adequate iodine exposure. Despite some concerns related to the risks of high iodine exposure (hyperthyroidism, thyroid autoimmunity, and a relative increase in the risk of papillary thyroid cancer), the benefits outweigh the risks. Specific recommendations and strategies to increase iodine intake (supplementation) are needed for categories of people in whom iodized salt alone appears insufficient to provide adequate requirements.
2023-02-19T16:13:46.602Z
2023-02-01T00:00:00.000
{ "year": 2023, "sha1": "448a4bd7e525495d234a4d5a926501e7076e1657", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "320ddce2c49a17a8e9fe2d17bbf4561736fb8b5f", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
229225669
pes2o/s2orc
v3-fos-license
Autistic Disorder Analysis Among Children and Adults Autism is one of the psychological and heterogeneous developmental disorders, arising from abnormal wiring between different brain regions. 1 It is a neuropsychiatric syndrome, derived from the Greek term autos, in which an individual keeps himself/herself isolated from nearby interactions. The CDC estimates that the incidence rate of autism was 1 in 110 children in 2006 and 1 in 88 births by 2012. 1

INTRODUCTION Autism is one of the psychological and heterogeneous developmental disorders, arising from abnormal wiring between different brain regions. 1 It is a neuropsychiatric syndrome, derived from the Greek term autos, in which an individual keeps himself/herself isolated from nearby interactions. The CDC estimates that the incidence rate of autism was 1 in 110 children in 2006 and 1 in 88 births by 2012. 1 The CDC estimates its present prevalence to be 1 out of 68 births, or 14.7 per 1000 children. 2 In India, 1 out of 68 children is diagnosed with ASD. At least 70 million people worldwide have autism, of whom 10 million are Indians. Owing to mutations in X-chromosome genes such as PTCHD1 (patched domain-containing 1), boys are five times more vulnerable to this disorder than girls. In children over 1.5-2 years of age, clinical symptoms are noted owing to abnormalities in neuronal connections, both computational and physical. These may manifest as troubled sleep, depression, reduced sleep duration, anxiety, and an increased delay in sleep onset (Belmonte et al., 2004). Researchers describe aggression, hyperactivity, and stereotyped behaviours as prevalent in autistic men, while autistic women demonstrate anxiety, depression, and greater intellectual deficiency. 3 Other characteristics include macrocephaly, in which head circumference growth accelerates in the first 2 years and then slows during subsequent adolescence; repetitive behaviour; developmental delay; behavioural impairment; and a lack of interaction and communication. Early behavioural traits in infants include delayed babbling and inappropriate sleep and eating practices.

FACTORS INSTIGATING AUTISM Different trials and experiments have been performed and analyzed to identify likely causes of autism. Autism is a neurobiological abnormality that affects the nerve fibres of the corpus callosum, which connects the two hemispheres (left and right) of the brain and plays a major role in the transmission of sensory, motor, and cognitive information. A brain region's inherent wiring potential is consistent with the reduced wiring costs associated with small geodesic distances; the brain's complex surface is studied by computing geodesic distances. It has been noted that the intrinsic connectivity of the brain differs between ASD subjects and typical subjects. In ASD, the functional connectivity of frontal and temporal regions with other brain regions decreases. Over-connectivity or under-connectivity occurs due to a less specialized autistic brain, resulting in language impairment and a reduced learning rate.

DIAGNOSIS Behavioural therapies can be given to people diagnosed at early stages. Children with autism focus more on the face region, particularly the mouth, than on the eye region, and have weak judgmental capacity (a toy illustration of such a gaze-ratio analysis is sketched below).
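As a purely illustrative sketch of the mouth-versus-eyes fixation bias just described (not an analysis from the cited studies), the fraction of gaze samples falling in each area of interest (AOI) can be computed from labelled eye-tracking data; the AOI labels and sample counts below are hypothetical.

```python
from collections import Counter

def aoi_fixation_fractions(aoi_labels):
    """Fraction of gaze samples per area of interest (AOI)."""
    counts = Counter(aoi_labels)
    total = sum(counts.values())
    return {aoi: n / total for aoi, n in counts.items()}

# Hypothetical gaze samples for one child (one AOI label per sample)
samples = ["mouth"] * 55 + ["eyes"] * 20 + ["other"] * 25
fractions = aoi_fixation_fractions(samples)
print(fractions)
# A mouth/eyes ratio above 1 would match the bias described in the text
print("mouth/eyes ratio:", fractions["mouth"] / fractions["eyes"])
```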
Gaze and position detection can assist in autism diagnosis through a virtual-reality-based intervention and expression system that tracks eye gaze and physiological signals while individuals with Autism Spectrum Disorder and typically developing individuals perform emotion-identification tasks. Differences between ASD and typically developing participants can be quantified using eye-monitoring indices and task-output information. Quantitative differences between autistic and typical individuals in facial discrimination have also been suggested, since autistic people show impaired facial identification and eye discrimination. 5 Takarae et al. (2014) used functional magnetic resonance imaging to track the neural correlates of motion processing. 6 Passive viewing and tracking of visual motion were examined with the aim of helping people with ASD, and this study also suggested marked abnormalities in visual processing in autism. Chromosomal microarray analysis, exome sequencing, and genetic testing are suitable tools for identifying de novo mutations and ASD risk genes. Dyslexia is a learning disorder in which children find reading and learning difficult because of problems in identifying sounds and matching them with letters and words. This disorder affects the brain areas that process language. These children can be successful if they receive special attention from family members with active participation in various activities. No general cure is available, but early intervention and specialized assessment yield the best outcomes.

AUTISM INFORMATION GATHERING UNIT This includes four modules: the biosignal sensor unit, the video processing unit, the central processing unit (CU), and the assessment unit. The main function of the biosignal sensor unit is the acquisition of EEG and ECG signals from the child using wearable, wireless devices. Child behaviour is recorded using the video processing unit in both still and moving conditions. All the collected data are processed by the central processing unit to evaluate the physiological and behavioural parameters linked to the behaviours of the ASD child.

Electroencephalogram (EEG) Signal Recording and Analysis Acquiring the electroencephalogram signal, digitizing it, and transmitting it with reduced environmental noise is done using the Enobio wireless device (STARLAB, Barcelona, Spain). This device continuously records EEG signals over 32 channels, placed according to the 10/10 standard, plus two references, at 500 Hz with 32-bit accuracy. 6 The Enobio can be used with either gel or dry electrodes. Gel electrodes provide good contact but may be uncomfortable for children, whereas dry electrodes can be chosen for easier setup and better comfort for ASD children, resulting in good performance. The EEGLAB toolbox can be used to pre-process the EEG signals, removing noise and artefacts. After pre-processing, QEEG analysis can be done using the Matlab toolbox. The Power Spectral Density (PSD) of the EEG signal is computed in the frequency domain, so a conversion from the time domain has to be performed. For each electrode, the absolute power as well as the relative power of each band has to be identified; relative powers are more reliable than absolute powers because they are less affected by artefacts. The BSI (Brain Symmetry Index) is computed within each EEG band, considering the total energy in the left and right hemisphere regions.
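A minimal sketch of the QEEG quantities named above (PSD via Welch's method, relative band power, and a simplified per-band symmetry index) is given below. It is an illustration under stated assumptions, not the EEGLAB/Matlab pipeline used in the study: the band edges, the 1-30 Hz total-power range, and the |R - L| / (R + L) symmetry formula are simplifying choices, and the demo signals are synthetic. The band-wise coherence discussed next could be computed analogously with scipy.signal.coherence.

```python
import numpy as np
from scipy.signal import welch

FS = 500  # Hz, matching the Enobio sampling rate quoted above
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs=FS):
    """Absolute and relative spectral power per EEG band (Welch PSD)."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    broad = (freqs >= 1) & (freqs <= 30)
    total = np.trapz(psd[broad], freqs[broad])
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        absolute = np.trapz(psd[mask], freqs[mask])
        out[name] = {"absolute": absolute, "relative": absolute / total}
    return out

def symmetry_index(left, right, fs=FS):
    """Simplified per-band symmetry: |R - L| / (R + L); 0 = symmetric."""
    pl, pr = band_powers(left, fs), band_powers(right, fs)
    return {b: abs(pr[b]["absolute"] - pl[b]["absolute"])
               / (pr[b]["absolute"] + pl[b]["absolute"]) for b in BANDS}

# Synthetic demo: 10 s of alpha-dominant activity on two "channels"
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
left = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
right = 0.8 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(symmetry_index(left, right))
```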
Coherence estimates the correlation among the signals collected from the scalp points and is computed for each frequency band.

ECG Recording and Processing ECG recording is performed using a wearable device called the Shimmer, a wireless base module powered by a 3.6 V rechargeable battery that allows up to 7 h of continuous monitoring per charge. It is fitted with Polar™ or Adidas™ cardio-fitness chest straps, providing lightweight, long-term monitoring with improved ergonomics. These signals are processed using different filtering techniques aimed at removing ECG artefacts and interference. Significant features that identify the engagement of the child can be extracted from the signal. The heart rate (the number of heart beats in a specific time, expressed in beats per minute; HR), the root mean square of the successive differences (an indicator of vagal activity; RMSSD), and the respiratory sinus arrhythmia (periodic fluctuations in HR; RSA) are the measures to be identified (a toy computation of HR and RMSSD from RR intervals is sketched further below).

Video Mobile Unit The video mobile unit has to be set up with two environmental cameras recording at a frame rate of 60 fps and a resolution of 640 × 480. The video cameras are synchronized to contextualize the neurophysiological parameters with the behaviour of the child. A high-quality camera has to be used to capture video in order to identify precisely the eye and mouth expressions and the gait. A video analysis toolbox has to be developed to label the children's behaviour from the recorded sessions. Some instantaneous features, such as gesture, gait, and mouth and eye variations, have to be annotated, as these represent the state of the ASD child. Manual annotation by the referring therapist has to be performed to identify the behaviour and the state of the autistic child. A behaviour always corresponds to an action with a start and end time, whereas a state refers to the behavioural states, i.e., engaged or disengaged. The video analysis tool generates an XML file with information about the annotated events. The annotation file for every session helps to identify significant behaviour for exploratory analysis with the EEG and ECG signals.

Central Unit The central unit is used to monitor the neurophysiological signals and the expressions of the child. The data recorded from the sensors and the video recorder are sent to this unit via Bluetooth. This unit notifies all the recording units of the session start to initialize each new session. The collected data can be processed offline and uploaded to the cloud for further research use. Similarly, wearables can also be used for data acquisition of the neurophysiological parameters for treatment. Lucia et al. 7 acquired signals from autistic children during their therapy classes; EEG features were extracted from the EEG signals, and heart rate variability was identified. This study helps to monitor treatment effects, for which naturalistic paradigms can also be used. Juan et al. 8 identified the behavioural changes and issues of people with ASD using a smartwatch; these authors performed nine-day experiments with two individuals to show their behavioural changes during different emotions. Sucksmith et al. 9 studied the empathy and emotions of parents of children affected by ASC; the study shows that fathers have a lower empathy quotient than mothers. Similarly, the anxiety level of autistic children has been analyzed to understand the physiological changes 10 .
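Returning to the ECG features defined earlier, the sketch below computes HR and RMSSD from a series of RR intervals. The RR values are synthetic placeholders; a real pipeline would first detect R peaks on the filtered ECG, and RSA would additionally require a respiration estimate, so it is omitted here.

```python
import numpy as np

def heart_rate_bpm(rr_ms):
    """Mean heart rate (beats per minute) from RR intervals in ms."""
    return 60_000.0 / np.mean(rr_ms)

def rmssd(rr_ms):
    """Root mean square of successive RR differences (vagal indicator)."""
    diffs = np.diff(rr_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

rr = np.array([812.0, 798.0, 825.0, 840.0, 805.0, 790.0, 815.0])  # ms
print(f"HR    ~ {heart_rate_bpm(rr):.1f} bpm")
print(f"RMSSD ~ {rmssd(rr):.1f} ms")
```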
Many supervised algorithms have been proposed to analyze physiological signals. One study proposed a Kalman-filtering approach for identifying physiological factors to evaluate heart rate variability; this scheme achieved 99% sensitivity and 92% specificity. A robust machine learning algorithm 11,12 was proposed by Yuan 13 using semi-structured and structured digital forms; the data were preprocessed and classified, achieving an accuracy of 83.4% and a recall of 91.1%. Mohd et al. 14 studied the EMG signals of ASD children by acquiring signals from the lower-limb muscles during walking; these signals were also recorded for typically developing children, and the differences between the groups were examined with the habilitation problem in mind.

CONCLUSION Monitoring neurophysiological signals and neuroimaging helps researchers to identify the relations between major neurodevelopmental disorders and behavioural changes. Specifically, the typical brain patterns that lead to Autism Spectrum Disorders (ASD) can be analyzed effectively in children using these signals. Early identification of Autism Spectrum Disorder helps parents to decide on the therapies and treatments required to improve their children's day-to-day activities. Different analyses performed by various researchers using machine learning algorithms on EEG and ECG signals have been presented, together with the different techniques and devices used for acquiring the data in various experiments.

ACKNOWLEDGEMENT The authors acknowledge the immense help received from the scholars whose articles are cited and included in the references to this manuscript. The authors are also grateful to the authors/editors/publishers of all those articles, journals, and books from which the literature for this article has been reviewed and discussed.
2020-11-19T09:15:35.863Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "9109b720f1560421e7939055d952031962da3cf2", "oa_license": null, "oa_url": "https://doi.org/10.31782/ijcrr.2020.12217", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "5f1a702e2c7878226356acff6baaa16f10ba31fc", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
219523891
pes2o/s2orc
v3-fos-license
A pilot clinical phase II trial MemSID: Acute and durable changes of red blood cells of sickle cell disease patients on memantine treatment Abstract An increase in the abundance and activity of N-methyl D-aspartate receptors (NMDARs) was previously reported for red blood cells (RBCs) of sickle cell disease (SCD) patients. Increased Ca2+ uptake through the receptor supported dehydration and RBC damage. In the pilot phase IIa-b clinical trial MemSID, memantine, a blocker of the NMDAR, was used to treat four patients for 12 months. Two more patients who enrolled in the study did not finish it: one had a psychotic event following an involuntary overdose of the drug, whereas the other had vertigo and could not comply with the trial visit schedule. Acute and durable responses of the RBCs of SCD patients to daily oral administration of memantine were monitored. Markers of RBC turnover, changes in cell density, and alterations in ion handling and RBC morphology were assessed. Acute transient shifts in intracellular Ca2+, volume, and density, and a reduction in plasma lactate dehydrogenase activity, were observed already within the first month of treatment. Durable effects of memantine included (a) a decrease in reticulocyte counts, (b) a reduction in reticulocyte hemoglobinization, (c) advanced membrane maturation and its stabilization, as follows from the reduction in the number of NMDARs per cell and the reduction in hemolysis, and (d) rehydration and a decrease in K+ leakage from patients' RBCs. Memantine therapy resulted in a reduction in the number of cells with sickle morphology that was sustained for at least 2 months after therapy was stopped, indicating an improvement in RBC longevity. serum albumin (BSA). RBCs were re-suspended in the same solution to Hb levels of 90-100 g/L and incubated in a thermoshaker at 37 °C under continuous shaking for 6 hours. Each hour, the extracellular K+ concentration was measured, and the kinetics of its accumulation were plotted against time and normalised per Hb content (a minimal fitting sketch is given below).

Flow cytometry for detection of CD71+ RBCs and intracellular free Ca2+ content The number of RBCs positive for CD71 (reticulocytes) was assessed using a Gallios flow cytometer (Becton Dickinson AG, Allschwil, Switzerland) for 100 000 cells at a medium flow rate.

RBC density measured using separation on Percoll gradients RBCs were fractionated into low-, medium-, and high-density fractions on a Percoll density gradient as described elsewhere 22 . One ml of whole blood was layered on top of 13 ml of a 90% isotonic Percoll solution and centrifuged at 48 000×g at 34-36 °C for 15 min to separate the cells into fractions of low (L), medium (M), and high (H) density. An image of the distribution of RBCs within the gradient was taken against a homogeneous light source and analyzed using ImageJ software (see Suppl Fig 2). In addition, the cells forming the L, M, and H fractions were collected, and the percentage of cells within each fraction was calculated using capillary hematocrit measurements for whole blood and for each fraction.

Assessment of the number of NMDARs per cell using the [ 3 H]MK-801 binding assay The radiolabeled NMDAR antagonist ([ 3 H]MK-801) binding assay was used to detect the number of receptor copies in RBCs forming the M density fraction, as described elsewhere 25 . Briefly, RBCs were washed with a plasma-like solution and resuspended to a hematocrit of 40-50%.
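Returning to the K+ flux measurement described at the start of these methods, the leak rate can be quantified as the slope of a linear fit of extracellular K+ versus time, normalised per Hb content. The sketch below shows the arithmetic only; the K+ readings and Hb value are synthetic placeholders, not trial data.

```python
import numpy as np

hours = np.arange(0, 7, dtype=float)  # hourly sampling over 6 h
k_mmol_l = np.array([1.0, 1.6, 2.1, 2.8, 3.3, 3.9, 4.4])  # extracellular K+
hb_g_l = 95.0  # Hb content of the RBC suspension (90-100 g/L in the text)

slope, intercept = np.polyfit(hours, k_mmol_l, 1)  # mmol/L per hour
flux_per_hb = slope / hb_g_l  # mmol/h per g of Hb (per litre of suspension)
print(f"K+ leak rate: {slope:.2f} mmol/L/h "
      f"({flux_per_hb * 1000:.2f} µmol/h per g Hb)")
```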
2D projected images were then binarized by first setting a threshold on the image pixel intensity values, followed by a typical binarization routine. The identified objects (sRBCs) were set to pixel value 1 (white) and the background to pixel value 0 (black). The coordinates of the cell borders were defined as the positions of the interface between the objects and the background. The number of pixels inside the cell border was defined as the cell projected area, A. From the covariance matrix obtained by fitting a multivariate normal distribution to the cell border coordinates, the major and minor axes, a and b respectively, of an ellipse fitted to the cell contour, and its eccentricity ε, were obtained. "sickle" cells were classified as such according to the description by Corbett et al. (1995) of cells bearing single or multicentral HbS crystals; 3) "others" included additional shapes (e.g., teardrop cells) and deformed cells without a clear typical shape observed in sickle cell disease. The classification procedure was repeated three times with an error of 2%. Eccentricity was used as a reference value to distinguish discocytes from elliptocytes: cells with ε ≥ 0.7 were classified as elliptocytes and therefore included in the group "others" (a minimal sketch of the area/eccentricity computation is given below). Changes in the prevalence of these shape groups after the memantine treatment are shown in Supplementary Figure 3C.

Supplementary Table 1 Effect of memantine therapy on the white blood cell counts in SCD patients. Averages of the values at the start of the MemSID trial and the up-dosing phase (base) are compared to the average of the last 3 months of treatment (20 mg/day) and the down-dosing phase (end). Stars denote significance (p < 0.05) between the "base" and "end" datasets. (B) Automated and manual analysis of fixed RBC shapes. Fixed RBC shape-to-projected-area distributions based on the manual classification and eccentricity values before and after the treatment and by the end of the follow-up phase. As mentioned in the extended methods section, the projected area was defined as the number of pixels inside the cell border using a Matlab R2017b software routine and re-calculated to µm2 thereafter. (C) Impact of memantine treatment on the solidity and eccentricity of RBCs of individual patients compared to three healthy donors (S1-S3, grey color). Samples were collected at prescreening (pre, blue color), by the end of the down-dosing phase (end, black color), and at the end of the follow-up phase (post, red color). Paired t-tests were used to assess the significance of changes for individual patients with memantine treatment and its interruption. Supplementary literature Corbett
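To make the shape analysis above concrete, the sketch below computes the projected area (pixel count) and the eccentricity of a binarized cell from the covariance of its pixel coordinates, which is equivalent to fitting an ellipse to the cell. It is a minimal illustration: the demo mask is a synthetic ellipse, and the thresholding and border extraction applied to real images are omitted.

```python
import numpy as np

def area_and_eccentricity(mask):
    """Projected area (px) and eccentricity of a binary cell mask."""
    ys, xs = np.nonzero(mask)
    area = xs.size  # number of pixels inside the cell border
    cov = np.cov(np.vstack([xs, ys]))  # 2x2 covariance of coordinates
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    a, b = np.sqrt(evals)  # proportional to the ellipse semi-axes
    return area, float(np.sqrt(1.0 - (b / a) ** 2))

# Synthetic elongated cell: semi-axes 30 and 12 px -> eccentricity ~0.92,
# above the 0.7 threshold used in the text to flag elliptocytes
yy, xx = np.mgrid[-40:41, -40:41]
mask = (xx / 30.0) ** 2 + (yy / 12.0) ** 2 <= 1.0
area, ecc = area_and_eccentricity(mask)
print(f"area = {area} px, eccentricity = {ecc:.2f}")
```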
2020-05-21T09:10:53.935Z
2020-05-20T00:00:00.000
{ "year": 2020, "sha1": "dcabd425ce0bde65fcea25daa1c83edae4c2d0af", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jha2.11", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c8c63bf833ac1fe9f4095460d660a100a3f90adc", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
267847438
pes2o/s2orc
v3-fos-license
Association between Brain-Derived Neurotrophic Factor and Lipid Profiles in Acute Ischemic Stroke Patients Ischemic stroke, the most prevalent form of stroke, leads to neurological impairment due to cerebral ischemia and accounts for an estimated 55-90% of all strokes. Brain-derived neurotrophic factor (BDNF) plays a crucial role in the central nervous system and regulates cardiometabolic risk factors, including lipids. This single-center study aimed to explore the relationship between lipid profiles and BDNF levels in 90 patients who had experienced acute ischemic stroke (AIS) for the first time. The results show that the high BDNF group (≥3.227 ng/mL) had significantly higher HbA1C and TG levels; ratios of TC/HDL-C, LDL-C/HDL-C, and TG/HDL-C; and percentage of hyperlipidemia (60%), as well as lower levels of HDL-C, with an OR of 1.903 (95% CI: 1.187-3.051) for TG/HDL-C, 1.975 (95% CI: 1.188-3.284) for TC/HDL-C, and 2.032 (95% CI: 1.113-3.711) for LDL-C/HDL-C. Plasma BDNF levels were found to be significantly positively correlated with TG and negatively correlated with HDL-C, with OR values of 1.017 (95% CI: 1.003-1.030) and 0.926 (95% CI: 0.876-0.978), respectively. TC/HDL-C, TG/HDL-C, and LDL-C/HDL-C ratios are associated with BDNF levels in AIS patients. The results also indicate that, in AIS patients, higher BDNF levels are associated with lower HDL and higher TG concentrations.

Introduction Ischemic strokes account for an estimated 55% to 90% of all strokes, while hemorrhagic strokes account for 12% to 35% [1]. The primary cause of IS is cerebral ischemia, which leads to neurological impairment, and it is the most prevalent subtype of stroke [1,2]. Dyslipidemia is a major risk factor for cardiovascular and cerebrovascular diseases and is characterized by abnormal levels of blood lipids, including elevated total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), and triglycerides (TG) and reduced high-density lipoprotein cholesterol (HDL-C) [3]. The National Cholesterol Education Program Adult Treatment Panel defines an HDL-C level under 40 mg/dL or a TC level over 200 mg/dL as indicative of a high risk for ischemic heart disease [4]. Research in Northern Manhattan, New York, encompassing a diverse population, found a link between HDL-C levels and stroke risk, revealing that higher HDL-C levels, particularly in individuals aged 75 and above, were protective against IS across all racial groups [4][5][6]. Dyslipidemia is not only associated with atherosclerosis and stroke [7,8] but also impacts the smooth muscle and endothelial function of cerebral arteries [9]. Recent research suggests that the development of vascular disorders is influenced by the pro-inflammatory properties of LDL-C and the anti-inflammatory traits of HDL-C [10]. While the association between the levels of individual lipids, particularly HDL-C, and AIS is well established, the use of lipid ratios provides a more comprehensive assessment of lipid metabolism and its relationship with BDNF levels. The use of lipid ratios to investigate the relationship between lipid profiles and AIS risk is well established in the literature. The ratios of TC to HDL (TC/HDL), LDL to HDL (LDL/HDL), and TG to HDL (TG/HDL) are valuable indicators of atherogenic lipid profiles, which have been linked to increased cardiovascular risk, including stroke [11,12]. Lipid ratios provide insight into the overall lipid balance and are possibly a better representation of dyslipidemia-related risk factors than individual lipid parameters alone [13][14][15].
BDNF plays a vital role in the development, maintenance, and recovery of the central nervous system [16]. According to in vivo research, BDNF controls neurogenesis in the adult hippocampus and can thereby improve neuronal survival following injury or neurodegeneration in the developing brain [17][18][19]. BDNF is also present in peripheral tissues, such as adipose tissue, muscle, and the cardiovascular system. While its exact role in these peripheral areas is still not fully understood, it appears to regulate glucose, lipids, and lipoproteins, which are all linked to cardiometabolic risk factors [20,21]. One relevant study closely evaluated the impact of free fatty acids (FFAs) and their inflammatory metabolites on the BDNF levels of 73 patients who had suffered an IS; the results showed a strong positive relationship between the levels of certain FFAs and BDNF [22]. However, the role of BDNF in regulating lipid profiles in IS patients is not well studied [23,24]. This study aimed to explore the relationship between BDNF and lipids, including TC, TG, LDL-C, and HDL-C, in patients who have experienced an AIS.

Characteristics of Patients Based on BDNF Levels Table 1 displays the characteristics of the 90 AIS patients, categorized into two distinct groups according to their BDNF levels (45 patients in each group): one group consists of patients with BDNF levels ≤ 3.227 ng/mL, while the other includes those with BDNF levels ≥ 3.227 ng/mL. The table includes various health parameters, such as age, gender, the prevalence of conditions like hyperlipidemia, diabetes, and hypertension, and lifestyle factors (smoking and alcohol use). It also describes clinical measures such as systolic and diastolic blood pressure, white blood cell count, platelet count, glucose levels, and various cholesterol levels. The average age was 67.19 years (±13.82) in the low BDNF group and 68.21 years (±12.00) in the high BDNF group, a difference that was not statistically significant (p = 0.711). There were 68.9% and 66.7% males in the low and high BDNF groups, respectively, with a p value of 1.0, indicating no significant difference. Hyperlipidemia was found in 36.1% of the low BDNF group compared to 60.0% of the high BDNF group, a statistically significant difference (p = 0.043). In contrast, diabetes was found in 63.6% of the low BDNF group and 71.9% of the high BDNF group, a non-significant difference (p = 0.598). Similarly, 75.6% and 68.2% of patients in the low and high BDNF groups, respectively, had hypertension, with no significant difference (p = 0.486), whereas 40.0% and 29.7% in the low and high BDNF groups were smokers, also with no significant difference (p = 0.363). Around 23.3% and 20.0% of the low and high BDNF groups consumed alcohol, with no significant difference (p = 0.798), and comparisons of systolic (p = 0.585) and diastolic (p = 0.950) blood pressures, as well as white blood cell (WBC) counts (p = 0.767), between the groups also showed no significant differences. However, there was a significant difference in platelet counts, with 262.98 (±111.68) in the low BDNF group and 210.68 (±76.98) in the high BDNF group (p = 0.013). Various blood parameters were compared, with significant differences found in TG, HDL-C, and the TC/HDL-C, LDL-C/HDL-C, and TG/HDL-C ratios, but not in the others. No significant differences in non-HDL-C and the TyG index were found between the groups.
The key findings presented in the table include statistically significant differences in hyperlipidemia prevalence, platelet count, TG, high-density lipoprotein cholesterol, and the ratios of TC to HDL-C, LDL-C to HDL-C, and TG to HDL-C between the two groups. Regarding the etiology of stroke, 28 patients (31.1%) were identified with large artery atherosclerosis, 3.3% with small artery atherosclerosis, 12 patients (13.3%) with cardio-embolism, 3.3% with other determined causes, and 11 patients (12.2%) with indeterminate causes.

Age- and Sex-Adjusted Odds Ratios The data in Table 2 show that the variables HDL-C, TG, TG/HDL-C, TC/HDL-C, LDL-C/HDL-C, and hyperlipidemia are significantly associated with higher levels of BDNF in AIS patients, while others, such as TC, LDL-C, non-HDL-C, glucose, diabetes mellitus, and blood pressure, are not significantly associated.

Multivariate Logistic Regression The results of the multivariate logistic regression of BDNF biomarkers and lipid profile, indicating risk of AIS, are shown in Table 3. In the first model, adjustments were made for age, sex, and HbA1C levels. These factors are included in the second model along with platelets. For the third and most extensive model, SBP, DBP, smoking status, alcohol consumption, and platelet count were added as adjustment factors. The TC/HDL-C and TG/HDL-C ratios both showed significant associations with the risk of AIS across all models. Notably, the TC/HDL-C ratio demonstrated an odds ratio of 3.14 (95% CI: 1.31-7.54; p = 0.010) in the most extensive model, highlighting a substantially elevated risk. Similarly, the TG/HDL-C ratio demonstrated a significant correlation with stroke risk, with the OR peaking at 2.53 (95% CI: 1.24-5.17; p = 0.011). The LDL-C/HDL-C ratio exhibited borderline significance in the first two models but statistical significance in the third model, with an odds ratio of 3.02 (95% CI: 1.19-7.66; p = 0.020), indicating an increased risk of stroke. Hyperlipidemia did not show significant associations in any model, suggesting that the specific individual lipid ratios may be more predictive of stroke risk than the general condition of hyperlipidemia.

Correlation between Serum BDNF Levels and Lipid Parameters Figure 1 presents scatter plots analyzing the correlations between plasma BDNF levels and various lipid parameters, including TG, HDL-C, LDL-C, and TC. The analysis reveals a significant positive correlation between plasma BDNF levels and TG, implying that elevated BDNF levels are linked to higher triglyceride levels. Conversely, a significant negative correlation is observed between plasma BDNF levels and HDL-C, indicating that higher levels of BDNF are associated with lower concentrations of HDL-C. However, the data for LDL-C and TC do not demonstrate any significant correlations with plasma BDNF levels. (A sketch of this style of adjusted analysis is given below.)
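As a purely illustrative sketch (not the authors' analysis script), the kind of adjusted odds ratios and BDNF-lipid correlations reported above can be computed with statsmodels and SciPy. The data frame, column names, and random values below are hypothetical stand-ins for the real cohort.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 90
df = pd.DataFrame({
    "age": rng.normal(67, 13, n),
    "male": rng.integers(0, 2, n),
    "tg_hdl": rng.lognormal(1.0, 0.4, n),  # TG/HDL-C ratio
    "bdnf": rng.normal(3.2, 0.8, n),       # plasma BDNF, ng/mL
})
df["high_bdnf"] = (df["bdnf"] >= 3.227).astype(int)  # cut-off from the text

# Age- and sex-adjusted logistic model for high vs. low BDNF
X = sm.add_constant(df[["age", "male", "tg_hdl"]])
fit = sm.Logit(df["high_bdnf"], X).fit(disp=0)
print(np.exp(fit.params))      # odds ratios per unit of each predictor
print(np.exp(fit.conf_int()))  # 95% confidence intervals

# Simple BDNF-lipid correlation, as in the scatter-plot analysis
r, p = pearsonr(df["bdnf"], df["tg_hdl"])
print(f"BDNF vs TG/HDL-C: r = {r:.2f}, p = {p:.3f}")
```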
Discussion In our present study of stroke patients, we found that the high BDNF group (≥3.227 ng/mL) had significantly higher levels of HbA1C and TG; ratios of TC/HDL-C, LDL-C/HDL-C, and TG/HDL-C; and percentages of hyperlipidemia (60%), as well as lower levels of HDL-C and platelets. The ratios of TC/HDL-C, TG/HDL-C, and LDL-C/HDL-C showed significant associations with the risk of AIS when adjusted for multiple factors (age, sex, HbA1C, SBP, DBP, smoking, alcohol, and platelets). A positive correlation with TG levels and a negative correlation with HDL-C levels were observed, with LDL-C and TC showing weaker correlations. This indicates that lipid metabolism might play a crucial role in IS, as mediated through BDNF. Researchers have found a link between BDNF and lipid metabolism, confirming previous findings that BDNF levels are strongly positively correlated with TC, TG, and LDL, which suggests that BDNF is involved in the management of dyslipidemia [25][26][27]. One study showed that dyslipidemia is an important risk factor for stroke, and numerous studies have investigated the link between high cholesterol and stroke risk [28]. The significance of cholesterol originating from glial cells in the development of synapses has been established, and inhibiting cholesterol biosynthesis impacts the development of dendrites and axons in neurons within the cortical and hippocampal regions [29,30]. Findings from an electrophysiological experiment indicate that cholesterol synthesis, dependent on BDNF, contributes to the maturation of a readily releasable pool of synaptic vesicles. This suggests that BDNF, through its influence on cholesterol biosynthesis, is a crucial factor in the development of synapses [31]. BDNF boosts cholesterol biosynthesis by activating TrkB, leading to an elevation observed in lipid rafts alongside an increase in presynaptic proteins; this implies that BDNF regulates the quantity of cholesterol specifically in presynaptic regions [31]. Signaling through TrkB in cholesterol-rich lipid rafts is crucial for the functioning of BDNF [32,33]. Another study investigated the relationship between BDNF, glucose, and lipid profile in Parkinson's disease (PD) patients compared to healthy controls, finding that BDNF is a predictor of varying percentages of different lipid profile components but not of glucose levels. Significant differences in certain lipid profile components were found between low and high BDNF groups, highlighting the importance of BDNF in lipid metabolism in the context of PD. However, as BDNF was not found to be a significant predictor of glucose levels, the findings suggest that alterations in BDNF might instead be linked to changes in the lipid metabolism of PD patients, offering new perspectives for understanding and managing the disease [23].
Elevated triglyceride levels have been significantly associated with an increased risk of IS. These high levels may indicate broader atherogenic and prothrombotic changes as well as abnormalities in the clotting-fibrinolytic system, which could further elevate stroke risk [34,35]. Elevated levels of blood glucose and cholesterol and low levels of HDL-C have been linked to a higher risk of atherosclerosis and stroke [36][37][38][39]. Amarenco et al. found that HDL-C level is inversely associated with stroke or carotid atherosclerosis, but more studies are needed to confirm this association [40]. A study based on participants from the UK Biobank found that a certain ratio of HDL-C to LDL-C was correlated with lower risks of myocardial infarction, all-cause mortality, hemorrhagic stroke, and IS. This suggests that HDL, often considered "good" cholesterol, might play a protective role in these conditions, including IS [41]. In a study conducted by You et al., the researchers examined the association between serum BDNF levels and lipid profiles among a Chinese population. Their findings revealed a negative correlation between serum BDNF levels and HDL-C levels [42]. This observation indicates that changes in lipid metabolism, specifically alterations in HDL-C levels, could influence BDNF levels, potentially affecting stroke prognosis. The researchers also found that higher serum BDNF levels were correlated with a decrease in poor prognosis following ischemic stroke. As a result, these findings suggest that BDNF can be used as a biomarker for assessing stroke prognosis and as a therapeutic target for improving stroke outcomes [43].

In our study, significant associations of BDNF (p < 0.05) are found with HDL-C, TG, and the ratios of TG/HDL-C, TC/HDL-C, and LDL-C/HDL-C. This suggests that lipid profiles are notably associated with BDNF levels in stroke patients and implies that BDNF and lipid profiles might play a role in the pathophysiology of stroke and might be potential markers of its severity. An association was found between higher BDNF levels and increased TC. Elevated TC is another risk factor for cardiovascular issues, including stroke. High TG is linked to atherosclerosis and, consequently, an increased risk of stroke and heart attack. High LDL-C is a well-known contributor to plaque buildup in arteries, leading to increased stroke risk. However, determining the exact nature of this relationship and its clinical implications would require further investigation.
The relationship between serum lipid levels, such as HDL-C, LDL-C, TG, and TC, and BDNF in IS patients has indeed been a topic of interest in medical research. While the individual roles of these serum lipids and BDNF in stroke pathology are well acknowledged, the precise nature of their interaction, especially in the context of IS, has not yet been fully elucidated. Research to date has primarily focused on the individual impacts of lipid profiles and BDNF in stroke. For example, elevated levels of certain lipids like LDL-C are known to be risk factors for IS, and BDNF has been recognized for its role in neuroprotection and neurodegeneration following stroke. However, the direct link between lipid levels and BDNF levels in stroke patients remains underexplored. Given the complexity of stroke pathology and the multifaceted roles of lipids and BDNF in the brain, further research in this area is necessary. Such research should ideally involve clinical studies that measure both lipid levels and BDNF in IS patients, aiming to uncover any correlations or causal relationships.

Our study has some limitations. First, the sample size is comparatively small, and the study was undertaken in a single center, which limits the generalizability of the results. In addition, the blood samples were taken only once following the onset of IS symptoms.

Materials and Methods

For this study, we enrolled 90 patients with acute ischemic stroke (AIS) who were admitted to a stroke center in two separate time frames: from July 2014 to July 2015, and from November 2017 to September 2019. Patients were included as cases only if they had no prior history of neurological or psychiatric conditions, including HD, stroke, transient ischemic attack (TIA), MS, PD, AD, or ALS, and only if this was their first AIS based on its clinical definition by the neurologist. Additionally, those who had infections or lacked complete baseline data were excluded based on specific criteria.

The structured questionnaire included data on admission demographics as well as the results of laboratory, radiographic, and clinical examinations. A trained registered nurse was tasked with evaluating the patients' functional outcomes, while a neurologist certified for the study handled stroke diagnosis and assessment of neurological state. A stroke neurologist confirmed the stroke diagnosis based on the patient's symptoms and brain imaging results from either magnetic resonance imaging (MRI) or computed tomography (CT) [44,45]. The trained, certified nurse who assessed the functional outcome also collected information such as smoking status, current medication for HTN or DM, and family history of diseases.
Diastolic blood pressure (DBP) equal to or exceeding 90 mmHg and systolic blood pressure (SBP) equal to or exceeding 140 mmHg are considered HTN [46][47][48]. For DM, fasting blood glucose levels must be 126 mg/dL or higher, along with HbA1c >6.5%. Hyperlipidemia involves TC levels greater than 200 mg/dL, LDL-C levels greater than 130 mg/dL, and TG greater than 150 mg/dL. In addition, all confirmed stroke patients underwent follow-up until their discharge or death. Furthermore, brain MRI, CT scans, or both were used to identify and evaluate the location and dimensions of the brain infarct lesion. Ischemic strokes were categorized into five subgroups using the Trial of Org 10172 in Acute Stroke Treatment (TOAST) criteria: large artery atherosclerosis, small vessel occlusion, cardioembolism, specific pathogenesis, and undetermined pathogenesis.

The study was approved by Shin Kong Wu Ho-Su Memorial Hospital's Investigational Review Board (IRB nos. 20140401R and 20170701R) and complies with the Declaration of Helsinki's guidelines. Before beginning the study, each participant provided written informed consent.

Blood Sampling

Blood samples from participants were taken at the time of admission to the emergency department if, at that time, the attending physician or neurologist suspected a stroke. The date and time of admission, blood draw, and medical procedures were documented in the enrollment paperwork. The sampling time duration was determined as the number of hours between the time of admission and the blood draw. Fasting blood samples were taken and centrifuged at 3000× g at room temperature for fifteen minutes within two hours of collection. Following this, they were divided into tubes containing plasma, serum, and buffy coats and stored at −80 °C until analysis.

Blood Lipids and Glucose

Laboratory tests were used to measure fasting blood glucose levels, WBC, platelets, HbA1C, Hs-CRP, TC, LDL-C, HDL-C, and TG. Moreover, four alternative lipid profiles were examined in this study (non-HDL-C, TC/HDL-C, LDL-C/HDL-C, TG/HDL-C). By definition, non-HDL-C equals TC minus HDL-C (non-HDL-C = TC − HDL-C). Additionally, the ratios of TC to HDL-C, LDL-C to HDL-C, and TG to HDL-C were determined as TC/HDL-C, LDL-C/HDL-C, and TG/HDL-C, respectively.

BDNF Measurement

An enzyme-linked immunosorbent assay (ELISA) was used to quantify BDNF levels (Catalog No. DBD00, R&D Systems, Inc., Minneapolis, MN, USA). A Thermo Scientific Multiskan GO microplate spectrophotometer operating at a 450 nm wavelength was used to quantify the color transformation. We measured the samples in duplicate to ensure accuracy. The lab technicians conducting the BDNF analysis were kept blind to the details of the study participants. The minimal detectable dose for this kit was <0.02 ng/mL; values below this limit were recorded as zero. The threshold between high and low BDNF levels was set at the median (3.227 ng/mL).
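To make the derived-variable step above concrete, the following is a minimal sketch (ours, not the authors' code; function and variable names are illustrative) of how the four alternative lipid profiles and the median-split BDNF grouping could be computed:

```python
import numpy as np

def derived_lipid_indices(tc, hdl, ldl, tg):
    """Compute the four alternative lipid profiles from TC, HDL-C, LDL-C, TG (mg/dL)."""
    return {
        "non_HDL_C": tc - hdl,        # non-HDL-C = TC - HDL-C
        "TC_HDL_ratio": tc / hdl,     # TC/HDL-C
        "LDL_HDL_ratio": ldl / hdl,   # LDL-C/HDL-C
        "TG_HDL_ratio": tg / hdl,     # TG/HDL-C
    }

def bdnf_group(bdnf_values, threshold=3.227):
    """Dichotomize BDNF (ng/mL) at the cohort median: 1 = high, 0 = low."""
    return (np.asarray(bdnf_values) >= threshold).astype(int)

# Example: a hypothetical patient with TC 210, HDL-C 42, LDL-C 135, TG 160 mg/dL
print(derived_lipid_indices(210, 42, 135, 160))
print(bdnf_group([2.8, 3.3, 4.1]))  # -> [0 1 1]
```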
Statistical Analysis

Analyses are based on the mean ± standard deviation and the number of patients (percentage). Student's t-tests and Mann-Whitney U tests were used for continuous variables, whereas Chi-square tests were used for categorical variables. Logistic regression analysis, with odds ratios (OR) and 95% confidence intervals (CI), was performed to evaluate the association between BDNF level and continuous lipid profiles, including TC, HDL-C, LDL-C, and TG, in IS patients. Multivariable logistic regression was used to examine the association between lipid profile and BDNF. IBM SPSS software for Windows, version 23, was used. All p-values are two-sided, and the significance level was set at 0.05.

Conclusions

Our findings revealed a significant association between BDNF levels and specific lipid ratios such as TG/HDL-C, TC/HDL-C, and LDL-C/HDL-C. The findings also indicate that higher BDNF levels are associated with lower concentrations of HDL-C and higher concentrations of TG in AIS patients.

The study suggests that BDNF may play a role in lipid metabolism or serve as an indicator of lipid profile status in the context of IS. The significant associations with lipid ratios, in particular, highlight the potential of BDNF as a biomarker for stroke risk stratification and warrant further investigation into its clinical utility. Future research in this area could focus on elucidating the mechanistic pathways linking BDNF to lipid metabolism and exploring the potential of BDNF as a therapeutic target in stroke prevention and treatment.

Author Contributions: Conceptualization and methodology, M.N.T. and C.-H.B.; formal analysis, M.N.T.; investigation, W.-H.C. and H.-L.Y.; writing-original draft preparation, M.N.T.; writing-review and editing, M.N.T. and C.-H.B.; supervision and project administration, C.-H.B.; resources and funding acquisition, W.-H.C. and C.-H.B. All authors have read and agreed to the published version of the manuscript.

Funding: This study was funded by the Ministry of Science and Technology, Taiwan, in the form of a grant awarded to CHB (reference numbers: MOST 107-2314-B-038-072-MY3 and MOST 110-2314-B-038-056-MY3).

Institutional Review Board Statement: The study conforms with the principles of the Declaration of Helsinki and was approved by the Investigational Review Board of Shin Kong WHS Memorial Hospital (IRB no. 20170701R).

Informed Consent Statement: Written informed consent was obtained from all participants before they participated in the study.

Table 1. Characteristics of 90 acute ischemic stroke patients categorized according to low and high BDNF levels.

Table 2. Age- and sex-adjusted odds ratios for acute ischemic stroke patients with high BDNF levels.

Table 3. Multivariate logistic regression of BDNF biomarkers and lipid profile indicating risk of acute ischemic stroke.
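As a rough illustration of the adjusted logistic-regression step described in the Statistical Analysis section above, the sketch below is ours, not the authors' SPSS workflow: it uses synthetic stand-in data (the real study used measured patient values) and statsmodels to fit one adjusted model and report ORs with 95% CIs.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 90
# Synthetic stand-in data frame; column names are hypothetical.
df = pd.DataFrame({
    "high_bdnf": rng.integers(0, 2, n),   # outcome: 1 = BDNF >= median (3.227 ng/mL)
    "tc_hdl": rng.normal(4.0, 1.0, n),    # TC/HDL-C ratio
    "age": rng.normal(65, 10, n),
    "sex": rng.integers(0, 2, n),
    "hba1c": rng.normal(6.0, 1.0, n),
})

# Model 1 analogue: lipid ratio adjusted for age, sex, and HbA1C.
X = sm.add_constant(df[["tc_hdl", "age", "sex", "hba1c"]])
model = sm.Logit(df["high_bdnf"], X).fit(disp=0)

odds_ratios = np.exp(model.params)        # OR per unit increase in each predictor
conf_int = np.exp(model.conf_int())       # 95% CI for the ORs
print(odds_ratios, conf_int, sep="\n")
```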
2024-02-25T05:20:58.367Z
2024-02-01T00:00:00.000
{ "year": 2024, "sha1": "79d322ff46c0a6d5b15f37914b372bd1a53c931e", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "79d322ff46c0a6d5b15f37914b372bd1a53c931e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
11912486
pes2o/s2orc
v3-fos-license
Comparative analysis of temporal dynamics of EEG and phase synchronization of EEG to localize epileptic sites from high density scalp EEG interictal recordings

Abstract - Our objective was to examine if the high-density, 256 channel, scalp interictal EEG data can be used for localizing the epilepsy areas in patients. This was done by examining the long-range temporal correlations (LRTC) of EEGs and also that of the phase synchronization index (SI) of EEGs. It was found that the LRTC of scalp SI plots were better in localizing the seizure areas as compared with the LRTC of EEGs alone. The EEG data of one minute duration was filtered in the low Gamma band of 30-50 Hz. A detrended fluctuation analysis (DFA) was used to find LRTC of the scalp EEG data. Contour plots were constructed using a montage of the layout of 256 electrode positions. The SI was computed after taking the Hilbert transform of the EEG data. The SI between a pair of channels was inferred from a statistical tendency to maintain a nearly constant phase difference over a given period of time even though the analytic phase of each channel may change markedly during that time frame. The SI for each electrode was averaged over the nearby six electrodes. LRTC of the SI was computed and spatial plots were made. It was found that the LRTC of SI was highest at the location of the epileptic sites. A similar pattern was not found in the LRTC of EEGs. This provides a noninvasive way to localize seizure areas from scalp EEG data.

I. INTRODUCTION

Current methods for localizing epileptic seizure onset areas within the brain are highly invasive and involve the placement of intracranial electrodes followed by waiting for one or more seizures to take place while recording a cortical electroencephalogram (ECoG). Clinicians can then use the recorded information to determine the location of interest. Recently, Monto et al. (2007) [1] used detrended fluctuation analysis (DFA) to uncover a correlation between long-term temporal correlations in intracranial EEGs taken during interictal sleep and the locations determined to be the onset sites of epileptic patients using traditional methods.
This implies that it may be possible to determine the location of probable seizure onset without the requirement that a patient endure a seizure. This project attempts to replicate the results of Monto et al. using high-density scalp EEG recordings, which are entirely noninvasive. Replicating the results would mean that clinicians could localize epileptic areas of interest within a patient's brain noninvasively and without the patient enduring a seizure, an advance that would greatly benefit the diagnosis of epileptic patients. In addition, in this report, we have further advanced this technology to better localize the epileptic sites by use of the phase synchronization of the scalp EEG data.

A. Data Collection and Filtering

Epileptic seizure areas in patients were localized with intracranial EEG recordings. Prior to this, high-density 256-channel scalp EEG data was collected with an EEG system developed by Electrical Geodesics, Inc. (Eugene, OR). We used data from five patients. One representative minute of seizure-free data from each patient during sleep was selected and imported into MATLAB for further analysis. The selected data sets were not in close proximity to seizures. Raw EEG data was filtered using an FIR bandpass filter for the low Gamma band of 30-50 Hz. Excessively noisy channels were eliminated by replacing them with the averages of their neighbors. In general, there were 3-5 noisy channels in each data set.

B. Detrended Fluctuation Analysis

The cumulative sum of each channel was calculated. This sum was divided into windows of 1 through 10 seconds, as well as 12, 15, 20, 25, 30, and 60 seconds. Within each window, a linear fit was found and the cumulative sum was detrended. Next, the root-mean-squared (RMS) fluctuation of this detrended sum was calculated. The median fluctuation at each window size was taken. The log of this median fluctuation was plotted against the log of the window size, and a linear fit was found. The slope of this linear fit, denoted alpha, is the result of the detrended fluctuation analysis for each channel. This is what is called long-range temporal correlations (LRTC). As shown by Linkenkaer et al. (2005) [2] and Peng et al. (1995) [3], detrended fluctuation analysis exposes long-range temporal correlations that are characteristic of epileptogenic neocortical networks, the areas where epilepsy begins.

C. Phase Synchronization

The synchronization between a pair of channels was inferred from a statistical tendency to maintain a nearly constant phase difference over a given period of time even though the analytic phase of each channel may change markedly during that time frame [4]. The Hilbert transform was applied to the pairs of EEG traces with a sliding window long enough to encompass at least two cycles of the lowest frequency of 30 Hz in the low Gamma band. The analysis was repeated by stepping the window at 8 ms intervals. The synchronization index (SI) was computed for each pair of EEG traces. A global synchronization index was also computed for each electrode by pairing it with the nearby six electrodes. There were 21 combinations of electrode pairs for each electrode. The SI was averaged over these electrode pairs for each given electrode. After that, the LRTC was computed for the SI as explained above. The alphas for the EEGs and the SI were normalized to their common average reference. Color intensity plots were constructed using a montage of the layout of 256 electrode positions.
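The two computations described above are straightforward to prototype. The sketch below is our own minimal Python rendering, not the authors' MATLAB code; the sampling rate is an assumption, and the synchronization index is implemented as the standard phase-locking value, which matches the constant-phase-difference definition cited from [4]:

```python
import numpy as np
from scipy.signal import hilbert

def dfa_alpha(signal, window_secs, fs=250.0):
    """DFA exponent alpha: slope of log(median RMS fluctuation) vs. log(window size)."""
    y = np.cumsum(signal - np.mean(signal))            # cumulative sum (profile)
    med_fluct = []
    for w in window_secs:
        n = int(w * fs)                                # window length in samples
        t = np.arange(n)
        rms = []
        for start in range(0, len(y) - n + 1, n):      # non-overlapping windows
            seg = y[start:start + n]
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrending
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        med_fluct.append(np.median(rms))               # median fluctuation per size
    alpha, _ = np.polyfit(np.log(window_secs), np.log(med_fluct), 1)
    return alpha

def sync_index(x1, x2):
    """Phase synchronization index of two traces over a window (phase-locking value):
    1.0 means a constant phase difference, 0.0 means no phase relation."""
    dphi = np.angle(hilbert(x1)) - np.angle(hilbert(x2))
    return np.abs(np.mean(np.exp(1j * dphi)))

# One minute of surrogate data at an assumed fs of 250 Hz
fs = 250.0
windows = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20, 25, 30, 60]
x = np.random.randn(int(60 * fs))
print(dfa_alpha(x, windows, fs))   # ~0.5 for uncorrelated white noise
```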
The horizontal and vertical axes for the plots are in normalized length units. The color bar in each plot is for the normalized alpha values.

III. RESULTS

Analysis of the data of two patients is given here. For patient A, Fig. 1 shows the contour plot of LRTC for the EEG data and Fig. 2 shows the LRTC of SI. In Fig. 2 the location of the seizure activity is also marked with an ellipse as determined from the invasive recordings. The seizure activity was located in the right posterior temporal area. There is also a peak in the central midline area in Fig. 2 at the location (0.45, 0.38). One could interpret this as another possible location of the seizure area. However, the larger area enclosed by the ellipse is the actual seizure area, which was determined by the intracranial recordings. For patient B, the LRTC of EEG activity is plotted in Fig. 3 and the LRTC of SI is plotted in Fig. 4. For this patient also, the LRTC of SI plotted in Fig. 4 gives the correct location of the seizure area. The seizure activity as measured with intracranial recordings was in the frontal midline area and it is marked with an ellipse in Fig. 4. However, the LRTC of scalp EEG shown in Fig. 3 does not give a precise location. The maximum positive peak value is located at (0.55, 0.4), which is not the correct location of the seizure area. These figures show that the LRTC of SI was excellent in localizing the epileptic sites in both patients while the LRTC of scalp EEG alone had inconclusive results. Similar patterns were found in the data of the other three patients and we were able to localize the seizure areas correctly with the contour plots of the LRTC of SI. For the other three patients, the seizure activity was located in the left frontal area, the left parietal area and the right frontal area.

IV. DISCUSSION

Earlier work [1] has shown that one can localize seizure areas with LRTC analysis performed on invasive cortical (ECoG) recordings. Similar analysis on scalp EEG data does not succeed in localizing the seizure areas. However, the same analysis, when applied to the phase synchronization index of high-density interictal scalp EEG recordings, was successful in localizing the seizure areas. This was found to be reliable in five patients. A possible hypothesis could be that in the seizure areas the electrical activity of the neurons has stronger phase synchronization as compared to the nearby seizure-free areas. These preliminary results show that it is feasible to localize the seizure areas with one minute of interictal scalp EEG data. This opens a new way to localize seizure areas noninvasively.
2014-10-01T00:00:00.000Z
2008-10-14T00:00:00.000
{ "year": 2008, "sha1": "f4300e4afb11e41ff101701582e38535072809f5", "oa_license": "CCBY", "oa_url": "https://escholarship.org/content/qt8883p5k0/qt8883p5k0.pdf?t=mq0un0", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "ae1b16e0c238e424dce0792f314f2c0f1e601127", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
15393734
pes2o/s2orc
v3-fos-license
Security Analysis and Improvement of an Anonymous Authentication Scheme for Roaming Services

An anonymous authentication scheme for roaming services in global mobility networks allows a mobile user visiting a foreign network to achieve mutual authentication and session key establishment with the foreign-network operator in an anonymous manner. In this work, we revisit He et al.'s anonymous authentication scheme for roaming services and present previously unpublished security weaknesses in the scheme: (1) it fails to provide user anonymity against any third party as well as the foreign agent, (2) it cannot protect the passwords of mobile users due to its vulnerability to an offline dictionary attack, and (3) it does not achieve session-key security against a man-in-the-middle attack. We also show how the security weaknesses of He et al.'s scheme can be addressed without degrading the efficiency of the scheme.

Introduction

As wireless network and communication technologies advance, there has been a dramatic increase in the use of lightweight computing devices, such as sensors, smart phones, and tablet PCs, in our daily lives. To enjoy the convenience of mobility, a roaming service should be seamlessly provided with respect to availability and security, by means of using a visited foreign network. In general, three parties (a mobile user, a foreign agent, and the home agent) participate in a roaming process. A seamless roaming service requires significant security challenges to be addressed among the participants. Basically, authentication and key establishment between the mobile user and the foreign agent should be achieved with the assistance of the home agent to prevent illegal usage of the network and to protect their subsequent communications. Achieving anonymity of the mobile user is also important in a roaming service to protect the privacy of the user. Anonymity has recently been identified as a major security property for many applications, including location-based services, anonymous web browsing, and e-voting. These security challenges and their cryptographic solutions, commonly called anonymous authentication schemes, constitute an active research area.

The first anonymous authentication scheme for roaming services was proposed by Zhu and Ma [1] in 2004. This initial proposal has been followed by a number of authentication schemes offering various levels of security and efficiency. Some schemes [2][3][4] have been proven secure using a provable-security approach while others (e.g., [5][6][7]) justify their security on purely heuristic grounds without providing any formal analysis of security. However, despite all the work conducted over the last decade, it still remains a challenging task to come up with an authentication scheme that meets all the desired goals for roaming services [8]. Most of the existing schemes fail to achieve important security properties such as user anonymity [2,6], session-key security [9], perfect forward secrecy [10], two-factor security [11], resistance against impersonation attacks [12], and resistance against offline dictionary attacks [13]. For this domain, all published schemes are far from ideal as evidenced by a continual history of schemes being proposed and years later found to be flawed. Recently, Xie et al. [4] proposed a new authentication scheme for roaming services and claimed that their scheme not only provides efficiency and user friendliness but also is secure against various attacks. But He et al.
[...]

Throughout the paper, we make the following assumptions on the capabilities of the probabilistic polynomial-time adversary in order to properly capture the security requirements of two-factor authentication schemes using smart cards in global mobility networks.

(i) The adversary has complete control of all message exchanges between the three parties: a mobile user, the foreign agent, and the home agent. That is, the adversary can eavesdrop, insert, modify, intercept, and delete messages exchanged among the parties at will [14][15][16].

(ii) The adversary is able to (1) extract the sensitive information on the smart card of a mobile user, possibly via a power analysis attack [17,18], or (2) learn the password of the mobile user through shoulder surfing or by employing a malicious card reader. However, the adversary is not allowed to compromise both the information on the smart card and the password of the mobile user; it is clear that there is no way to prevent the adversary from impersonating the mobile user if both factors are compromised.

A Review of He et al.'s Scheme

He et al.'s authentication scheme [12] consists of three phases: the registration phase, the login and key agreement phase, and the password update phase. The system parameters listed in Table 1 are assumed to have been established in advance before the scheme is used in practice. Let ‖ and ⊕ denote the string concatenation operation and the bitwise exclusive-OR (XOR) operation, respectively. (1) The mobile user chooses its identity and password freely and sends the identity to the home agent via a secure channel.

Login and Key Agreement Phase. This phase is carried out whenever the mobile user visits a foreign network and wants to gain access to the network. During this phase, mutual authentication and session-key establishment are conducted between the mobile user and the foreign agent with the help of the home agent. Algorithm 1 (the login and key agreement phase of He et al.'s scheme [12]) depicts how the phase works, and its description follows. The mobile user inserts its smart card into the card reader and inputs its identity and password. Next, it retrieves the current timestamp T1, chooses a random exponent, and computes its login values. Then, it sends the message M1 = ⟨·, T1, ·, ·⟩ to the foreign agent.

Step 2. Upon receiving M1, the foreign agent checks the freshness of the timestamp T1. If it is not fresh, the foreign agent aborts the session. Otherwise, it retrieves the current timestamp T2, computes its values, and sends the message M2 = ⟨·, T2, ·⟩ to the home agent.

Step 3. The home agent verifies M2, computes its response, and sends the message M3 = ⟨·, T3, ·⟩ to the foreign agent.

Step 4. The foreign agent decrypts the ciphertext with its shared key and checks the freshness of the timestamp T3. Only if T3 is fresh does it choose a random exponent and compute its reply. (Note, here, that the timestamp T3, received from the home agent, is used in generating the ciphertext since the mobile user will need it to check the validity of the response.) Then, the foreign agent sends the message M4 = ⟨·, T3, ·, ·⟩ to the mobile user and computes its session key.

Step 5. The mobile user first checks the freshness of the timestamp T3 and aborts the session if it is not fresh. Otherwise, it computes the shared key, decrypts the ciphertext, and verifies that the decryption correctly returns the expected values together with T3. If the verification succeeds, it checks the authentication value and, if it matches, computes the session key.

Password Update Phase. One of the general guidelines for better password security is to ensure that passwords are changed at regular intervals. He et al.'s scheme allows mobile users to freely update their passwords. (1) The user inserts his smart card into a card reader and enters both the current password and the new password.
Weaknesses in He et al.'s Scheme

In this section, we point out four weaknesses in He et al.'s scheme, starting with the most obvious one. This weakness is straightforward to see, as the identity of the mobile user is given to the foreign agent via the ciphertext (see Step 4 of the login and key agreement phase of the scheme).

Weakness 2 is due to the fact that the password verifier is computed using the bitwise XOR operation when the multiplicative subgroup of Z*p is not closed under the XOR operation. This design flaw allows an adversary to find out the password by mounting an offline dictionary attack if the subgroup is much smaller than Z*p. We observe, for He et al.'s scheme, that (1) p and q are defined as two primes such that p = rq + 1 for some r ∈ N and (2) the random exponents are chosen from Z*q. Based on these observations, it is reasonable to speculate that He et al.'s scheme was designed to work in a multiplicative subgroup of Z*p that has prime order q, though this was not explicitly mentioned by the authors. For simplicity, let us denote this prime-order subgroup by G. Since the protocol values are computed as hash values raised to random exponents modulo p, it ought to be the case that they lie in G, which in turn implies that the hash function maps arbitrary strings into elements of G.

Now, assume that an adversary A has gained temporary access to the smart card of the mobile user and then obtained the verifier value stored there (possibly by employing a power analysis attack [17]). Then, note that this value can be used as a password verifier in an offline dictionary attack, because it is computed by XORing a group element with a hash of the password, while G is not closed under the bitwise XOR operation. Let PW be the set of all possible passwords. The adversary A can mount an offline dictionary attack as follows.

Step 1. A makes a guess on the password from PW and computes the corresponding candidate group element by XORing the stolen verifier with the hash of the guess.

Step 2. A then checks whether the candidate is an element of G or not. If it is not, A deletes the guess from the dictionary PW, since a candidate outside G cannot correspond to the correct password.

If p is a safe prime (i.e., p = 2q + 1), then this attack would fail, cutting only the size of PW about in half. However, if p is much greater than q (e.g., log2 p ≈ 512 and log2 q ≈ 256), the dictionary attack will succeed in determining the correct password with an overwhelming probability. Similar dictionary attacks have also been mounted against key exchange protocols; see, for example, [19]. Weakness 2 can be easily addressed by replacing the bitwise XOR operation with the multiplication operation.

Next, we identify two other major weaknesses in He et al.'s scheme. We demonstrate Weaknesses 3 and 4 by mounting a type of man-in-the-middle attack against the scheme. The attack scenario is outlined in Figure 1 and is detailed as follows.

Step 1. As a preliminary step, the adversary A chooses a random exponent and computes the corresponding group element under an arbitrary identity.

Step 2. When the mobile user sends the first message M1 = ⟨·, T1, ·, ·⟩ to the foreign agent, A eavesdrops on this message to obtain its components. Immediately after the eavesdropping, A retrieves the current timestamp and sends a fake message to the foreign agent as if it were another roaming request from a mobile user.

Step 4. A intercepts the second message of the fake session while letting the genuine message M2 reach its destination, the home agent. Since M2 is a valid message, the home agent will compute and send the message M3 = ⟨·, T3, ·⟩ to the foreign agent.

Step 5. A redirects the message M3 so that it is delivered to the instance serving the fake session instead of the instance serving the genuine one. As a result, the genuine instance will not receive any response message and thus will abort after a certain amount of time.

Step 6. After decrypting the ciphertext, and since T3 is fresh, the foreign agent's instance will proceed as per the protocol specification.
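To make Weakness 2 concrete, here is a small self-contained sketch (ours, with toy parameters and invented names; He et al.'s actual verifier construction differs in detail) of the subgroup-membership test that drives the offline dictionary attack: an XOR of a group element with a password hash usually falls outside the order-q subgroup, so wrong guesses can be discarded.

```python
import hashlib

# Toy parameters: p = r*q + 1 with p, q prime; G = {x^r mod p} has order q.
q, r = 11, 18
p = r * q + 1          # 199, prime

def H_G(data: bytes) -> int:
    """Hash into the order-q subgroup G by raising a digest to the cofactor r."""
    x = int.from_bytes(hashlib.sha256(data).digest(), "big") % p
    return pow(x, r, p) or 1          # avoid 0; 1 is in G

def in_G(x: int) -> bool:
    """Subgroup membership: x is in G iff 0 < x < p and x^q = 1 (mod p)."""
    return 0 < x < p and pow(x, q, p) == 1

def mask(pw: str) -> int:
    """Integer mask derived from the password (stands in for h(1||PW))."""
    return int.from_bytes(hashlib.sha256(b"1|" + pw.encode()).digest(), "big") % p

# The card stores W = B XOR h(1||PW), where B is a group element (toy version).
B = H_G(b"some-secret-seed")
W = B ^ mask("sunshine")              # "sunshine" is the real password

# Offline dictionary attack: keep only guesses whose unmasked value lands in G.
dictionary = ["123456", "password", "sunshine", "qwerty"]
survivors = [pw for pw in dictionary if in_G(W ^ mask(pw))]
print(survivors)   # a wrong guess survives only with probability about q/p (here 11/199)
```

With realistic parameters (q far smaller than p), each wrong guess survives the test with negligible probability, so a single pass over the dictionary isolates the correct password; with a safe prime (p = 2q + 1), the test only halves the dictionary, which is why the safe-prime case resists the attack.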
That is, the foreign agent's instance will choose a random exponent, compute and send the message M4 = ⟨·, T3, ·, ·⟩, and then compute its session key.

Step 7. A intercepts the message M4, computes the shared key, and decrypts the ciphertext to obtain the mobile user's identity and the key material. Then, A chooses its own random exponent, computes a replacement ciphertext, and sends a forged message M4' = ⟨·, T3, ·, ·⟩ to the mobile user as if it were from the foreign agent.

Step 8. Upon receiving the forged message, the mobile user will proceed to compute its session key, because (1) T3 is fresh, (2) decryption of the ciphertext correctly yields the expected values together with T3, and (3) the authentication value matches.

Step 9. A computes the two session keys, one shared with the foreign agent and one shared with the mobile user, in the straightforward way.

Through the attack, user anonymity is completely compromised, as the identity of the mobile user is disclosed to the adversary A in Step 7. From the viewpoint of session-key secrecy, the effect of our attack is the same as that of a man-in-the-middle attack. At the end of the attack, the mobile user and the foreign agent believe that they have established a secure session with each other sharing a secret key, while in fact they have shared their keys with the adversary A. As a result, A can not only access and relay any confidential messages between them but also send arbitrary messages for its own benefit, impersonating one of them to the other. Man-in-the-middle attacks similar to the attack above have also been presented against various key exchange protocols; see, for example, [20,21].

Our Improved Scheme

We now show how to address all the weaknesses identified in He et al.'s scheme without degrading the efficiency of the scheme. Let G be a cyclic group of prime order q. A standard way of generating G is to choose two large primes p, q such that p = rq + 1 for some small r ∈ N (e.g., r = 2) and let G be the subgroup of order q in Z*p. Hereafter, we will omit "mod p" from expressions for notational simplicity. Assume that the master secret key of the home agent is an element of Z*q, as is the secret key shared between the home agent and the foreign agent.

We begin by presenting how to address Weaknesses 3 and 4 (described in the previous section). The vulnerability of He et al.'s scheme to the man-in-the-middle attack arises because there is no way for an instance of the mobile user to check whether the received ciphertext was sent in response to its own request or another instance's request. This design flaw allows the adversary to exploit the home agent's response sent for one session as the response for another session. To prevent the attack, we suggest modifying the computation of the ciphertext so that the timestamp T2 is included as part of the plaintext to be encrypted. The inclusion of T2 tightly links the request and the response and thus effectively prevents the man-in-the-middle attack.

However, with the above modification alone, He et al.'s scheme cannot fully achieve user anonymity, in the sense that the identity of the mobile user is still disclosed to the foreign agent. Therefore, we suggest further modifying the computation of the ciphertext so that it is generated using a hash of the identity instead of the identity itself. This modification certainly prevents the foreign agent from immediately learning the identity via decryption of the ciphertext.

We next present a possible way of eliminating the vulnerability of He et al.'s scheme to offline dictionary attacks. Recall that this vulnerability is due to the fact that the verifier is computed using the bitwise XOR operation when the multiplicative subgroup of Z*p is not closed under the XOR operation. Given the flaw in the design, the solution is clear: use the multiplication operation instead of the XOR operation when computing the verifier.
Hence, we change the computation of the verifier from an XOR-based masking to a multiplicative one; accordingly, the computation used to recover the masked value is changed to multiplication by the modular inverse. Finally, we suggest some additional changes to resolve notational ambiguities and to correct the misuse of the hash function. As a result of the above modifications, the password update phase is modified as follows. (1) The user inserts his smart card into a card reader and enters the identity, the current password, and the new password.

Combining the above modifications together yields an improved authentication scheme, described in Algorithm 2. Our scheme improves He et al.'s scheme in various aspects: (1) it enjoys the anonymity of the mobile user against any parties other than the home agent, including the foreign agent; (2) it withstands offline dictionary attacks even when the information in the smart card is disclosed; (3) it protects the security of session keys against man-in-the-middle attacks. Clearly, the performance of our scheme is similar to that of He et al.'s scheme. Hence, we can say that our improvement enhances the security of He et al.'s scheme while maintaining the efficiency of the scheme.

Concluding Remarks

This work demonstrated that He et al.'s authentication scheme for roaming services fails to achieve major security properties (user anonymity, password security, and session-key security) in the presence of a malicious adversary. We have shown that the failure to achieve user anonymity and session-key security is due to the vulnerability to a man-in-the-middle attack, while the failure to achieve password security is due to the vulnerability to an offline dictionary attack. Note that the latter vulnerability implies that He et al.'s scheme does not achieve two-factor security. We hope that security flaws similar to those identified in this work can be prevented in the future design of anonymous authentication schemes.

This work also showed how the security of He et al.'s authentication scheme can be improved without efficiency degradation. Our improved scheme not only protects user anonymity against any third parties other than the home agent but also is secure against offline dictionary attacks as well as man-in-the-middle attacks. We leave it as future work to design an anonymous authentication scheme for roaming services that achieves provable security in a well-defined communication model while providing the same (or even better) level of efficiency as the schemes studied in this paper.
2018-04-03T03:04:17.111Z
2014-09-11T00:00:00.000
{ "year": 2014, "sha1": "07d8b101eb7bd8a35dbc7c5cdc16ee01f452e2bf", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/tswj/2014/687879.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "07d8b101eb7bd8a35dbc7c5cdc16ee01f452e2bf", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
15647574
pes2o/s2orc
v3-fos-license
Antenatal diagnostic aspects of placenta percreta and its influence on the perinatal outcome: a clinical case and literature review

Background. Placenta percreta is a very rare, but extremely life-threatening obstetrical pathology for the mother and the child, especially in the cases when it is not diagnosed before the birth and when it results in massive bleeding and a dramatic deterioration of condition. It is extremely important to diagnose this pathology as early as possible and plan further optimal care of patients in order to minimize life-threatening complications.

Case report. The paper presents an illustrated clinical case of placenta percreta determined before the birth. Features of visual diagnostics are discussed. A 32-year-old pregnant woman with a history of two caesarean deliveries arrived at the tertiary level hospital at 22 weeks of gestation due to abdominal pain. Placenta previa was diagnosed, and ultrasound and magnetic resonance imaging findings suggesting placenta percreta were seen. On the 32nd week, the planned caesarean hysterectomy was performed. Balloon catheters were used to occlude the internal iliac arteries and minimize bleeding during the surgery.

Conclusions. Antenatal diagnosis of placenta percreta is especially important. Methods of visual diagnostics are complementary. The optimal surgical approach during caesarean hysterectomy remains controversial. In the case of slow oozing without a clearly identified source of bleeding after hysterectomy and deflation of the internal iliac artery balloons, ligation of one of the internal iliac arteries can be reasonable to avoid residual haemorrhage and relaparotomy.

Keywords: antenatal diagnostics, visual diagnostics, placenta percreta, placenta previa

BACKGROUND

Invasion of the placenta into the uterine muscle is a dangerous obstetrical complication associated with high perinatal, prenatal and neonatal morbidity and mortality.
Three forms are identified according to the depth of placental penetration: placenta accreta, placenta increta, and placenta percreta (1). However, in the literature the term placenta accreta is often used to define all these forms. This placental pathology was first described in 1930. At that time there were even doubts about its existence, as it occurred extremely rarely. However, in recent decades the incidence of the invasive placenta has increased from 1:4027 pregnancies in 1970 to 1:533 pregnancies in 1982-2002 (2). Often there are no symptoms during pregnancy until massive bleeding occurs before or during delivery. Thus it is extremely important to diagnose this pathology as early as possible and to plan further optimal care of the patients in order to minimize life-threatening complications. In this paper, a clinical case of the pathology diagnosed before delivery is presented, and possibilities of visual diagnostics and tactical features are discussed.

CASE REPORT

A 32-year-old pregnant woman with a history of two caesarean deliveries arrived at Vilnius University Hospital Santariškių klinikos (a tertiary level hospital) at 22 weeks of gestation due to pain of a stinging nature in the epigastric area, which later spread to the lower part of the abdomen. She denied smoking and drinking alcohol, and her overall medical history was unremarkable. Placenta previa was diagnosed, and ultrasound findings suggested placenta percreta was present. The sonographic images are presented in Figs. 1-3. The patient was hospitalized. Magnetic resonance imaging (MRI) of the pelvic organs and cystoscopy were done to clarify the diagnosis. According to the MRI data, the placenta appeared fused with the bladder, and the area of alteration was hypervascular, with a differentiated, relatively large branch of a. iliaca interna sinistra (Figs. 4, 5). The cystoscopy did not reveal a bladder invasion (Fig. 6: a cystoscopy image showing local redness of the bladder mucosa).

The management of the patient was discussed by a multidisciplinary team consisting of specialists in maternal-fetal medicine, gynaecologic surgery, gynaecologic oncology, vascular trauma and urologic surgery, transfusion medicine, intensive care, neonatology, interventional radiology, and anesthesiology. The multidisciplinary team suggested conservative care until the 27th-28th week of pregnancy because the optimal timing of delivery for placenta percreta remains controversial. On the 28th week, during a repeated discussion, it was decided to extend the pregnancy until the 32nd week and then perform the planned caesarean hysterectomy. The multidisciplinary preoperative consultation was done before the scheduled operation, and a course of antenatal corticosteroids for fetal lung maturation was given. A multidisciplinary team of specialists in obstetrics, gynaecologic oncology, anesthesiology, neonatology, urology and vascular surgery performed the operation. According to the decision of the multidisciplinary team, the use of pelvic devascularization due to possible intensive bleeding during the surgery was appointed. Balloon catheters to occlude the internal iliac arteries were placed but not inflated before the delivery of the neonate. Blood components were cross-matched and prepared for possible transfusion. A central venous catheter and two peripheral venous catheters were inserted before anesthesia. Anesthesia was general with endotracheal intubation. The midline vertical incision was performed.
The placenta extended through the uterine wall at the site of the prior uterine scar. The vertical corporal uterine incision was performed at the bottom, 2 cm above the placental attachment. During the operation, a liveborn 2,000 g, 42 cm male neonate was delivered. According to the Apgar score, he was evaluated at 6 points after 1 minute and at 8 points after 5 minutes. There was no attempt to extract the placenta: it was left in the uterus with a fragment of the umbilical cord. Due to the occurrence of massive bleeding after the delivery of the newborn, the balloon catheters of the internal iliac arteries were inflated, and transfusion of blood components was started. Total hysterectomy without the removal of the tubes and ovaries was performed (Fig. 7: hysterectomy specimen, opened). There was urinary bladder serosal involvement, consistent with placenta percreta. The uterus was detached from the urinary bladder. The total blood loss was 4,000 mL during the surgery. Slow oozing without a clearly identified source of bleeding was seen after the hysterectomy. The balloon catheters were deflated for this reason one hour after the operation. Due to massive internal bleeding, relaparotomy was performed four hours after the operation; the abdominal cavity was revised, but a clearly identified source of bleeding was not seen, and the left internal iliac artery was additionally ligated. The patient was cared for in intensive care for three days. During the period after the operation, the patient was treated with antibiotics and anticoagulants. In total, allogeneic red blood cells (16 units), fresh-frozen plasma (14 units), platelets (12 units) and cryoprecipitate (20 units) were transfused. The patient was discharged on day 7 in a good condition.

The condition of the newborn after birth was severe due to prematurity, the respiratory distress syndrome, and impaired microcirculation. During the first day, the newborn's health condition was stabilized, and the CPAP therapy was completed. The newborn was transferred from the neonatal intensive care unit to the premature neonate department for further examination, treatment, and care. The newborn was released home after four weeks in satisfactory condition.

DISCUSSION

Placenta percreta is a very rare, but extremely life-threatening obstetrical pathology for the mother and the child, especially in the cases when it is not diagnosed before the birth and when it results in massive bleeding and a dramatic deterioration of the condition (1). Maternal mortality associated with placental invasion reaches 7% (2). The average blood loss during childbirth in women with placental invasion is 3,000-5,000 mL (2). More than 90% of these women need a transfusion of blood derivatives, and 40% of them need a transfusion of more than 10 units of red blood cells (2). Placenta percreta is the most common cause of hysterectomies associated with childbirth (2). If the pathology is determined before birth, then the results of the patient's care improve significantly due to proper management of pregnancy and childbirth care. Since women with placenta previa or placenta accreta have a significant risk of premature birth, it is very important to diagnose it before the 36th week of pregnancy (3). Each caesarean section increases the risk of the invasive placenta during the next pregnancy.
It was found that the frequency of the invasive placenta with placenta previa increases after each repeated caesarean section and is 3%, 11%, 40%, 61%, and 67%, respectively, after the first, second, third, fourth and fifth caesarean sections (4). Therefore, special attention should be paid to the diagnostics of the invasive placenta in these women. The diagnosis is usually determined by ultrasound and additional MRI, and is confirmed histologically (1). Transvaginal and transabdominal ultrasonography are complementary diagnostic methods, especially when there is placenta previa. The ultrasound findings that raise suspicion of placental invasion depend on the trimester of pregnancy. In the first trimester of pregnancy these findings would be the following: implantation of the gestational sac in the lower uterine segment or the uterine scar area, and multiple irregular vascular spaces noted within the placental bed. It is reasonable to consider follow-up examinations at 28-30 and 32-34 weeks of gestation to confirm the diagnosis, to locate the placenta precisely, and to assess a possible bladder invasion (1,5,6). During the second and third pregnancy trimesters, the placental invasion manifests itself as irregular vascular lacunar spaces in the body of the placenta. During the Doppler scan, a turbulent blood flow in placental lacunar spaces is recorded, along with thinning or absence of the retroplacental hypoechogenic line, uneven thinning of the myometrium, and protrusion of placental tissue towards the posterior wall of the bladder with uneven thinning of the uterine-bladder interface and bright blood flow (1,5). The presence of the lacunar spaces (irregular vascular areas similar to "Swiss cheese" in the placental implantation area) in the placenta and an increase in their number during 15-20 weeks of pregnancy are very important prognostic signs of placenta accreta (sensitivity of 79% and positive predictive value of 92%) (5). The more lacunar spaces are present, the more likely the placental invasion into the nearby tissue (7). A myometrial thickness below 1 mm is also a negative prognostic indicator (5). According to some authors, the usage of this parameter is doubtful because, towards the delivery deadline, the wall of the lower uterine segment gets thinner naturally. However, it has been found that this indicator is characterized by 100% sensitivity and by 72% specificity (8). Marked thinning of the uterine wall may be highly threatening. In clinical practice, the placenta accreta index may be helpful in interpreting various sonographic and anamnestic factors (7). In this particular case, every risk factor was evaluated (two prior caesarean deliveries, the placenta on the anterior wall in the scar area) together with the ultrasound findings (lacunar spaces all over the placenta, smallest myometrial thickness of 1 mm and no bridging vessels), and the placenta accreta index was calculated to be 8. This corresponds to a 91% probability of placental invasion, with a sensitivity of 24% and a specificity of 100% (7). The criteria of placental invasion as seen in the Doppler scan indicate abnormal hypervascularization of the tissue (the myometrium-bladder interface), enlarged diffuse lacunar spaces throughout the area of the placenta reaching the myometrium and the cervix, low-resistance arterial blood flow, enlarged venous-type flow in blood vessels, and locally extinct vascular tone in the hypoechogenic subplacental gap.
It is highly important to identify pathological blood flow between the uterus and the bladder wall. This is one of the best indicators for invasive placenta diagnostics. The sensitivity of colour Doppler imaging in the diagnosis of placenta previa accreta was 82.4% and the specificity was 96.8%. The positive and negative predictive values were 87.5% and 95.3%, respectively (9). MRI is not the first-choice examination due to its high cost, accessibility, and convenience. It is most commonly used when placenta percreta is suspected, or when ultrasound examination fails to confirm or rule out this diagnosis, as well as before planned surgical treatment. MRI is now described as an examination that better predicts the topography of placental tissue invasion. Distinctive MRI signs of the invasive placenta are intensive heterogeneous placental signals, dark intraplacental bands on T2-weighted images, abnormal placental vascularity, local interruptions in the myometrial wall, and directly visible placental tissue invasion into the nearby pelvic tissues, especially into the bladder (1). This clinical case shows that ultrasound and MRI findings complement each other, allowing the confirmation of the diagnosis of placenta percreta and the planning of optimal treatment.

The optimal surgical approach during caesarean hysterectomy remains controversial. It is believed that using balloon catheters to occlude the uterine or internal iliac arteries, and sometimes the common iliac artery (10), decreases blood loss in the case of placenta praevia percreta. In the case of slow oozing without a clearly identified source of bleeding after hysterectomy and deflation of the internal iliac artery balloons, ligation of one of the internal iliac arteries may be reasonable to avoid relaparotomy. Prolonged occlusion of the internal iliac arteries or common iliac arteries may be associated with a reperfusion injury, thrombosis, and the formation of embolisms in the lower extremities. Therefore the occlusion time should be as short as possible (10). Moreover, there is no reported maternal or fetal mortality related to prophylactic balloon occlusion of the internal iliac arteries (11).

CONCLUSIONS

Antenatal diagnosis of placenta percreta is especially important. Methods of visual diagnostics are complementary. The optimal surgical approach during caesarean hysterectomy remains controversial. In the case of slow oozing without a clearly identified source of bleeding after hysterectomy and deflation of the internal iliac artery balloons, ligation of one of the internal iliac arteries can be reasonable to avoid residual haemorrhage and relaparotomy.
2018-04-03T00:41:57.434Z
2017-01-11T00:00:00.000
{ "year": 2016, "sha1": "5ab34f908d94a5aaa3290c4b4a2157ee4c3f05e0", "oa_license": null, "oa_url": "https://doi.org/10.6001/actamedica.v23i4.3423", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5ab34f908d94a5aaa3290c4b4a2157ee4c3f05e0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
212694910
pes2o/s2orc
v3-fos-license
Logic of Social Ontology and Łoś's Operator

In 1947 Jerzy Łoś proposed a positional logic based on the realization operator. We follow his work and present it in the context of fundamental challenges of sociology such as the complexity of social reality and the reflexivity of social agents. The paper is an outline of the general concept, as it opens a discussion and sets ground for future elaborations. In this paper, we are considering the concept according to which the expressions put forward by Łoś's system might be indexed not only by spatial and temporal variables, but also by social contexts. As such, Łoś's system might be a significant improvement, a valuable addition for social simulations and computational sociology, which use multi-agent systems and agent-based modeling. We consider how Łoś's operator might be useful for these disciplines, as it gives a chance to combine formalization with the humanistic coefficient, which represents the issues of complexity and reflexivity of social agents.

Aim of the paper

Seventy years ago, Jerzy Łoś proposed the first positional logic that included a temporal parameter of physical events, which would be useful, in his opinion, for natural science [18]. Łoś's idea has been largely forgotten; however, some interesting research and in fact some developments of his idea have occurred in different contexts, which are presented in further parts of this article. It must be underlined that no logic inspired by Łoś's positional logic was dedicated to the problems of reasoning about social phenomena. In Łoś's own works this logic was intended for natural science and it was further applied to philosophical problems. 1 Below we refer to some motivations and intentions of Jerzy Łoś:

An analysis of inductive reasoning that leads to setting causal relations within natural sciences is a starting point in Łoś's theory. Let us consider a simple example of inductive reasoning: It flashes and thunders, again it flashes and thunders, it still flashes and thunders; so, if it flashes, then it thunders. Łoś noticed that empirical causal sentences that serve as premises or conclusions in inductive reasoning contain a moment of time after-effect. A sentence "it flashes and thunders" does not mean that it flashes and thunders at the same time, but that it flashes and in a moment it thunders. When it flashes, it usually does not simultaneously thunder. One could then say that when it flashes, it thunders and does not thunder. The conclusion is paradoxical. The source of this paradox is a lack of time and space coordinates. When we take these coordinates into account, then we can say that when it flashes in place s and in time t1, then it thunders in place s at a certain time t2, which is later than t1 [...]. [12, p. 39]
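A compact way to see how the missing coordinates dissolve the paradox is to index each sentence with the realization operator. The formalization below is our own illustration; the symbols R, s, t1, t2 follow the quoted passage, though the exact original notation may differ:

```latex
% Without coordinates, induction yields the paradoxical
%   flashes -> (thunders and not-thunders).
% With positional indices the causal sentence becomes consistent:
\forall s\, \forall t_1\, \exists t_2\;
  \big( R_{(s,\,t_1)}(\text{it flashes})
    \rightarrow R_{(s,\,t_2)}(\text{it thunders}) \land t_1 < t_2 \big)
```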
There are two quite extreme options here, which together create a continuum. The first one is to follow the path of so-called social physics, which concentrates exclusively on grasping objective human behaviors and activities, without any focus on their convictions, knowledge, etc. Such a perspective enables one to sustain the standards that are typical of natural science, but it largely ignores an important social context. Another option is to focus on the world of meanings in which individuals and collectives operate. This means going towards the humanities and hermeneutics, which offer limited possibilities for making generalizations and accumulating knowledge. Our proposal is an attempt to build a bridge between these two options. It seems that the grammatical constructions typical of positional logic make it possible to express social contexts which are complex in their nature. This complexity is built upon many elements concurrently influencing each other (e.g., individuals, social roles, cultural patterns, social positions) and the reflexivity of individuals, who are able to evaluate their own activities and to change their subjective convictions into objective behavior. The first option described above is related to the possibility of analyzing social systems by multi-agent systems (MAS). Within the MAS analysis, the concept of an agent is broadened from an individual to different social levels such as institutions, organizations, and social groups [28]. At each level, agents are active; they judge their own situation and take actions while evaluating their own interests and subjectively monitoring their social context. At the same time, in the MAS analysis there is the question of subordinating individual agency to collective agents. A society is not a simple aggregate of individual characteristics. Unintended macrosocial consequences of microsocial actions are a constantly present part of every social system. In MAS computer studies, researchers discovered that the behavior and actions taken by computer agents are also hard to predict and are not just an aggregate of the features of specific agents in a system. MAS is the approach that tries to elaborate solutions for the reality of open systems, where agents/participants are heterogeneous, express limited trust and have conflicting interests. As M. Dastani and co-authors show [3], MAS studies understand the need to bring such sociological categories as norms, roles, and power structures into formalization and simulations. An analogous idea, the need to create simulations and models of social action, has arisen within sociology. The aim of such simulations was to gain a better understanding of transitions from the micro to the macro level. Another one was to capture reproducible social mechanisms. Simulations were developed within computational social science [16] with empirical data as the basis of analysis. On the other hand, they were used by researchers of agent-based modeling, agent-based simulations and artificial societies such as Joshua Epstein [4] or Michael W. Macy [20]. These researchers have overlooked the issue of agents' rootedness in the multidimensional world of meanings and social contexts. Our article, while relating to the above studies, ideas and challenges, sets an agenda for further application of positional logic to social studies, specifically, to sociology.
It demonstrates that Jerzy Łoś's idea, together with the operator of realization and modifications of some approaches in positional logic, can be carried over into the reality of social contexts and social simulations.

Łoś's operator of realization

In 1947, Łoś published the work on temporal logic "Podstawy analizy metodologicznej kanonów Milla" [18] (Foundations of methodological analysis of Mill's canons) and, a year later, an article about epistemic logic entitled "Logiki wielowartościowe a formalizacja funkcji intensjonalnych" [19] (Multi-valued logics and formalization of intensional functions). 2 Łoś's works were published in Polish, but short reviews by Henryk Hiż [8] and Roman Suszko [33] made them accessible to a wider audience. Although his work on temporal logic influenced the creation of this separate domain in logic, and his work on epistemic logic was one of the first to appear, Łoś's accomplishments unfortunately were quickly forgotten within the English-speaking academic world (see [37]). In both of his works Łoś used an original grammatical construction for expressing relations between sentences and their context. It was called the operator of realization R. 3 If α is a term and p is a proposition, then R_α(p) is also a proposition. The operator of realization R connects names with sentences and creates new sentences. Since α is sometimes called a position, logics with Łoś's operator are called positional logics. In the article "Logiki wielowartościowe a formalizacja funkcji intensjonalnych" [19] (Multi-valued logics and formalization of intensional functions) Łoś applied the realization operator to model the knowledge of a subject. So the sentence "R_a(p)" represents the fact that an agent a asserts/knows that p. With an axiomatic system, Łoś proposed a very idealistic concept of a rational subject of knowledge (see [17]). The connection between a subject and a judgement can be approached differently when interpreted as a less classical kind of knowledge or simply a propositional attitude of an agent a. 4 In Łoś's work on temporal logic, the position a in a sentence R_a(p) is interpreted as a temporal object, a point in time or moment at which a sentence p is true. We will go back to this work and its approach, as it inspires us to apply the operator R in a broader context, without omitting any knowledge or propositional attitudes of an agent. Before doing so, we would like to examine other philosophical and logical interpretations of the operator of realization. Sentences that are in the range of the operator R can be interpreted in different ways; the interpretation depends on how we understand the denotation of the individual a in the expression R_a(p). This denotation is always a kind of context to which proposition p is referred (for example, p can hold, be true, be known, be part of a set of beliefs, etc.). In the literature, contexts such as the following have been proposed:
• temporal: moments or some kind of intervals
• spatial: points or certain parts of space
• epistemic: minds of agents
• mathematical: solutions to some equations [26,27,12].
Let us consider such expressions as:
(0) R_2018(It rains)
(1) R_Toruń(It rains)
(2) R_Jan(It rains)
(3) R_{x=8}(3 + 5 = x)
Expression (0) says that the sentence It rains is realized in a temporal context denoted by the date 2018. Expression (1) says that the sentence It rains is realized in a spatial context denoted by the name of the place Toruń.
Expression (2) says that the sentence It rains is realized in the epistemic context denoted by the name Jan, or, to put it more philosophically, is the subject of a certain propositional attitude of Jan (knowledge, belief, doubt, etc.). Finally, expression (3) says that the sentence 3 + 5 = x is valid in the arithmetical context where x = 8. All these examples show how flexible the operator of realization is and how many meanings and applications it has when we interpret the relation of the name a and the sentence p in the expression R_a(p) as:
• the statement that the proposition denoted by sentence p, in a context denoted by name a, has a certain property.
This property can be a logical value, being the subject of someone's propositional attitude, or possibly another property. For example, in [11] an interpretation of the operator R was proposed according to which the realization of a sentence p at a position a means that there exists a position b that stands in a binary relation with a, and p is true at b. However, we must admit that the most frequent interpretation of a position/context is a temporal one, as a moment or an interval of time; see [27,10,13]. It is a natural context of use of the realization operator, because of Łoś's groundbreaking work on temporal logic [18]. Let us accept for further considerations that the operator R relates a sentence to a context denoted by a name. Therefore, a position in positional logic is a kind of context. From our point of view, social contexts are specifically interesting. The realization operator is closely related to the investigation of indexicals initiated by Bar-Hillel in the book [2] published in 1971. Also in this respect Łoś remained ahead of others for decades. David Kaplan in [14], Richard Montague in [23], Dana Scott in [30] and Robert Stalnaker in [32] proposed logical systems simulating the way contexts act. A review of how to get a proposition c(P) expressing the content of the sentence P uttered in context c can be found in [21]. The problem is that a content might be so complex that, as Bar-Hillel has argued, a satisfactory definition of context is unlikely to be given. In any case, we can agree that the context of an utterance is determined by the circumstances of the utterance. By knowing them, we know the context. Łoś's realization operator is applied to special cases of a context. However, one could hardly agree that they are similar to pragmatic contexts. Further in this paper, we consider the realization operator based on social contexts, which are quite far from the notion of context known from pragmatics. For this reason, we only note some similarities between the two concepts.

Logics of the R-operator

The logic MR is the simplest positional logic extracted from Łoś's temporal logic [9]. Its alphabet is built from sentential letters Var, positional letters Pl, the classical propositional connectives ¬, ∧, ∨, →, ↔, the operator of realization R, and brackets ), (. In this logic, pure formulas of the Classical Propositional Language have the function of quasi-formulas, i.e., their role is to build correct formulas. Let us assume that the set For_CPL contains only the sentential letters Var and expressions of the form ¬φ, (φ ∧ ψ), (φ ∨ ψ), (φ → ψ), (φ ↔ ψ), where φ and ψ belong to For_CPL. On the other hand, the set of correct formulas For_MR, i.e., the formulas of MR, contains only expressions of the form R_α(φ), where α ∈ Pl and φ ∈ For_CPL, together with their classical combinations. MR logic excludes the nesting of the operator R in the manner that happens in the case of sentences such as R_α(R_β(p)), where contexts are within contexts.
In [9] the logic MR has been axiomatized with modus ponens (from A and A → B, infer B) and with four schemas of axioms. The first schema is defined by the condition that every substitution of any CPL tautology with formulas of MR is an axiom of MR:
(Ax1) s(A), if A ∈ Taut_CPL and s is a function substituting sentential letters with formulas of MR.
For any formulas A, B ∈ For_CPL and any positional letter α ∈ Pl it is also accepted that:
(Ax2) R_α(¬A) ↔ ¬R_α(A)
(Ax3) R_α(A → B) ↔ (R_α(A) → R_α(B))
Additionally, if A is a CPL tautology, then every formula formed by the realization operator and any positional letter α ∈ Pl is an axiom:
(Ax4) R_α(A), if A ∈ Taut_CPL.
Three kinds of semantics were proposed for such an axiomatized logic. Firstly, there were models ⟨W, d, v⟩, where W is a non-empty domain, d is a mapping from Pl to W, and v is a classical valuation of CPL formulas at objects from W, i.e., a mapping v : W × For_CPL → {0, 1}. In [12] it was noted that such models are a bit redundant. It is enough to assume models with a valuation v : W × For_CPL → {0, 1} that fulfils the classical conditions only for those w ∈ W such that d(α) = w for some α ∈ Pl. So the objects that are not denoted by any terms do not have to behave classically with respect to v. In the same book, an alternative semantics based on valuations only was also proposed [12, p. 92]. In the work cited above, the authors propose that a positional logic is normal if the operator R is distributive over all classical connectives, i.e., if for all A, B ∈ For_MR and for * ∈ {∧, ∨, →, ↔} the following laws hold:
R_α(¬A) ↔ ¬R_α(A) and R_α(A * B) ↔ (R_α(A) * R_α(B)).
Logic MR is the least normal positional logic. Consequently, this means that each positional logic that respects classical logic within positions as well as outside the operator R must include the logic MR. MR is also maximal in the sense that one cannot extend it with additional formulas stating something about one position without inconsistency [15]. It is possible that normal positional logics are too strong for the social sciences. However, in our opinion, this is not the case. They can be weakened, for example by the use of many-valued semantics [34], algebraic semantics [36] or other techniques [11]. From a logical point of view both former proposals seem interesting, but many-valuedness in a logic for the social sciences may be introduced in a different way than by the use of weaker outer connectives. An important modification of the positional logic language is to add quantifiers and variables that denote positions to which sentences are referred by R. Such an extension of MR is the logic MRQ. In its language, there are quantifiers, function constants and predicates. The logic was examined in [12]. In fact, MRQ is a combination of First Order Logic and MR, because in this logic we can quantify over positions and express different properties of positions with predicates. Clearly, MRQ is undecidable as it includes First Order Logic, unless we limit its language to monadic predicates. It is worth noting that the nesting of R is also forbidden in MRQ. In particular, one can say nothing about relations between positions and cannot quantify in the range of R. So, in the language of the logic MRQ there do not appear expressions such as "R_α(r(β1, β2))" and "R_α(∃x R_x(A))", where "r(β1, β2)" states that the relation r holds between positions β1 and β2, whereas "∃x R_x(A)" says that there exists a position x such that A ∈ For_CPL is realized at x, and both facts happen at position α. The language of the first positional logic, the system that was designed for the natural sciences, was defined in a different way. In the article [18], Jerzy Łoś also did not accept the nesting of R.
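To make the semantics just described concrete, below is a minimal sketch of an MR model and its valuation in Python. This is our own illustration: the structure ⟨W, d, v⟩ and the evaluation rules follow the description above, while all names and data are invented.

```python
# Minimal sketch of MR semantics: a model <W, d, v> where d maps positional
# letters to objects of a domain W, and v classically evaluates CPL
# quasi-formulas at each object of W.

from dataclasses import dataclass

# CPL quasi-formulas as nested tuples:
#   'p'             - sentential letter
#   ('not', f)      - negation
#   ('and', f, g), ('or', f, g), ('imp', f, g) - binary connectives

def eval_cpl(formula, world_valuation):
    """Classically evaluate a CPL quasi-formula at one object of W."""
    if isinstance(formula, str):
        return world_valuation[formula]
    op = formula[0]
    if op == 'not':
        return not eval_cpl(formula[1], world_valuation)
    if op == 'and':
        return eval_cpl(formula[1], world_valuation) and eval_cpl(formula[2], world_valuation)
    if op == 'or':
        return eval_cpl(formula[1], world_valuation) or eval_cpl(formula[2], world_valuation)
    if op == 'imp':
        return (not eval_cpl(formula[1], world_valuation)) or eval_cpl(formula[2], world_valuation)
    raise ValueError(f"unknown connective: {op}")

@dataclass
class MRModel:
    d: dict  # positional letters -> objects of W
    v: dict  # objects of W -> {sentential letter -> bool}

    def realizes(self, alpha, formula):
        """Truth of R_alpha(formula): evaluate the quasi-formula classically
        at the object denoted by the positional letter alpha."""
        return eval_cpl(formula, self.v[self.d[alpha]])

# Illustration (all names invented): 'It rains' realized at Torun, not in 2018.
model = MRModel(d={'torun': 'w1', 'y2018': 'w2'},
                v={'w1': {'rain': True}, 'w2': {'rain': False}})
assert model.realizes('torun', 'rain')
assert model.realizes('y2018', ('not', 'rain'))   # matches R_a(~A) <-> ~R_a(A)
```

Note that evaluating a negated quasi-formula inside the position agrees with the normality law R_α(¬A) ↔ ¬R_α(A) stated above.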
However, he additionally introduced quantifiers over temporal intervals and over propositions in the range of R. In Łoś's language such expressions as "∃x ∀p R_x(p)" were correct. Let us remember that in that paper, positions are moments in time. Therefore, propositions happen at moments of time. Moreover, Łoś applied the binary functional constant δ that shifts the time line. For example, the expression "R_δ(x,ε)(p)" says that sentence p is true at the moment that appears after the move by the time interval ε from moment x, i.e., at the moment denoted by "δ(x, ε)" (after the length of ε starting from x; time for Łoś is representable by the real number line). The other important grammatical feature of Łoś's system is that sentences belonging to For_CPL are present in the language. The use of these sentences means that their truth is settled regardless of the context. In the work on epistemic logic [19] Łoś simplified the language (compared to his previous works), but we are not considering this issue here. Łoś's works were an inspiration to many logicians, including Prior, the founder of tense logic. Other modifications and applications of the realization operator can be found in works by Garson and Rescher [7], Rescher and Urquhart [27] and Rescher [25]. In particular, Prior's work [24] was clearly inspired by Łoś's work. This case, a complex one but important for the history of temporal logic, was described in [12, pp. 15-16]. It shows that Łoś was the founder of temporal logic.

The challenges of reasoning about social phenomena

We would like to propose the broadening of Łoś's concept from physical phenomena to social ones. There are many indications that the grammatical construction of the operator of realization makes it possible to do so. There are two reasons for constructing such a logic: a cognitive one and a practical one. The cognitive reason refers to the possibility of making assumptions for formal models of specific social phenomena. Such models should include variables that have up to now been mostly omitted in formalizations and simulations of social processes. This has happened because they were either too complex or the syntax of the formal language was not sufficiently flexible. The modeling of social phenomena still faces many barriers. At the cognitive level, sociology struggles with the constant problem of complexity, which lies mainly in the nesting of individual activities in broader social contexts and in understanding the interactions that happen between them. Meanwhile, well-known simulations of social actions and efforts to model social processes have a rather individualistic character, i.e., they deal with agents who act upon simple rules, not with agents who are deeply immersed in a broader social context. One of the reasons for this situation is that in simulations of social phenomena we can see the domination of the tradition that refers to the frequently cited research by Thomas Schelling [29]. He proposed a model of spatial segregation which shows how complex phenomena are an outcome of rather simple social interactions. Schelling's aim was to show that spatial segregation in cities can emerge spontaneously, without being driven by pro-discrimination attitudes amongst the citizens. He created a simulation in the form of a field with 208 squares (13 rows and 16 columns), where some squares were empty, but most were taken by agents marked with crosses or circles. At the beginning of the simulation, agents were distributed randomly.
But, rather quickly, new layouts emerged, where the space was divided between fields dominated by circles or by crosses. This happened because Schelling gave agents one simple conviction: they want at least 1/3 of their neighbours to be agents similar to them. When this desire was satisfied by relocating agents, it turned out that the two groups became separated (a minimal code sketch of this mechanism follows below). With this research Schelling started a whole series of social simulations, which were multi-agent simulations (see [4,20]). But he also provided a direction for further studies, which was to search for outcomes of agents' activity based on a few simple assumptions. In this way, these agents are far from the real-world agents it would be desirable to represent. This legacy is a source of trouble, especially for researchers who try to use simulations in empirical studies (see [5]). The passage from individuals' actions (actions of people as agents) to collective outcomes is still one of the great challenges of contemporary sociology. This problem is partially visible in the dichotomy between agency and structure. In sociological theories, individual activity, human agency and its individuality, clashes with the impact of the social structures which determine this activity. Therefore, sociology explains observable social processes by pointing to repeatable patterns of behavior. Sociology looks for such patterns and interprets them as the key determinant of actions taken by humans. In sociological theory, we find mostly deterministic explanations that still deal with the structure vs agency dualism (see [31]). However, sociological concepts which attempt to include a cultural dimension in the individual perspective are still present and alive. A classic example is the humanistic coefficient concept developed by Florian Znaniecki, who postulated the need not to limit researchers' observation only to their own direct experience of the data, but to reconstruct the experience of the people who are the subject of the research [39]. The humanistic coefficient concept assumes that individuals think about the consequences of their actions at the same time as performing them; they make generalizations about their goals and aims and they make their experience more objective in their own consciousness. This means that individual experience is treated as a collective one, as a commonly shared experience. Znaniecki's concept is on the one hand a kind of methodological postulate, but on the other hand an attempt to find a passage from micro- to macrosociological phenomena. This postulate is visible, although not directly expressed, in most qualitative research, but it is absent from attempts to model social reality. The concept of agency is another attempt to understand society in humanistic categories, specifically to see active individuals as a part of the morphogenetic processes of an emerging society. In this approach, an individual is mostly a social actor playing a specific social role. As an actor, however, one has some degree of freedom and the possibility to interpret socially imposed solutions. As Margaret Archer [1] puts it, it is a matter of the capability to reflect on one's own actions, which is the most important feature of humans. Reflexivity allows people to think about their own and others' activity, and to evaluate and make changes in collective action.
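To make the mechanism of Schelling's model described above concrete, here is a minimal sketch of such a simulation. The 13 × 16 grid and the 1/3 similarity threshold come from the description above; the fraction of empty squares, the random seed and all other details are our own illustrative choices, not Schelling's.

```python
import random

ROWS, COLS = 13, 16          # grid of 208 squares, as in Schelling's setup
THRESHOLD = 1 / 3            # desired fraction of similar neighbours

random.seed(0)
# ~20% empty cells, the rest split between 'x' (crosses) and 'o' (circles)
grid = [[random.choice(['x', 'o', 'x', 'o', None]) for _ in range(COLS)]
        for _ in range(ROWS)]

def neighbours(r, c):
    """Yield the contents of the (up to 8) adjacent cells."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and 0 <= r + dr < ROWS and 0 <= c + dc < COLS:
                yield grid[r + dr][c + dc]

def unhappy(r, c):
    """An agent is unhappy if fewer than THRESHOLD of its occupied
    neighbour cells hold agents of the same kind."""
    kind = grid[r][c]
    occupied = [n for n in neighbours(r, c) if n is not None]
    if not occupied:
        return False
    return sum(n == kind for n in occupied) / len(occupied) < THRESHOLD

for _ in range(200):         # relocation rounds: move one unhappy agent per round
    movers = [(r, c) for r in range(ROWS) for c in range(COLS)
              if grid[r][c] and unhappy(r, c)]
    empties = [(r, c) for r in range(ROWS) for c in range(COLS) if grid[r][c] is None]
    if not movers or not empties:
        break
    r, c = random.choice(movers)
    r2, c2 = random.choice(empties)
    grid[r2][c2], grid[r][c] = grid[r][c], None
```

Printing the grid before and after the loop shows the segregated clusters emerging from this single, mild preference, which is precisely the micro-to-macro effect discussed above.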
Social change processes, especially new institutional solutions, are, in Archer's opinion, an effect of structural work where reflections, thoughts and actions are accumulated and at the same time are an inspiration or an engine for further changes [1]. Simulations of social processes have not been able to include a concept of agency and have not taken into account the humanistic coefficient postulate. In the methodology of qualitative research, the idea of deep insight into individual interpretation is, of course, present and is used to reconstruct common social patterns. However, quantitative methods, which are a basis for computational sociology [16], are not sensitive at all to the problem of reflexivity. An application of Łoś's operator of realization and its logic makes it possible to cope with the complexity of social processes. This also, in our opinion, enables us to accommodate the postulate of the humanistic coefficient and makes possible the empirical use of agency theory. The practical reasons (for the whole concept of forming a special logic for sociology) are related to the possibility of building an ontology on the basis of empirical data (for example, in order to solve specific social issues). This concept is in a way a classical one. When there are formalized theories, one can reason about some directly unobserved processes and can make assumptions about how they will proceed. This is exactly the main aim of all simulations and models of social systems. So, our proposal can be applied to such areas of sociology as applied sociology, policy sociology, or clinical sociology, which are all oriented to using knowledge for practical solutions [6]. Any sociological intervention needs solid foundations. Such foundations should be driven by simulations that are as close to social reality as possible, and that demonstrate not only how things are, but also how they will proceed. One of the most important challenges that we see is also setting an agenda for future studies. Firstly, it is not enough to create a logic that will only describe people's behavior (with individuals presented as agents at a certain time and place). There is a need to create a logic which includes broader contexts such as culture (e.g., specific values), communities (e.g., forms of social control), institutions (e.g., informal rules), etc. Such a logic should also be able to describe passages between these contexts. This is the issue of social complexity, where many types of social relations and entities have to be taken into account. Secondly, the biggest challenge for a new logic is to combine the humanistic coefficient with formalization. In other words, it is the problem of how to grasp not only a behavior but also a set of beliefs as separate variables. This is the issue of the humanistic coefficient. It seems that both of these issues can be represented in positional logic.

Social phenomena in the context of the R-operator

In the object language with the R-operator it is possible to talk about sentences and the points of their realization. However, social processes take place not only in time but also (similarly to physical processes in Łoś's logic) in more complex contexts. Even the physical interpretation of Łoś requires that the position in the realization operator is composed of a time and a space parameter. It can look like this: "R_⟨t,x,y,z⟩(p)", where ⟨t, x, y, z⟩ is a time-space context, an event is described by a sentence p, while t is a time context and x, y, z are the three dimensions of physical space.
When positions have set places under the operator R, it can be presented as "R_t,x,y,z(p)". As mentioned before, social complexity has two dimensions. One is the quantity of components; the other is the process of the interlocking of these components considered from the humanistic perspective. There is a certain similarity of this proposal to pragmatic contexts of statement interpretation. Dana Scott in [30] proposed understanding the pragmatic context of a statement interpretation as an n-ary ordered tuple ⟨w, t, x, y, z, a, . . .⟩, where w is a possible world, t is time, x, y, z is the place which the interpretation refers to, while a, . . . is a set of other parameters which are necessary for a given utterance to become a logical proposition equipped with a certain logical value (for example, who speaks, to whom, etc.). It is an attempt to formalize a notion of context, which helps to pass from an utterance like It rains to a sentence that has a logical value. In order to express complex social contexts, and with a mechanical, quantitative understanding of complexity, one must assume that positions in the scope of the operator R are similar to Scott's determinants of a pragmatic context. The statement "R_x1,...,xn(p)" means that a phenomenon described by a sentence p has happened in the context of the variables x1, . . . , xn, each ready to be interpreted. With this, in a single description we capture the complexity of a phenomenon. If social theories were expressed in positional logic, contexts could be determined and the variables x1, . . . , xn could reflect the social-world properties of the place where an event described by the operator of realization takes place. On the other hand, the nesting of the operator of realization allows the inclusion of agential aspects of social phenomena, which is the second dimension of complexity. Hence, in our formal language, objective complexity can be expressed (by showing an aspect of knowledge, beliefs, position in a group, etc.), while at the same moment the humanistic coefficient is considered and the aspect of the interpretation of social phenomena by their participants is added. For example, the way participants in a stock exchange perceive its condition affects its further condition and performance. Similar remarks apply to banks or any other financial institution. A description of social phenomena is accurate when not only the participants (individuals, groups, institutions) are described, but also their beliefs about contexts are included. Therefore, we deal with social phenomena that happen within other phenomena (additionally, we have a certain feedback loop). This problem was well described (although without any formalization) by Robert Merton in the self-fulfilling prophecy concept. Merton's idea is rooted in the sociological category of the definition of the situation proposed by Znaniecki's collaborator, William Thomas, who wrote that "If men define situations as real, they are real in their consequences" [38]. Hence it is possible for false subjective beliefs of individuals to turn into objective truths. Merton shows this in the story of a bank's bankruptcy: false assumptions about reality affect people's activity, which turns these assumptions into true ones. Assuming that a bank will collapse, we withdraw our money and at the same time we lower the bank's capital and speed up its real bankruptcy [22]. Therefore it seems that an iteration of the operator R is required, i.e., its multiple application with nesting.
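As an illustration of how such nested realizations might be encoded computationally, here is a toy sketch. The encoding is entirely ours, not a formalism from the literature; the bank-run names allude to Merton's example above.

```python
# Toy encoding of nested realization: R_{x1,...,xn}(body), where body is
# either an atomic sentence or another realization term.

from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class R:
    contexts: Tuple[str, ...]       # positions x1, ..., xn
    body: Union[str, 'R']           # atomic sentence or a nested R-term

def depth(term) -> int:
    """Nesting depth of the realization operator in a term."""
    return 0 if isinstance(term, str) else 1 + depth(term.body)

# Merton's loop, roughly: in the context of bank b there is a depositor
# belief bel that "the bank collapses" -- a subjective conviction that,
# once acted upon, feeds back into the objective level.
belief = R(('bank_b', 'belief_bel'), 'bank_collapses')
# That belief itself held within a wider context (e.g., a stock exchange):
meta = R(('stock_exchange_s',), belief)

assert depth(belief) == 1 and depth(meta) == 2
```

The point of the sketch is only that beliefs-about-beliefs are representable as ordinary data, so there is no technical obstacle to arbitrarily deep (though always finite) nesting.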
For example, consider an expression that can be read as follows:
• on stock exchange s there is a belief b that the owner o of shares sh thinks (has belief b′) that stock exchange s will go down (p), but shares sh on stock exchange s will not go down (¬p),
when we accordingly interpret the positions s, b, o, sh, b′. On the other hand, another utterance complements the above with the belief that (as the right conjunction argument):
• on stock exchange s′ it is thought (belief b′′) that the owner o of shares sh thinks (belief b′′′) that stock exchange s will go down (p), and shares sh on stock exchange s will go down (p), too,
where, additionally, we accordingly interpret the positions s′, b′′, b′′′. What if we would like to express that the last utterance (belief b′′′′) belongs to owner o′ in the context of stock exchange s? Our language allows that as well. The iteration of contexts, especially nesting, is a basic tool for solving the problem of the humanistic coefficient in the formalization of this type of presumption. Each sentence describing social complexity can be expressed in a certain social context. Although the operator R can be nested only finitely many times (expressions of a positional logic dedicated to social science are finite strings of symbols), there is no limit to the levels of nesting. In the language that we have proposed, any social phenomenon can be considered from a broader context. The problem of the nesting and iteration of the operator R can be found in the literature on positional logic. Most often, it is presented from the perspective of the temporal interpretation of the operator of realization R, in the renowned works [26,27]. There, the semantics of certain positional logics identifies positions with numbers. So, iterations can be reduced to arithmetic operations. For example, when a sentence has two moments of time, "R_t1(R_t2(A))", with the assumption that t1 and t2 denote numbers from a set closed under addition +, we can reduce the above utterance to the expression "R_t1+t2(A)". For example, it can mean that if in two days (t1) it holds that in three days (t2) it will be that A, then in five days (t1 + t2) it will be that A. The context has a temporal aspect, an important one, but not the only or the most significant one. Therefore, we need more general interpretations of the nesting of contexts than arithmetical ones. In [12] there is a review of approaches to nesting R and proposals for new solutions. Generalizations related to complex social contexts, such as positions x1, . . . , xn, where all or some parameters xi, 1 ≤ i ≤ n, can appear in other nestings, have never been considered before. However, this requires further studies and logical examination.

Perspective of future research

The broadening of the positional logic language with positions for complex contexts, as well as their iterations, makes it possible to describe social systems with their ontological (mechanical) and humanistic complexity. This will be of use to theories that seek to describe complex social phenomena. It will permit more accurate modelling of contexts in which many agents participate in collective action. However, it is possible that in order to investigate such systems, it might be necessary to introduce non-classical logic mechanisms such as non-classical reasoning. It is worthwhile to keep a classical understanding of the connectives, at least outside of the R operator's range.
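Returning to the arithmetical reduction of nested temporal contexts discussed above, here is a minimal sketch, assuming, as above, that positions denote numbers from a set closed under addition.

```python
def reduce_temporal(nested):
    """Collapse R_{t1}(R_{t2}(... R_{tk}(A) ...)) to R_{t1+...+tk}(A),
    for purely temporal positions that compose by addition.
    `nested` is a pair (ts, A), with ts the list of time shifts."""
    ts, sentence = nested
    return (sum(ts), sentence)

# "in two days it holds that in three days A" reduces to "in five days A"
assert reduce_temporal(([2, 3], 'A')) == (5, 'A')
```

For the general social contexts discussed above no such collapse is available, which is exactly why more general treatments of nesting are needed.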
Simultaneously, sentences about occurring phenomena can be uncertain, i.e., neither completely true nor completely false but possessed of some degree of truth (or certainty). Therefore, we could introduce the notion of a degree of certainty of a phenomenon's occurrence. Let v0, . . . , vi, . . . , vm denote an order of certainty, where v0 means the phenomenon does not occur at all and vm means it surely occurs. The symbol vi, where 0 < i < m, means that the phenomenon occurs to a certain degree (of truth or certainty) between the classical values. As a consequence of our deliberations, we can introduce into the language of our logic expressions "R_x1,...,xn,vi(A)", where x1, . . . , xn are the social contexts in which a phenomenon occurs and is described by a sentence A, while vi (1 ≤ i ≤ m) denotes the logical value vi of the sentence "R_x1,...,xn(A)". With this we get a complex logic. It is classical at the object level, but multi-valued or fuzzy at the level of nested positions. Another important tool for describing social phenomena is probability. It is quite similar to many-valuedness, although it is not the same. In the probability approach, we also assign numbers from the interval [0, 1] to phenomena. However, probability and many-valuedness differ at the level of meaning. Nevertheless, the logic considered here allows us to include a position for a probability measurement in the range of the operator R. With this we get a logic with a probabilistic interpretation of social phenomena, but it requires some changes. To sum up, in this article we are setting an agenda. It seems that the potential to use Łoś's logic for modeling social processes is extensive. The possibility of expressing the complexity of social reality, its multi-dimensionality and the correlations between agents is crucial for the general attractiveness of such a formalization, especially for future developments and improvements of agent-based systems in sociology and the MAS analysis, which constantly seek ways to implement the reflexivity of agents.
2020-02-27T09:17:43.670Z
2020-02-25T00:00:00.000
{ "year": 2020, "sha1": "c6715136a36777bba656605f6c96672926289cf9", "oa_license": null, "oa_url": "https://apcz.umk.pl/czasopisma/index.php/LLP/article/download/LLP.2020.005/24913", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d2dcaca9b2e2fae219d0261a893cdd1a07d974a7", "s2fieldsofstudy": [ "Philosophy", "Sociology" ], "extfieldsofstudy": [ "Computer Science" ] }
245807503
pes2o/s2orc
v3-fos-license
GaAs nanowires on Si nanopillars: towards large scale, phase-engineered arrays

Large-scale patterning for vapor–liquid–solid growth of III–V nanowires is a challenge given the required feature size for patterning (45 to 60 nm holes). In fact, arrays are traditionally manufactured using electron-beam lithography, for which processing times increase greatly when expanding the exposure area. In order to bring nanowire arrays one step closer to the wafer scale, we take a different approach and replace patterned nanoscale holes with Si nanopillar arrays. The method is compatible with photolithography methods such as phase-shift lithography or deep ultraviolet (DUV) stepper lithography. We provide clear evidence of the advantage of using nanopillars as opposed to nanoscale holes, both for the control of the growth mechanisms and for the scalability. We identify the engineering of the contact angle as the key parameter for optimizing the yield. In particular, we demonstrate how nanopillar oxidation is key to stabilizing the Ga catalyst droplet and engineering the contact angle. We demonstrate how the position of the triple phase line at the SiO2/Si as opposed to the SiO2/vacuum interface is central for a successful growth. We compare our experiments with simulations performed in Surface Evolver™ and observe a strong correlation. Large-scale arrays using phase-shift lithography result in a maximum local vertical yield of 67% and a global chip-scale yield of 40%. We believe that, through greater control over key processing steps typically achieved in a semiconductor fab, it is possible to push this yield to 90+% and open perspectives for deterministic nanowire phase engineering at the wafer scale.

Atomic Force Microscopy of Spin-coated Pillars. Spin-coating of ZEP (20% dilution in anisole) at 2500 rpm. This step appears to be important for the correct opening of the SiO2/Si pillars: in fact, the observed meniscus prevents the exposure of the sidewalls to the reactive ion etching (RIE) plasma, permitting a flat and directional etch. Nevertheless, this was observed to be true only for relatively short etchings (<5 min). For long etchings, i.e., long enough to etch the ZEP on the sidewalls, we found that spin-coating a thick layer of poly(methyl methacrylate) (PMMA, thicker than the SiO2/Si pillar height) and etching down with two RIEs, a first O2 plasma for the top PMMA and a second CHF3/Ar plasma for the oxide etching, yields better results. Knowing the pillar height from subfigure a and measuring the step height in subfigure b, we can extrapolate the spin-coated ZEP thickness; for those spin-coating parameters the value is close to 30 nm.

Characterization of Ga pre-deposition droplets on 10 nm oxide. The following SEM images were used, along with the software Dropsnake™, to obtain an estimation of the Ga droplets' contact angles. This Ga pre-deposition follows the same Ga flux and substrate temperature as the NW growths performed in the main study, for the sake of comparison. The measurement accuracy is ±5°.

The arrays are patterned by deep ultraviolet stepper lithography (DUVSL). They are exposed using an ASML™ PAS 5500/350C system and developed using a Süss Microtech™ ACS200 gen3 system. The resin used is M108Y at a thickness of 140 nm with a bottom anti-reflective coating (BARC) of 40 nm. The dose used is 14.5 µC/cm2 at a z = -0.2 µm focus. The sample is then introduced into an SPTS APS plasma etcher, where a 34 s CHF3/O2 plasma is performed for BARC etching.
The wafer is then introduced into an Alcatel™ AMS200 DSE RIE, where a customized recipe using SF6 and C4F8 is used for creating the pillars. An oxygen plasma is performed in a Tepla™ Giga-Batch system for stripping the resist/BARC. A buffered hydrofluoric acid (7:1) bath is then used for 2 minutes to remove any trace of resist/BARC. A thermal oxidation is then done at 900°C for a variable amount of time depending on the desired oxide thickness. Growth parameters are analogous to those of growth 1 presented in our main study. Subfigure a shows a cross-section view of the 105 nm nominal diameter array after GaAs VLS NW growth. We can see both tilted and vertical nanowires. The contact angle appears to be close to the one stable for wurtzite (WZ) during growth. Subfigure b shows a TEM micrograph of a transferred GaAs NW with its base SiO2/Si pillar. The oxide is hard to see given its very low thickness. Subfigures c and d show HRTEM images of a vertical NW on a Si pillar, showing a defect-free WZ crystal structure. Subfigure d also shows the image FFT confirming the WZ crystal structure. From SEM characterization, all the vertical NWs exhibited morphological characteristics analogous to the ones observed by TEM. This leads us to think that all vertical NWs from this growth are WZ, confirming the phase-engineering potential of the SiO2/Si pillar patterning method.
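As a side note on the contact-angle estimates mentioned in the supplementary description above: if a droplet is approximated as a spherical cap, the contact angle follows from the cap height and the base radius alone. The following is a minimal sketch; the spherical-cap assumption and the sample numbers are ours, and Dropsnake™ itself fits the full droplet profile rather than assuming a cap.

```python
import math

def contact_angle_deg(height_nm: float, base_radius_nm: float) -> float:
    """Contact angle of a spherical-cap droplet from its cap height h and
    contact-line (base) radius a, using the identity tan(theta/2) = h/a."""
    return math.degrees(2.0 * math.atan(height_nm / base_radius_nm))

# Hypothetical Ga droplet read off an SEM cross-section (numbers invented):
print(round(contact_angle_deg(height_nm=40.0, base_radius_nm=30.0), 1))  # ~106.3 deg
```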
2022-01-08T16:26:47.531Z
2022-01-06T00:00:00.000
{ "year": 2022, "sha1": "c08ca86b37c77f76ee168aa1794da47ab8a40364", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2022/nh/d1nh00553g", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "75aa55325e29d0aa65f2e32f9a6b3cfc57c308ce", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
126169769
pes2o/s2orc
v3-fos-license
On Long-Term Space-Charge Tracking Simulation

The nonlinear space-charge effects in high intensity accelerators can degrade beam quality and cause particle losses. Self-consistent macroparticle tracking simulations have been widely used to study these space-charge effects. However, long-term tracking simulation of these effects is computationally challenging. In this paper, we study a fully symplectic self-consistent particle-in-cell model and numerical methods to mitigate numerical emittance growth. We also discuss a fast alternative frozen space-charge model that has the potential to improve computational speed significantly.

Introduction

The nonlinear space-charge effects present a strong limit on beam intensity in high intensity/high brightness accelerators by causing beam emittance growth, halo formation, and even particle losses. Self-consistent macroparticle simulations have been widely used to study these space-charge effects in the accelerator community [1,2,3,4,6,5,7,8,9,10,11,12,13,14]. In some applications, especially in high intensity synchrotrons, one has to track the beam for many turns. Long-term space-charge tracking simulation is computationally challenging since, on the one hand, one needs to ensure the accuracy of the simulation results to avoid numerical artifacts, and, on the other hand, one would like to reduce the computing time for fast physics applications. The charged particle motion inside an accelerator follows classical Hamiltonian dynamics and satisfies the symplectic conditions. It is desirable to preserve the symplectic conditions in long-term numerical tracking simulation too. Violating the symplectic conditions in numerical integration yields unphysical results [15,16]. A gridless symplectic space-charge tracking model and a symplectic particle-in-cell (PIC) model were proposed in recent studies [17,18]. Even with the use of a symplectic space-charge model, there still exists artificial emittance growth caused by the smaller number of macroparticles used in the simulation compared with the real number of particles inside the beam. In this study, we propose a threshold filtering method to mitigate the numerical emittance growth. In order to improve computational speed in long-term tracking simulation, we also explore a frozen space-charge model.

Symplectic Particle-In-Cell Model

In the symplectic particle-in-cell (PIC) model, a single-step macroparticle advance can be given as:

ζ(τ) = M(τ) ζ(0) = M1(τ/2) M2(τ) M1(τ/2) ζ(0), (1)

where ζ denotes the macroparticle phase-space coordinates, the transfer map M1 corresponds to the single-particle Hamiltonian including external fields, and the transfer map M2 corresponds to the space-charge potential from multi-particle Coulomb interactions. This numerical integrator, Eq. 1, will be symplectic if both the transfer map M1 and the transfer map M2 are symplectic. For a coasting beam inside a rectangular conducting pipe, the space-charge potential can be obtained from the solution of the Poisson equation using a spectral method [18].
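A schematic of the split-operator step of Eq. 1 follows, as a sketch only: the maps below are simple stand-ins (a drift for M1, a generic coordinate-dependent kick for M2), not the actual maps of the codes cited above.

```python
import numpy as np

def m1_half_drift(x, p, tau):
    """Stand-in for the external-field map M1 applied over tau/2:
    here a plain drift x -> x + (tau/2) p."""
    return x + 0.5 * tau * p, p

def m2_kick(x, p, tau, grad_phi):
    """Stand-in for the space-charge map M2: a momentum kick from a
    coordinate-dependent potential gradient, with coordinates unchanged
    (a kick of this form is symplectic)."""
    return x, p - tau * grad_phi(x)

def step(x, p, tau, grad_phi):
    """One second-order step: M1(tau/2) M2(tau) M1(tau/2), as in Eq. 1."""
    x, p = m1_half_drift(x, p, tau)
    x, p = m2_kick(x, p, tau, grad_phi)
    x, p = m1_half_drift(x, p, tau)
    return x, p

# Toy usage: a linear "space-charge" gradient and 100 macroparticles.
rng = np.random.default_rng(1)
x, p = rng.normal(size=100), rng.normal(size=100)
for _ in range(1000):
    x, p = step(x, p, tau=0.01, grad_phi=lambda x: 0.5 * x)
```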
The one-step symplectic transfer map M2 of particle i for the space-charge Hamiltonian H2 leaves the coordinates unchanged and kicks the momenta:

x_i(τ) = x_i(0), y_i(τ) = y_i(0), p_xi(τ) = p_xi(0) − τ ∂H2/∂x_i, p_yi(τ) = p_yi(0) − τ ∂H2/∂y_i,

where both p_xi and p_yi are normalized by the reference particle momentum p0, K = qI/(2π ε0 p0 v0^2 γ0^2) is the generalized perveance, I is the beam current, ε0 is the permittivity of vacuum, p0 is the momentum of the reference particle, v0 is the speed of the reference particle, γ0 is the relativistic factor of the reference particle, S(x) is the unitless shape function (also called the deposition function in the PIC model), and φ is obtained by interpolating the grid solution of the Poisson equation with the shape function, where the integers I, J, I′, and J′ denote the two-dimensional computational grid indices, and the summations with respect to those indices are limited to the range of a few local grid points depending on the specific deposition function. The density-related function ρ̄(x_I, y_J) on the grid can be obtained from:

ρ̄(x_I, y_J) = Σ_j S(x_j − x_I) S(y_j − y_J),

where the sum runs over the macroparticles. In the PIC literature, a compact function such as a linear function or a quadratic function is used in the simulation. For example, a quadratic shape function can be written as [19,20]:

S(x) = 3/4 − x^2 for |x| ≤ 1/2, S(x) = (1/2)(3/2 − |x|)^2 for 1/2 < |x| ≤ 3/2, and S(x) = 0 otherwise,

with x measured in units of the grid spacing. The same shape function and its derivative can be applied to the y dimension. Using the symplectic transfer map M1 for the single-particle Hamiltonian including external fields from a magnetic optics code [21,22,23] and the transfer map M2 for the space-charge Hamiltonian, one obtains a symplectic PIC model including the self-consistent space-charge effects. As a test of the above symplectic PIC model, we compared this model with another gridless symplectic space-charge model and a nonsymplectic PIC solver. Figure 1 shows the emittance growth evolution through a FODO lattice with 85 degree zero-current phase advance and 42 degree depressed phase advance from these three models. These simulations used about 50,000 macroparticles. It is seen that the symplectic PIC model and the symplectic gridless particle model agree with each other very well. The nonsymplectic spectral PIC model yields significantly smaller emittance growth than the two symplectic methods, which might result from the numerical damping effects of the nonsymplectic integrator. The fast emittance growth within the first 20,000 periods is caused by the space-charge driven 4th-order collective instability. The slow emittance growth after 20,000 periods might be due to numerical collisional effects.

Mitigation of Numerical Noise Induced Emittance Growth

In long-term macroparticle space-charge tracking simulation, even with the use of a self-consistent symplectic space-charge model, there still exists numerical emittance growth. Figure 2 shows the four-dimensional emittance growth, (ε_x ε_y/(ε_x0 ε_y0) − 1)%, evolution of a 1 GeV, 30 A proton beam through 40,000 turns of a lattice that consists of 10 FODO elements (zero-current tune 2.417) with 25,000, 50,000, 100,000, 200,000, and 1.6 million macroparticles and 64 × 64 modes. The initial 0.5% jump of emittance growth is due to charge redistribution to match into the lattice. It is seen that with the increase of the number of macroparticles, the emittance growth becomes smaller. With the use of 1.6 million macroparticles, there is little emittance growth, which is expected in this linear lattice. The extra numerical emittance growth with a small number of macroparticles is due to the numerical collisional effect. This numerical collisional effect is caused by the artificially increased charge per macroparticle used in the simulation, since the number of macroparticles is much less than the real number of protons inside the beam.
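Before turning to the noise issue, here is a minimal one-dimensional sketch of charge deposition with the quadratic shape function written above. The grid size and particle data are illustrative; a real PIC code would also handle boundary conditions and normalization.

```python
import numpy as np

def quadratic_shape(u):
    """Quadratic spline shape function S(u), u in grid-spacing units:
    3/4 - u^2 for |u| <= 1/2, (1/2)(3/2 - |u|)^2 for 1/2 < |u| <= 3/2, else 0.
    Sampled on the integer grid, it sums to 1 over the three nearest points."""
    u = abs(u)
    if u <= 0.5:
        return 0.75 - u * u
    if u <= 1.5:
        return 0.5 * (1.5 - u) ** 2
    return 0.0

def deposit(positions, n_grid, dx):
    """Deposit unit-weight macroparticles onto grid points x_I = I*dx:
    rho(x_I) = sum_j S((x_j - x_I)/dx), summing over the 3 nearby points."""
    rho = np.zeros(n_grid)
    for xj in positions:
        i0 = int(round(xj / dx))                 # nearest grid index
        for i in (i0 - 1, i0, i0 + 1):
            if 0 <= i < n_grid:
                rho[i] += quadratic_shape((xj - i * dx) / dx)
    return rho

rho = deposit(np.random.default_rng(2).uniform(0, 1, 10000), n_grid=64, dx=1/63)
print(rho.sum())   # ~ number of macroparticles, minus small edge losses
```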
The small number of macroparticles enhances the fluctuation of the charge density distribution and induces numerical emittance growth. The numerical fluctuation can be smoothed out by using a numerical filter in the frequency domain. Instead of using a standard cut-off method beyond some frequencies, we propose using an amplitude threshold method to remove unwanted high-frequency noise. In this method, a mode with an amplitude below a threshold value times the maximum amplitude in the density spectral distribution is removed from the distribution. The advantage of this method is that, instead of removing all high-frequency modes, it keeps the high-frequency modes with sufficiently large amplitudes. These high-frequency modes can represent real physical structures inside the beam. Figure 3 shows the spectral amplitude of a 2D Gaussian density distribution without and with a 1% threshold filter. The standard cut-off filters with 16 × 16 and 32 × 32 modes are also indicated in the above plot. Most high-frequency noise is removed in this distribution by using the threshold filtering method. As a test of the threshold filtering method, we reran the above space-charge long-term simulation using threshold values of 0 (no filtering), 0.005, 0.1 and 0.05 for filtering the charge density distribution during the simulation, with 25,000 macroparticles. Here, the larger the threshold value, the fewer modes are included in the simulation. It is seen that without numerical threshold filtering, there is significant emittance growth after 40,000 turns. With 0.05 threshold filtering, there is little emittance growth, which is consistent with the expected physical emittance growth as seen by using 1.6 million macroparticles without filtering. In order to improve the computational speed, we explored a frozen space-charge model during the simulation. Here, instead of self-consistently updating the space-charge Poisson solver every time step, after some initial time steps we store the solutions of the space-charge fields along the lattice and reuse those stored space-charge fields for the following long-term simulation. This model assumes that after some steps, the charge density distribution of the beam attains a stable solution and will not vary significantly from turn to turn. Figure 5 (4D emittance growth evolution with the self-consistent simulation in red and the frozen space-charge model in green) shows the total 4D emittance growth evolution for the above example by using the self-consistent tracking and by using the frozen space-charge model. It is seen that the emittance growth evolution from the frozen space-charge model agrees with that from the self-consistent simulation quite well. The computational speed of the frozen space-charge model is about a factor of six faster than the self-consistent simulation in this case.

Conclusion

In this study, we suggested using a symplectic space-charge PIC model with threshold filtering in the frequency domain of the charge density distribution to reduce numerical artifacts in the simulation. By appropriately choosing the threshold value, the numerical-noise-driven emittance growth can be significantly reduced in long-term simulation. In order to improve the computing speed, we explored a frozen space-charge model that stores the space-charge field solutions after some initial steps and reuses those space-charge fields in the following long-term simulation.
This method significantly reduces the computing time and yields reasonable simulation results in the above linear lattice example, where the beam charge density distribution does not vary much after 200 turns.
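As an illustration of the amplitude-threshold filtering suggested above, here is a minimal sketch; the grid size, threshold and test density are our own choices.

```python
import numpy as np

def threshold_filter(rho, threshold=0.01):
    """Zero out spectral modes whose amplitude falls below
    threshold * (maximum amplitude), keeping large-amplitude
    high-frequency modes that may carry real structure."""
    spec = np.fft.fft2(rho)
    amp = np.abs(spec)
    spec[amp < threshold * amp.max()] = 0.0
    return np.fft.ifft2(spec).real

# Test: a 2D Gaussian density plus sampling noise on a 64x64 grid.
n = 64
x = np.linspace(-3, 3, n)
X, Y = np.meshgrid(x, x)
rho = np.exp(-(X**2 + Y**2) / 2) + 0.01 * np.random.default_rng(3).normal(size=(n, n))
smooth = threshold_filter(rho, threshold=0.01)
```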
2019-04-22T13:12:40.461Z
2018-09-01T00:00:00.000
{ "year": 2018, "sha1": "00d34b07d8582c96a1a861d3a5d65a782f8faf8b", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1067/6/062026", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "8473fc480961d4edc24bd8b56b616032305731c4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
260541609
pes2o/s2orc
v3-fos-license
Pseudoprogression in the era of immunotherapy-based strategies for recurrent head and neck squamous cell carcinoma achieving complete response: A case report

Rationale: In the last few years, treatment of head and neck squamous cell carcinoma (HNSCC) has been enhanced by the emergence of immunotherapy. A biological phenomenon unique to immunotherapy is pseudoprogression, an increase in tumor burden or the appearance of a new lesion subsequently followed by tumor regression. Patient concerns: A 78-year-old man complaining of a lump (6 × 4 cm) gradually swelling on the right side of his neck, with recurrent buccal mucosa squamous cell carcinoma, presented to our institution. Two months prior, he had received resection of the buccal lesion but refused the suggested adjuvant chemoradiotherapy after the operation. Diagnoses: Recurrent buccal mucosa squamous cell carcinoma. Interventions: Induction immunotherapy was initiated, followed by a new node appearing on the surface of the neck mass. We considered the presence of pseudoprogression and continued with immunotherapy. The patient received immunotherapy combined with chemotherapy and intensity-modulated radiation therapy (IMRT) consecutively. Outcomes: The patient experienced an excellent recovery with the disappearance of pain and the lump, along with the return of a healthy appetite, weight gain and a positive outlook. Complete response (CR) was also noted by magnetic resonance imaging (MRI) scan, with the upper right neck mass having regressed to the point of being barely visible. The patient is still alive with stable, asymptomatic disease at the time of this writing. Lessons: These results provide confidence in the safety and efficacy of radical chemo-radio-immunotherapy for the treatment of recurrent, unresectable or metastatic HNSCC.

Introduction

HNSCC is the sixth most common cancer in the world, with over 500,000 new cases occurring annually. [1] Particularly for patients with recurrent/metastatic (R/M) HNSCC, the median survival is 6 to 12 months due to limited therapeutic options. [2] In recent years, with the appearance of new strategies, immunotherapy has made a breakthrough in the treatment of HNSCC. Anti-programmed death 1 (PD-1) monoclonal antibodies combined with chemotherapy have demonstrated better overall survival in R/M HNSCC (13.0 months vs 10.7 months). [3] Encouragingly, NCCN guidelines (version 3.2021) list pembrolizumab as the first-line preferred regimen for recurrent, unresectable or metastatic HNSCC. One rare but significant phenomenon associated with immunotherapy is pseudoprogression, which refers to an increase in tumor burden or the appearance of a new lesion followed by tumor regression. [4] Here, we report a case of complete response (CR) in a patient with recurrent HNSCC in which pseudoprogression was observed during immunotherapy-based treatment.

Case description

A 78-year-old man complaining of a lump (6 × 4 cm) gradually swelling on the right side of his neck, with recurrent buccal cancer, presented to our institution on January 28, 2021. Four months prior, a lump (2 cm in diameter) had been observed on the right cheek but was not considered serious until the mass continued growing after 2 months. The neck mass was biopsied and demonstrated squamous cell carcinoma. He received resection of the buccal lesion and palatal flap repair on November 9, 2020. The operation was carried out successfully with negative surgical margins and the patient recovered well.
The postoperative pathological report revealed a diagnosis of moderately differentiated keratinizing squamous cell carcinoma (Fig. 1). The patient refused the suggested adjuvant chemoradiotherapy after the operation. He was a nonsmoker, did not drink alcohol and had no family history of cancer. On admission to our institution, he complained of pain, fatigue, inappetence and weight loss and was administered tramadol (50 mg, bid) to relieve the pain. Magnetic resonance imaging (MRI) on January 29, 2021 revealed a mass in the right neck and edema of the soft tissue around the right parotid gland (Fig. 2A). A neck mass biopsy revealed squamous cell carcinoma, and the disease was classified as buccal mucosa squamous cell carcinoma cT4N1M0, stage IV, according to the American Joint Committee on Cancer Staging Manual, 8th edition. RGFR amplification (copy number, 6) was detected by next-generation sequencing, with no other mutations such as copy number variations in NTRK, ALK, ROS1, and MET. The multidisciplinary team (Head and Neck Surgery, Oncology, Radiotherapy, Pathology, and Imaging Departments) concluded that he was unsuitable for surgical treatment and determined a combined treatment scheme of immunotherapy, chemotherapy and radiotherapy. Considering his age, poor physique and positive immunohistochemical analysis of PD-L1 (TPS = 2%, CPS = 3) (Fig. 3), the patient underwent 1 cycle of induction immunotherapy with Tislelizumab (200 mg) on February 2, 2021. Two hours after the injection, the patient developed high fever (39.7°C; 103.5°F) and neck swelling. Paracetamol (500 mg, q12), caffeine (65 mg, q12), and cooling fluid infusion treatment were administered, and the patient's fever subsided the next day. Reexamination by MRI on February 19, 2021 showed that the metastasis of the right upper and middle neck was more significant than previously observed (Fig. 2B) and a new node had appeared on the surface of the neck mass (Fig. 4A). We considered the possibility of pseudoprogression caused by the immunotherapy and continued treatment. On March 16, 2021, MRI revealed that the upper right neck metastasis had significantly receded compared to before (Fig. 2C), and we found that the neck mass was significantly reduced, with the new node on the surface of the neck mass dissolving and forming a soft tissue sinus, with light yellow secretions flowing out of the sinus tract (Fig. 4B). Given that the patient tolerated the treatment better than previously, we gave him 2 cycles of Tislelizumab (200 mg) combined with albumin-bound paclitaxel (400 mg) starting March 16, 2021. The tumor receded and the sinus gradually closed (Fig. 4C). Reexamination by MRI on April 8, 2021 suggested that the tumor had considerably shrunk compared to the previous image (Fig. 2D). Encouraged by the curative effect, we performed radiotherapy of the neck lesion from April 14, 2021 to May 28, 2021; intensity-modulated radiation therapy plans were adopted (dose …). The patient experienced an excellent recovery with the disappearance of pain and the lump (Fig. 4D), along with the return of a healthy appetite, weight gain and a positive outlook. CR was also noted by MRI scan on May 28, 2021, with the upper right neck mass having regressed to the point of being barely visible (Fig. 2E). On June 21, 2021, he received 1 cycle of maintenance treatment with Tislelizumab (200 mg), which was well tolerated. The patient is still alive with stable, asymptomatic disease at the time of this writing (Fig. 5).
Discussion

We observed an extremely rare case of visible pseudoprogression in HNSCC under immunotherapy. In this case, an HNSCC patient without obviously strong PD-L1 expression (TPS = 2%, CPS = 3) undergoing chemo-radio-immunotherapy achieved CR both clinically and radiologically. A CR was achieved in this case with immunotherapy maintained throughout the whole course of therapy. The study of Semrau et al [5] demonstrated that a double immune checkpoint inhibitor (ICI) regimen increased the response rate to induction chemotherapy for HNSCC. Wu et al [6] reported a case confirming the safety and efficacy of combining an anti-PD-1 antibody with chemotherapy for elderly patients with recurrent HNSCC. We made a new attempt by applying the combination of Tislelizumab and paclitaxel as induction therapy and achieved sound results. The application of ICI strengthens the anti-tumor effect of radiotherapy, [7] which has been supported by many reports. [8,9] In this case, the patient responded well to radiotherapy combined with Tislelizumab and underwent maintenance immunotherapy to prevent tumor recurrence. Pseudoprogression is the radiologic appearance of an increase in tumor size or tumor burden after ICI with subsequent tumor reduction. [4] The incidence of pseudoprogression is roughly 10%, [10] and it was initially noted in anti-CTLA-4 therapy for melanoma [11] and then reported in non-small cell lung cancer, urothelial cancer and renal cancer. [12] In HNSCC, pseudoprogression has also been reported, although it is rare, [4] with an incidence of about 1.3%. [13] The potential mechanism of pseudoprogression is that immune cells flow into the tumor microenvironment due to the reactivation of the immune system. [13] Extensive hemorrhage and inflammatory exudate in the tumor tissue then lead to necrosis and/or cell death, eventually forming the appearance of significantly enlarged lesions. [14] The phenomenon of pseudoprogression has prognostic implications, benefiting patients with a reduction in tumor burden. [10] According to the time at which the tumor shrinks, pseudoprogression is categorized as early or delayed: the former is defined as a ≥25% increase in tumor burden at an imaging assessment within 12 weeks from the start of immunotherapy that is not confirmed as progressive disease at the next imaging assessment, whereas the latter is defined as a ≥25% increase in tumor burden at any imaging assessment after 12 weeks that is not confirmed as progressive disease at the next imaging assessment. [15] In this case, the lump expanded after the first cycle of induction immunotherapy with Tislelizumab. Since the patient was generally in good clinical condition, we considered the presence of pseudoprogression and continued with immunotherapy. Over time, radiographic follow-ups confirmed our judgment. According to iRECIST criteria, physicians are encouraged to adhere to immunotherapy with close imaging follow-up (no less than 4 weeks later and no longer than 8 weeks later) for patients in generally good clinical condition or with a better Karnofsky performance status score whose clinical status has not deteriorated. Several questions are raised: first, a more reliable method is urgently needed for the diagnosis of pseudoprogression. Second, what is the appropriate duration of induction and maintenance immunotherapy? Finally, is it feasible to reduce the radiation dose of radiotherapy when it is concurrent with immunotherapy?
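For clarity, the early/delayed distinction cited above [15] can be restated as a simple rule. The following is only a schematic restatement, not a validated clinical tool, and the timing numbers used in the example are approximate.

```python
def classify_pseudoprogression(weeks_since_start, burden_increase_pct,
                               not_confirmed_pd_at_next_scan):
    """Schematic restatement of the cited definitions: a >= 25% increase in
    tumor burden that is NOT confirmed as progressive disease at the next
    imaging assessment counts as pseudoprogression -- 'early' if seen within
    12 weeks of starting immunotherapy, 'delayed' otherwise."""
    if burden_increase_pct < 25 or not not_confirmed_pd_at_next_scan:
        return None   # does not meet these pseudoprogression criteria
    return "early" if weeks_since_start <= 12 else "delayed"

# The present case: enlargement roughly 2.5 weeks after the first Tislelizumab
# cycle, with regression on the next MRI (assuming the increase exceeded 25%).
print(classify_pseudoprogression(2.5, 30, True))   # -> "early"
```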
Further investigation is needed to explore the potential of immunotherapy. Conclusion A patient with recurrent, unresectable buccal cancer achieved CR when treated with chemo-radio-immunotherapy, during which visible pseudoprogression was observed. These results provide confidence in the safety and efficacy of radical chemo-radio-immunotherapy for the treatment of recurrent, unresectable or metastatic HNSCC.
2023-08-06T05:07:14.955Z
2023-08-04T00:00:00.000
{ "year": 2023, "sha1": "12d55d2175455f79046ad8fb82baeeff01f27515", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "12d55d2175455f79046ad8fb82baeeff01f27515", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257722714
pes2o/s2orc
v3-fos-license
Radiotherapy in Managing Metastatic Hepatocellular Carcinoma With Cardiac Involvement and Pulmonary Tumor Thromboemboli: A Case Report Hepatocellular carcinoma (HCC) is the most common liver cancer and presents various degrees of aggressiveness. In this case study, we report the management of an aggressive HCC in a young immigrant from a hepatitis B-endemic country who had locally advanced HCC with portal involvement at presentation. The patient was initially managed with Yttrium-90 (Y-90) instillation, then with systemic treatment when he had disease progression. Despite multiple lines of systemic treatment, the patient continued to progress and developed significant cardiac involvement and pulmonary tumor thromboemboli. His course of treatment was further complicated by hemoptysis, presumably from hemorrhagic tumor thromboemboli. The patient became ineligible for systemic treatment due to the risk of hemoptysis and was subsequently managed with a course of palliative radiotherapy. Unfortunately, the patient developed hemorrhagic shock, cardiac failure, and septic shock during radiation treatment and expired shortly afterward. In this case report, we discuss multimodal treatments, including Y-90, systemic treatment, and radiotherapy, in managing complicated and aggressive HCC. We also report risk factors, prognostic factors, the efficacy of Y-90 instillation, and the necessity of a personalized treatment approach. In conclusion, there is currently no consensus on managing patients with metastatic HCC with cardiac and pulmonary involvement. Treatment modalities are often highly personalized and require multidisciplinary discussion. Introduction Hepatocellular carcinoma (HCC) is the most common liver cancer, with rising incidence in the past decade. The incidence of HCC was 6.5 cases per 100,000, about four times the 1.5 cases per 100,000 reported in 1973 [1]. HCC can be managed with surgical resection, systemic treatment, and radiotherapy, depending on patient characteristics and stage at presentation. While it is not uncommon to have metastatic HCC to the lung, bone, and lymph nodes, aggressive HCC with cardiac involvement and pulmonary tumor thromboemboli is relatively rare [2]. Currently, there is no consensus or guideline for managing patients with advanced HCC with cardiac and pulmonary artery involvement. Treatment is highly personalized, clinically challenging, and provider-dependent. In this case report, we present a patient with aggressive metastatic HCC with cardiac involvement and pulmonary thromboemboli, managed with systemic treatments, Yttrium-90 (Y-90) instillation, and palliative radiotherapy. The Y-90 TheraSphere® is a Food and Drug Administration-approved radiopharmaceutical conjugated with resin beads. It is injected under image guidance into branches of the hepatic arteries to treat localized liver neoplasms, such as HCC and liver metastases [3]. The purpose of this case study is to report our institutional experience, review the current literature, and discuss efficacy and personalized treatments. Case Presentation The patient is a 34-year-old African male immigrant with a history of chronic hepatitis B infection who presented with a small hepatic lesion on diagnostic ultrasound while being treated for hepatitis B (tenofovir 300 mg daily). This lesion was biopsied, with pathology consistent with moderately differentiated hepatocellular carcinoma.
On presentation, the patient had mild tenderness to palpation due to hepatomegaly, without symptoms of hepatic decompensation, including jaundice, hepatic encephalopathy, esophageal varices, and ascites. Laboratory work-up revealed an elevated Alpha-Fetoprotein (AFP) of 3,679.0 ng/mL (normal <10.0 ng/mL), normal Carcinoembryonic Antigen and Carbohydrate Antigen 19-9, and unremarkable renal and liver function on the comprehensive metabolic panel. Magnetic Resonance Imaging of the abdomen with intravenous (IV) contrast showed a 4.5 cm segment II/III lesion invading the portal and left hepatic veins (Figure 1). The patient was staged as T4N0M0, stage IIIB localized HCC based on the American Joint Committee on Cancer staging, 8th edition. Multidisciplinary Tumor Board discussion recommended that the patient undergo Y-90 instillation for localized HCC. Interventional radiology successfully performed a mapping angiogram prior to proceeding with Y-90 instillation, which demonstrated extensive enhancement within the left hepatic lobe and tumor thrombus enhancement within the portal vein, without significant pulmonary shunting (defined as >20%). Shortly afterwards, the patient safely received TheraSphere® Y-90 instillation to the left hepatic lobe according to the mapping angiogram. Based on the volume and location of the tumor, a total of 240 Gray (Gy) to a volume of 550 mL was planned for delivery. The patient had a 5.1% pulmonary shunt and an estimated 1% residual activity at the end of the procedure. He tolerated the procedure well and was discharged home. During his 2-week post-procedure follow-up, the patient did not have significant acute toxicities and had improved symptomatically. The AFP was further elevated at 16,109 ng/mL, with mild transaminitis: Aspartate Aminotransferase 126 U/L (normal 12-40 U/L), Alanine Transaminase 66 U/L (normal 11-41 U/L), and elevation of Alkaline Phosphatase to 126 U/L (normal 40-115 U/L). Total bilirubin was 0.3 mg/dL, within normal limits (0.2-1.1 mg/dL). Renal function showed normal serum creatinine and estimated glomerular filtration rate. The patient underwent Computed Tomography (CT) scans per National Comprehensive Cancer Network guidelines to assess treatment response and to continue disease surveillance at the 3-month follow-up. At that time, the patient presented with worsening tenderness to palpation of the right upper quadrant and middle abdomen secondary to hepatomegaly. CT of the abdomen and pelvis with IV contrast revealed a heterogeneously enhancing, partially exophytic hepatic lesion involving segments II, III, and IV, complete occlusion of the left portal vein, and a large left hepatic thrombus extending into the inferior vena cava and right atrium, suggesting significant progression of disease (Figures 2a-2d). CT of the chest with IV contrast revealed small bilateral indeterminate lung nodules without definitive evidence of distant metastasis. His AFP level, though persistently elevated, declined to 3,160 ng/mL, with stable hepatic and renal function. The patient remained functionally well, with an Eastern Cooperative Oncology Group score of 0 and Child-Pugh class A. After an extensive discussion with the patient and his family, it was decided that the patient should proceed with systemic therapy, with the objective of achieving a treatment response that might make him a surgical candidate, although this was unlikely due to the extent and aggressiveness of the disease. He then received atezolizumab (1,200 mg) and bevacizumab (15 mg/kg) every 21 days per the IMbrave150 clinical trial [4].
The patient then started to experience multiple symptoms, including fatigue, weakness, lower extremity pitting edema, and shortness of breath. After receiving two cycles of atezolizumab and bevacizumab, the patient developed hemoptysis and presented to the emergency room (ER) for evaluation, during which he was found to have numerous pulmonary metastases, a right-to-left intrapulmonary shunt, and pulmonary thromboemboli on a CT Angiogram with Pulmonary Emboli protocol (CT-PE) (Figures 3a-3b). His AFP at the time, though, had further declined to 2,666 ng/mL. The patient was admitted to the medical intensive care unit for hemoptysis. Bronchoscopy showed a dilated, ectatic vessel at the carina between the left upper lobe and left lower lobe, suggesting main bronchus invasion from pulmonary tumor thromboemboli (Figures 4a-4c). The lesion was cauterized with argon ablation, subsequently achieving satisfactory hemostasis. The patient was discharged home afterwards. During his post-hospitalization visit, atezolizumab and bevacizumab were discontinued due to hemoptysis, with concern for an increased risk of hemorrhage. In addition, atezolizumab alone is not known to be better than sorafenib alone. The patient then switched to lenvatinib 12 mg orally daily to continue his systemic treatment. His AFP level stabilized and continued to decline while he received systemic treatments. The patient's condition improved clinically, and he was able to return to work briefly. Unfortunately, after three months of systemic treatment, the patient again developed massive hemoptysis and acute respiratory failure, requiring intubation. Repeat CT-PE revealed bilateral pulmonary thromboemboli, portal vein and suprahepatic inferior vena cava involvement, and extensive pulmonary metastasis. The patient received serial bronchoscopies while intubated, which did not identify an overt source of bleeding, though active bleeding was observed distal to the previously identified and cauterized lesion in the left main bronchus, beyond the visualization capacity of bronchoscopy. His bleeding subsequently subsided, and he was extubated successfully. The patient was eventually discharged home but returned at short intervals due to ongoing hemoptysis. His clinical presentation and imaging findings suggested that the pulmonary tumor thromboemboli had eroded into the left bronchus, causing persistent hemoptysis. After extensive discussion with the patient and his family, a course of palliative radiation to the left main bronchial segment and the adjacent pulmonary artery with tumor thromboemboli was recommended. The goal of radiation was to temporize the bleeding for symptom control and, potentially, to allow systemic therapy to restart after achieving hemostasis. The patient was simulated in a supine position with permanent ink markings for alignment purposes. The planned radiation dose was 36 Gy in 24 fractions, 1.5 Gy per fraction, twice a day. Radiation was delivered with an anterior-posterior/posterior-anterior (AP/PA) 3-dimensional conformal technique using 6 MV photon beams. Initially, the patient responded to radiation treatment, and his hemoptysis improved during the first week of treatment. However, the patient developed significant abdominal and lower extremity swelling while on treatment, evidence of heart failure due to fluid retention. He was medically managed with diuretics. During the second week of radiation treatment, the patient presented with massive hemoptysis, hypotension, and tachycardia during an on-treatment visit.
The patient was emergently transferred to the ER for hemorrhagic shock secondary to massive hemoptysis. He received multiple blood transfusions, and his course of hospitalization was further complicated by hepatorenal syndrome, bacterial peritonitis, and sepsis. The patient and his family decided to pursue comfort care measures after a goals-of-care discussion. The patient expired shortly afterwards, 10 months after his initial diagnosis. Discussion Hepatocellular carcinoma is the most common primary liver cancer. Risk factors for HCC include smoking, alcohol use, non-alcoholic steatohepatitis, and viral hepatitis infections [5]. In the United States, the incidence of liver and intrahepatic bile duct cancers is estimated to reach 41,260 cases and 30,520 deaths in 2022, with greater than 75% attributed to HCC [6]. Among HCC patients, it has been reported that about 15% are found to have extrahepatic metastasis at initial presentation [2]. The treatment of HCC is multimodal and includes systemic therapy, surgical resection, and radiotherapy. The main risk factor contributing to the development of HCC in our patient is chronic hepatitis B infection. This is prevalent in African countries, where hepatitis B vaccination and treatment are not readily available to citizens. According to the World Health Organization, the prevalence of hepatitis B infection was 6.1% in Africa, which is higher than the 0.7% in North and South America combined [7]. In addition, the prevalence of HCC was 8% in African countries, which is also higher than the 5% in North America [8]. These developing countries have limited resources, which makes prevention, diagnosis, and treatment of HCC more challenging. HCC can present as an extremely aggressive disease, as detailed in this case report. Increasing tumor diameter is an important, but not the only, prognostic factor for tumor aggressiveness. The AFP increase and the percentage of portal vein thrombosis can change exponentially, rather than linearly, as tumor diameter increases [9]. This implies that the biological characteristics may have changed with ultra-high serum AFP and evidence of portal vein thrombosis, predicting a much more aggressive course [9]. Our patient had significantly elevated serum AFP and imaging evidence of portal vein thrombosis at presentation; both were poor prognostic factors. The patient presented with localized HCC and progressed to stage IV metastatic HCC with cardiac and pulmonary involvement in six months, then deteriorated quickly and expired in the following six months after multiple lines of treatment. Our patient presented with significantly more aggressive disease compared to the data from the LEGACY study, which reported median progression-free survival and overall survival of 40.7 and 57.9 months, respectively, for HCC < 8 cm undergoing Y-90 instillation [10]. Our patient was treated with both systemic therapy and radiotherapy, including the innovative Y-90 intravascular TheraSphere® injection. In the PREMIERE trial, a landmark phase 2 study that randomized 45 patients with Barcelona Clinic Liver Cancer early- and intermediate-stage HCC to undergo conventional trans-arterial chemoembolization (cTACE) or Y-90 TheraSphere, Y-90 TheraSphere was shown to significantly improve median time to progression compared to cTACE (>26 months vs 6.8 months, p = 0.0012) [11].
The Y-90 has also been found to have the highest rate of pathologic complete response compared to other locoregional therapies, including radiofrequency ablation, cTACE, and stereotactic body radiotherapy, in the setting of bridging HCC patients to transplant [12]. Several clinical trials have assessed the role of Y-90 in combination with systemic treatment, particularly sorafenib, for advanced HCC. The addition of sorafenib to Y-90 was noted to result in increased gastrointestinal and dermatologic toxicities [13]. With regard to patients with liver metastases, multiple studies, especially among metastatic colorectal cancer patients, suggest benefits in local control when Y-90 TheraSphere® is used alone or in combination with chemotherapy [14]. The use of Y-90 in advanced HCC is recommended based on multidisciplinary tumor board discussion of each individual case. Our patient had a centralized lesion abutting the portal vein, which made surgical resection challenging. The Y-90 technique was deemed particularly useful given the presence of a centrally located lesion abutting critical vascular structures in a poor surgical candidate [15]. The most common extrahepatic sites of metastasis for HCC include the lungs (47%), lymph nodes (45%), and bones (37%) [2]. Metastatic HCC can present with cardiac or pulmonary artery involvement, but this is relatively uncommon in the literature. There are several case reports of HCC with cardiac involvement. Patients often presented with cardiac symptoms, including dyspnea on exertion, lower extremity edema, and orthopnea [16-18]. Imaging workup showed a mass in either the atrium or ventricle, often right-sided, and laboratory markers showed elevated liver enzymes and an abnormal coagulation profile [16-18]. A clinicopathologic study revealed that 18 out of 439 autopsied HCC cases had cardiac involvement [19]. More recently, it has been estimated that 5-10% of HCC patients may develop cardiac involvement [20]. Despite the uncommon presentation, the prevalence of metastasis to the pulmonary artery and cardiac involvement could be higher than expected. Currently, there is no consensus or study on managing patients with metastatic pulmonary tumor thromboemboli and cardiac involvement. The role of radiotherapy in managing tumor thromboemboli is unclear and has not been established as the standard of care. In this case report, our patient received palliative external beam radiation to the left main bronchus and adjacent pulmonary artery to manage tumor thromboemboli. The radiation regimen was also tailored to his specific clinical scenario, instead of a standard and commonly utilized radiation dose such as 30 Gy in 10 fractions or 37.5 Gy in 15 fractions. The 36 Gy in 24 fractions radiation dose was used to balance the need for hemostasis and tumor regression; it was thought that a drastic decrease in the size of the tumor in the pulmonary artery might worsen hemoptysis and lead to hemorrhage. The determined treatment was highly personalized and clinician-dependent in this case scenario. There is a need to establish consensus for HCC with cardiac metastasis and tumor thromboembolism to guide future practice. Conclusions Hepatocellular carcinoma with cardiac involvement and pulmonary thromboemboli is a rare and aggressive presentation and poses a significant challenge in the clinical management of patients. Current treatment options include surgical resection, liver transplant, systemic treatments, external beam radiation, and Y-90 instillation.
Management of such an aggressive disease often requires multidisciplinary discussion. In our experience, we offered multimodal management with a personalized approach in this scenario, considering the patient's functional status and goals of care. Future studies are warranted to establish guidelines and consensus for managing these patients. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2023-03-25T15:22:07.595Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "aae456fd52e8b99c9bcb74fbd7b5d764d8107e71", "oa_license": "CCBY", "oa_url": "https://assets.cureus.com/uploads/case_report/pdf/142441/20230323-32762-1slp7iu.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ad20ffd7719590a52425ac4fcb8d2793f6491ad7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
11001391
pes2o/s2orc
v3-fos-license
Rotating Concentric Circular Peakons We study invariant manifolds of measure-valued solutions of the partial differential equation for geodesic flow of a pressureless fluid. These solutions describe interaction dynamics on lower-dimensional support sets; for example, curves, or filaments, of momentum in the plane. The 2+1 solutions we study are planar generalizations of the 1+1 peakon solutions of Camassa & Holm [1993] for shallow water solitons. As an example, we study the canonical Hamiltonian interaction dynamics of $N$ rotating concentric circles of peakons, whose solution manifold is $2N$-dimensional. Thus, the problem is reduced from infinite dimensions to a finite-dimensional, canonical, invariant manifold. The existence of this reduced solution manifold and many of its properties may be understood by noticing that it is also the momentum map for the action of diffeomorphisms on the space of curves in the plane. We show both analytical and numerical results.

Introduction and overview

Geodesic flow in n dimensions

As first shown in Arnold [1966] [1], Euler's equations of ideal fluid dynamics represent geodesic motion on the volume-preserving diffeomorphisms with respect to the $L^2$ norm of the velocity. More generally, a time-dependent smooth map $g(t)$ is a geodesic on the diffeomorphisms with respect to a kinetic energy norm $KE = \frac{1}{2}\|u\|^2$, provided its velocity, the right-invariant tangent vector $u = \dot{g}\,g^{-1}(t)$, satisfies the vector Euler-Poincaré equation

$$\frac{dm}{dt} + \operatorname{ad}^*_u m = 0 . \qquad (1)$$

Here $\operatorname{ad}^*$ is the adjoint, with respect to the $L^2$ pairing $\langle\,\cdot\,,\,\cdot\,\rangle : \mathfrak{g}^* \times \mathfrak{g} \to \mathbb{R}$, of the ad-action (commutator) of vector fields $u, w \in \mathfrak{g}$. That is,

$$\langle \operatorname{ad}^*_u m , w \rangle = \langle m , \operatorname{ad}_u w \rangle = \langle m , [u, w] \rangle . \qquad (2)$$

The momentum vector $m \in \mathfrak{g}^*$ is defined as the variational derivative of kinetic energy with respect to velocity,

$$m = \frac{\delta (KE)}{\delta u} . \qquad (3)$$

This defining relation for momentum closes the Euler-Poincaré equation (1) for geodesic motion with respect to the kinetic energy metric $KE = \frac{1}{2}\|u\|^2$. For more details, extensions and applications of the Euler-Poincaré equation to both compressible and incompressible fluid and plasma dynamics, see Holm, Marsden and Ratiu [1998] [12].

Geodesic flow with H^1 velocities in two dimensions

In this paper, we consider the solution behavior of the Euler-Poincaré equation (1) when the momentum vector is related to the velocity by the two-dimensional Helmholtz operation

$$m = u - \Delta u , \qquad (4)$$

where $\Delta$ denotes the Laplacian operator. This Helmholtz relation arises when the kinetic energy is given by the $H^1$ norm of the velocity,

$$KE = \frac{1}{2}\|u\|^2_{H^1} = \frac{1}{2}\int \big( |u|^2 + |\nabla u|^2 \big)\, dx\, dy . \qquad (5)$$

The $H^1$ kinetic energy norm (5) is an approximation of the Lagrangian in Hamilton's principle for columnar motion of shallow water over a flat bottom, when potential energy is negligible (the zero linear dispersion limit) and the kinetic energy of vertical motion is approximated by the second term in the integral [3]. In this approximation, the physical meaning of the quantity $m = \delta(KE)/\delta u$ in the Helmholtz relation (4) is the momentum of the shallow water flow, while $u$ is its velocity in two dimensions. See Kruse et al. [2001] [4] for details of the derivation of the geodesic equation for approximating 2D shallow water dynamics in this limit.

Problem statement: Geodesic flow on H^1 in cylindrical symmetry

The present work studies azimuthally symmetric solutions of the Euler-Poincaré equation (1) in polar coordinates $(r, \phi)$,

$$m = m_r(r,t)\,\hat{r} + m_\phi(r,t)\,\hat{\phi} \quad\text{and}\quad u = u_r(r,t)\,\hat{r} + u_\phi(r,t)\,\hat{\phi} . \qquad (6)$$

In the standard basis for cylindrical coordinates, momentum one-forms and velocity vector fields are expressed as

$$m \cdot dx = m_r\, dr + r\, m_\phi\, d\phi \quad\text{and}\quad u \cdot \nabla = u_r\, \partial_r + \frac{u_\phi}{r}\, \partial_\phi . \qquad (7)$$
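For readers who prefer coordinates, it is worth recording that for vector fields the abstract operation $\operatorname{ad}^*_u m$ in (1) has a standard coordinate expression, so that (1) reads

$$\frac{\partial m}{\partial t} + u \cdot \nabla m + (\nabla u)^{T} \cdot m + m\, (\operatorname{div} u) = 0 .$$

This identity is standard (see, e.g., Holm, Marsden and Ratiu [1998] [12]); we record it here only for convenience, since the cylindrically symmetric equations (8) and (9) below are its radial and azimuthal components.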
Solutions (6) satisfy coupled partial differential equations, (8) and (9), whose radial and azimuthal components are coupled as follows: nonzero rotation $u_\phi$ generates radial velocity $u_r$, which in turn influences the azimuthal motion. Without rotation, $u_\phi = 0$, and the solution becomes purely radial. The system of equations (8) and (9) for geodesic motion conserves the $H^1$ kinetic energy norm (10), the cylindrically symmetric form of (5). The corresponding momenta satisfy the dual relations (11). The Helmholtz relation (4) between $m_\phi$ and $u_\phi$, for example, becomes

$$m_\phi = u_\phi - \frac{1}{r}\frac{\partial}{\partial r}\Big( r\,\frac{\partial u_\phi}{\partial r} \Big) + \frac{u_\phi}{r^2} . \qquad (12)$$

This relation between velocity and momentum defines the symmetric, invertible Helmholtz operator in cylindrical geometry with finite boundary conditions at $r = 0$ and as $r \to \infty$. The velocity $u(r,t)$ is obtained from the momentum $m(r,t)$ by the convolution $u(r) = G * m = \int_0^\infty G(r,\xi)\, m(\xi)\, \xi\, d\xi$ (an extra factor $\xi$ arises in cylindrical geometry) with the Green's function $G(r,\xi)$. The Green's function for the radial Helmholtz operator in (12) is given by

$$G(r,\xi) = I_1(r_<)\, K_1(r_>) , \qquad r_< = \min(r,\xi), \quad r_> = \max(r,\xi), \qquad (13)$$

where $I_1$ and $K_1$ are modified Bessel functions. This Green's function will play a significant role in what follows. Equations (8) and (9) may be written in Lie-Poisson Hamiltonian form (14), in which $\delta h/\delta m_r = u_r$ and $\delta h/\delta(r m_\phi) = u_\phi/r$, and the Hamiltonian operator $D$ in (15) is the matrix of brackets $D = \{(m_r, r m_\phi), (m_r, r m_\phi)\}$, which defines the Lie-Poisson bracket for geodesic motion in cylindrical geometry. Note that $D$ is skew-symmetric with respect to the $L^2$ pairing with cylindrical radial measure.

2 Measure-valued momentum maps and solutions of geodesic flow in n dimensions

2.1 Measure-valued solution ansatz

Based on the peakon solutions for the Camassa-Holm equation [2] and its generalizations to include the other traveling-wave pulson shapes [8], Holm & Staley [9] introduced the following measure-valued ansatz for the solutions of the vector EP equation (1):

$$m(x,t) = \sum_{a=1}^{N} \int P^a(s,t)\, \delta\big( x - Q^a(s,t) \big)\, ds , \qquad x \in \mathbb{R}^n,\ s \in \mathbb{R}^k, \qquad (16)$$

where the dimensions satisfy $k < n$. The fluid velocity corresponding to the momentum solution ansatz (16) is given by

$$u(x,t) = \sum_{b=1}^{N} \int P^b(s',t)\, G\big( x, Q^b(s',t) \big)\, ds' , \qquad (17)$$

where $G(x,y)$ is the Green's function for the Helmholtz operator in $n$ dimensions. These solutions are vector-valued functions whose momenta are supported in $\mathbb{R}^n$ on a set of $N$ surfaces (or curves) of codimension $(n-k)$ for $s \in \mathbb{R}^k$ with $k < n$. In three dimensions, for example, they may be supported on sets of points (vector peakons, $k = 0$), quasi one-dimensional filaments (strings, $k = 1$), or quasi two-dimensional surfaces (sheets, $k = 2$). Substitution of the solution ansatz (16) into the EP equation (1) implies a set of integro-partial-differential equations (IPDEs), labeled (18), for the evolution of such strings or sheets. Importantly for the interpretation of these solutions given later in Holm and Marsden [2003] [11], the independent variables $s \in \mathbb{R}^k$ turn out to be Lagrangian coordinates. When evaluated along the curve $x = Q^a(s,t)$, the fluid velocity (17) satisfies

$$u\big( Q^a(s,t), t \big) = \frac{\partial Q^a(s,t)}{\partial t} . \qquad (19)$$

Consequently, the lower-dimensional support sets (defined on $x = Q^a(s,t)$ and parameterized by coordinates $s \in \mathbb{R}^k$) move with the fluid velocity. Moreover, equations (18) for the evolution of these support sets are canonical Hamiltonian equations,

$$\frac{\partial Q^a}{\partial t} = \frac{\delta H_N}{\delta P^a} , \qquad \frac{\partial P^a}{\partial t} = -\frac{\delta H_N}{\delta Q^a} . \qquad (20)$$

The corresponding Hamiltonian function $H_N : (\mathbb{R}^n \times \mathbb{R}^n)^{\otimes N} \to \mathbb{R}$ is

$$H_N = \frac{1}{2} \sum_{a=1}^{N}\sum_{b=1}^{N} \iint P^a(s,t) \cdot P^b(s',t)\; G\big( Q^a(s,t), Q^b(s',t) \big)\, ds\, ds' . \qquad (21)$$

This is the Hamiltonian for canonical geodesic motion on the cotangent bundle of a set of $N$ curves $Q^a(s,t)$, $a = 1, \ldots, N$, with respect to the metric given by $G$ [10]. We refer to that paper for more details of the solution dynamics. The non-locality of the dynamics of the measure-valued solutions, described in the literature, makes analytical progress difficult.
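To make the cylindrical Green's function (13) concrete, the velocity field induced by momentum supported on a few circles can be evaluated directly with standard Bessel routines. The following is a minimal sketch of this reconstruction (our own illustration, not code from the paper), using the radial part of (17) restricted to circular support:

```python
# Sketch: radial velocity induced by momentum supported on N concentric
# circles, u_r(r) = sum_i p_i G(r, q_i), with the cylindrical Green's
# function G(r, xi) = I1(min(r, xi)) K1(max(r, xi)) of Eq. (13).
import numpy as np
from scipy.special import i1, k1

def green(r, xi):
    r_lt, r_gt = np.minimum(r, xi), np.maximum(r, xi)
    return i1(r_lt) * k1(r_gt)

def radial_velocity(r, q, p):
    """Velocity reconstruction for circular peakons of radii q, strengths p."""
    r = np.atleast_1d(np.asarray(r, dtype=float))
    return sum(p_i * green(r, q_i) for q_i, p_i in zip(q, p))

r = np.linspace(0.01, 10.0, 500)
u = radial_velocity(r, q=[2.0, 5.0], p=[1.0, -0.5])  # two circular peakons
print(float(u.max()))
```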
In this paper, we derive measure-valued solutions that are rotationally symmetric. For this circular symmetry, the non-locality integrates out and the motion reduces to a set of ordinary differential equations. Most of the paper is devoted to these circular solutions, which we call rotating peakons. The set of solutions obeying translational symmetry, but having two velocity components, is studied in the appendix. For a solution of N planar peakons, there are 2N degrees of freedom, with 2N positions and 2N canonically conjugate momenta. Hence, the evolution of the N planar peakon solution is governed by a set of 4N nonlocal partial differential equations. These reduce to ordinary differential equations in the presence of either rotational or translational symmetry. We also demonstrate that these solutions emerge from any initial condition with this symmetry in the plane.

Potential applications of measure-valued solutions of geodesic flow

One of the potential applications of the two-dimensional version of this problem involves internal waves on the interface between two layers of different density in the ocean. Fig. 1 shows a striking agreement between two internal wave trains propagating at the interface of different density levels in the South China Sea and the solution appearing in the simulations of the EP equation (1) in two dimensions. Inspired by this figure, we shall construct a theory of propagating one-dimensional momentum filaments in two dimensions. For other work on the 2D CH equation in the context of shallow water waves, see Kruse, et al. (2001) [4]. Another potential application of the two-dimensional version of this problem occurs in image processing for computational anatomy, e.g., brain mapping from PET scans. For this application, one envisions the geodesic motion as an optimization problem whose solution maps one measured two-dimensional PET scan to another, by interpolation in three dimensions along a geodesic path between them in the space of diffeomorphisms. In this situation, the measure-valued solutions of geodesic flow studied here correspond to "cartoon" outlines of PET scan images. The geodesic "evolution" in the space between them provides a three-dimensional image that is optimal for the chosen norm. For a review of this imaging approach, which is called "template matching" in computational anatomy, see Miller and Younes [2002] [13].

Peakon Momentum Map J : T*S → g* in n dimensions

Holm and Marsden [2003] [11] explained an important component of the general theory underlying the remarkable reduced solutions of the vector EP equation (1). In particular, Holm and Marsden [2003] [11] showed that the solution ansatz (16) for the momentum vector in the EP equation (1) defines a momentum map J : T*S → g*, and that this map is Poisson, provided it is coadjoint equivariant. (In particular, J maps the canonical Poisson bracket on the image space T*S into the Lie-Poisson bracket on the target space g*.) In symbols, this is

$$\{ F \circ J , H \circ J \}_{\mathrm{can}} = \{ F , H \}_{LP} \circ J .$$

The n-dimensional peakon momentum solution ansatz J (for any Hamiltonian) is given by Holm and Staley [2003] [10] as the superposition formula in (16), regarded as the map

$$J(Q, P) = \sum_{a=1}^{N} \int P^a(s,t)\, \delta\big( x - Q^a(s,t) \big)\, ds . \qquad (22)$$

By direct substitution using the canonical Q, P Poisson brackets, one computes the Poisson property of the map J in n Cartesian dimensions, in the sense of distributions integrated against a pair of smooth functions of x and y. The resulting expression defines the Lie-Poisson bracket {·, ·}_{LP}(m) on the dual Lie algebra g*, restricted to momentum filaments supported on the N curves x = Q^a(s,t), where a = 1, 2, ..., N.
Its calculation demonstrates the following: the solution ansatz (16), that is, the map J in (22) for the EP equation (1), is a momentum map. The Poisson property of the momentum map J in (22) is, of course, independent of the choice of Hamiltonian. This independence explains, for example, why the map extends from peakons of a particular shape to the pulsons of any shape studied in Fringer and Holm [2001] [8]. The solution ansatz (16), now rewritten as the momentum map J in (22), is also a Lagrange-to-Euler map, because the momentum is supported on filaments that move with the fluid velocity. Hence, the motion governed by the vector EP equation (1) occurs by the action of the diffeomorphisms in G on the support set of the fluid momentum, whose position and canonical momentum are defined on the cotangent bundle T*S of the space of curves S. This observation informs the study of geodesic motion governed by equation (1). For complete details and definitions, see Holm and Marsden [2003] [11].

Peakon momentum map J in cylindrical symmetry

The goal of the present work is to characterize the measure-valued solutions of the vector EP equation (1) by using the momentum map J in (22). This solution ansatz is also a momentum map, as shown in Holm and Marsden [2003] [11]. On a Riemannian manifold, the corresponding Lie-Poisson bracket for the momentum on its support set acquires the metric volume factor $\sqrt{\det g}$. For example, in cylindrical symmetry one has $\sqrt{\det g} = r$, and the vector m depends only on the radial coordinate r. For solutions with these symmetries, the Lagrangian label coordinate s is unnecessary, as we shall see in the cylindrical case, and the equations for Q^a and P^a will reduce to ordinary differential equations in time. We shall consider the dynamics of circles of peakons, whose motion may have both radial and azimuthal components. These are rotating circular peakons. Suppose one were to mark a Lagrangian point on the a-th circle, a = 1, ..., N. Then the change in its azimuthal angle φ_a(t) could be measured as it moved with the azimuthal fluid velocity u_φ along the a-th circle as its radius r = q_a(t) evolved. Translations in the Lagrangian azimuthal coordinate would shift the mark, but this shift of a Lagrangian label would have no effect on the Eulerian velocity dynamics of the system. Such a Lagrangian relabeling would be a symmetry for any Hamiltonian depending only on the Eulerian velocity. Thus, the azimuthal relabeling would result in the conservation of its canonically conjugate angular momentum M_a, which generates the rotation corresponding to the relabeling symmetry of the a-th circle. The a-th circle would be characterized in phase space by its radius r = q_a(t) and its canonically conjugate radial momentum, denoted as p_a. The rotational degree of freedom of the a-th circle would be represented by its conserved angular momentum M_a and its ignorable canonical azimuthal angle φ_a. The only nonzero canonical Poisson brackets among these variables are

$$\{ q_a , p_b \} = \delta_{ab} , \qquad \{ \phi_a , M_b \} = \delta_{ab} . \qquad (26)$$

Momentum map for rotating circular peakons

In terms of their 4N canonical phase-space variables (q_a, p_a, φ_a, M_a), with a = 1, 2, ..., N, the superposition formula (22) for N rotating circular peakons may be expressed as

$$m(r,t) = \sum_{a=1}^{N} \Big( p_a(t)\,\hat{r} + \frac{M_a}{q_a(t)}\,\hat{\phi} \Big)\, \frac{\delta\big( r - q_a(t) \big)}{r} . \qquad (27)$$

We shall first verify that this formula is a momentum map, and then in section 4 we shall derive it, by requiring it to be a valid solution ansatz for the geodesic EP equation (1) in polar coordinates.
As a consequence, the motion governed by the system of partial differential equations (8) and (9) for geodesic motion in the plane with azimuthal symmetry has a finite-dimensional invariant manifold in the 2N-dimensional canonical phase space (q_a, p_a) for each choice of the N angular momentum values M_a, with a = 1, 2, ..., N. Later, we shall also examine numerical studies of these solutions when the kinetic energy is chosen to be the H^1 norm of the azimuthally symmetric fluid velocity. By direct substitution using the canonical Poisson brackets in (26), one computes the Poisson property of the map J in (27). The resulting equalities, collected in (28), are written in the sense of distributions integrated against a pair of smooth functions of r and r'. They demonstrate the Poisson property of the map J in (27), which is also the solution ansatz for the rotating circular peakons. They also express the Lie-Poisson bracket {·, ·}_{LP}(m_r, r m_φ) for momentum filaments defined on the dual Lie algebra g* and restricted to the support set of these solutions. Hence, we have demonstrated the following:

Proposition 3.1 The map J in (27) is a momentum map.

On comparing the formulas in (28) with the Hamiltonian operator D for the continuous solutions in (15), one sees that the momentum map (27) essentially restricts the Lie-Poisson bracket with Hamiltonian operator D to its support set. Next, we shall derive the momentum map (27) by requiring it to be a valid solution ansatz for the geodesic EP equation (1) in polar coordinates.

Azimuthally symmetric peakons

4.1 Derivation of equations

We seek azimuthally symmetric solutions of the geodesic EP equation (1) in polar coordinates (r, φ), for which the momentum and velocity take the azimuthally symmetric form (6), here labeled (29). We shall derive the momentum map (27) and the canonical Hamiltonian equations for its parameters (q_a, p_a, M_a) by assuming solutions in the form (30), namely the ansatz (27) with time-dependent parameters q_i(t), p_i(t), M_i(t). These solutions represent concentric cylindrical momentum filaments which are rotating around the origin. The corresponding velocity components are obtained from the convolution u = G * m, where G(r, r') = G(r', r) is the (symmetric) Green's function for the radial Helmholtz operator given in formula (13). Hence, the fluid velocity corresponding to the solution ansatz (30) assumes the form

$$u(r,t) = \sum_{j=1}^{N} \Big( p_j(t)\,\hat{r} + \frac{M_j(t)}{q_j(t)}\,\hat{\phi} \Big)\, G\big( r, q_j(t) \big) , \qquad (32)$$

with Green's function G(r, q_j(t)) as in formula (13). In addition, the kinetic energy of the system is given by the restriction (33) of the H^1 norm (10) to the ansatz (30). Substitution of the solution ansatz (30) for the momentum and its corresponding velocity (32) into the radial equation (8) gives a system of evolution equations. Multiplying this system by the smooth test function rψ(r) and integrating with respect to r yields dynamical equations for p_i and q_i. In particular, the ψ(q_i) terms yield the evolution equation (34) for $\dot{p}_i$ and, after integrating by parts, the ψ'(q_i) terms yield the evolution equation (35) for $\dot{q}_i$. By equation (32) we see that

$$\dot{q}_i(t) = \hat{r} \cdot u(q_i, t) ,$$

so the radius of the i-th cylinder moves with the radial velocity of the flow. This procedure is repeated for the φ component of the EP equation, by substituting the solution ansatz (30), (32) into equation (9). Upon multiplying the resulting system by rψ(r) and integrating with respect to r, the term proportional to ψ'(q_i) again recovers exactly the q_i-equation (35). The term proportional to ψ(q_i) gives the evolution equation (36) for M_i which, after using the q_i-equation (35) in the last step, integrates to M_i = const; this conservation law is equation (37). The reduced dynamics is generated by the Hamiltonian

$$H = \frac{1}{2} \sum_{i,j=1}^{N} \Big( p_i\, p_j + \frac{M_i M_j}{q_i\, q_j} \Big)\, G(q_i, q_j) , \qquad (38)$$

which is the same Hamiltonian as obtained by substituting the momentum map (27) into the kinetic energy KE in equation (33).
Hence, we may recover the reduced equations (34) for p_i, (35) for q_i and (37) for M_i from the Hamiltonian (38) and the canonical equations

$$\dot{q}_i = \frac{\partial H}{\partial p_i} , \qquad \dot{p}_i = -\frac{\partial H}{\partial q_i} , \qquad \dot{M}_i = 0 . \qquad (39)$$

This result proves the following: the parameters of the momentum map (27) evolve by canonical Hamiltonian dynamics, so (27) defines a 2N-dimensional invariant manifold of measure-valued solutions of the EP equation (1) in cylindrical symmetry.

Solution properties

The remaining canonical equation, for the i-th Lagrangian angular frequency, is

$$\dot{\phi}_i = \frac{\partial H}{\partial M_i} = \sum_{j=1}^{N} \frac{M_j}{q_i\, q_j}\, G(q_i, q_j) .$$

Thus, as expected, the ignorable canonical angle variables φ = {φ_i}, with i = 1, 2, ..., N, decouple from the other Hamiltonian equations. In addition, we see that $q_i \dot{\phi}_i = \hat{\phi} \cdot u(q_i, t)$, so the angular velocity of the i-th cylinder also matches the angular velocity of the flow. Therefore, we have shown:

Proposition 4.2 The canonical Hamiltonian parameters in the momentum map and solution ansatz (27) provide a Lagrangian description in cylindrical symmetry of the flow governed by the Eulerian EP equation for geodesic motion (1).

Angular momentum, fluid circulation and collapse to the center. Finally, the fluid circulation of the i-th concentric circle c_i, which is traveling with velocity u, may be computed from equations (29) and (30) (with a slight abuse of notation) as v_i = M_i / q_i. We see that this "angular velocity" v_i = M_i/q_i is the fluid circulation of the i-th concentric circle. Since the angular momentum M_i of the i-th circle is conserved, its circulation v_i(t) varies inversely with its radius. Consequently, this circulation would diverge if the i-th circle were to collapse to the center with nonzero angular momentum.

Radial peakon collisions

We consider purely radial solutions of equation (8), with m_φ = 0; such radial solutions have no azimuthal velocity and satisfy the purely radial equation (44). Without azimuthal velocity, the vector peakon solution ansatz (30) for the momentum reduces to the scalar relation

$$m_r(r,t) = \sum_{i=1}^{N} p_i(t)\, \frac{\delta\big( r - q_i(t) \big)}{r} , \qquad (45)$$

and the corresponding radial velocity is

$$u_r(r,t) = \sum_{i=1}^{N} p_i(t)\, G\big( r, q_i(t) \big) ,$$

where the Green's function G(r, q_i(t)) for the radial Helmholtz operator is given by formula (13). Radial peakons of this form turn out to be the building blocks for the solution of any radially symmetric initial value problem. We have found numerically that the initial value problem for equation (44) with any initially confined radial distribution of velocity quickly splits up into radial peakons. This behavior is illustrated in Fig. 2: the initial distribution of velocity splits almost immediately into a train of radial peakons arranged by height or, equivalently, speed. The head-on "peakon-antipeakon" collisions are of special interest. In the case of equal-strength radial peakon-antipeakon collisions, the solution appears to develop infinite slope in finite time; see Fig. 3. This behavior is also known to occur for peakon-antipeakon collisions on the real line. If the strengths of the peakon and antipeakon are not equal, then the larger of the two seems to 'plow' right through the smaller one. This is shown in Fig. 4. The figures shown were produced from numerical simulations of the Eulerian PDE (43). The momentum m_r was advanced in time using a fourth-order Runge-Kutta method. The time step was chosen to ensure that the Hamiltonian $\frac{1}{2}\int m_r\, u_r\, r\, dr$ was conserved to within 0.1% of its initial value. The spatial discretizations ranged from dr = 10^{-4} to dr = 0.02, depending on the desired resolution and the length of the spatial domain, and the spatial derivatives were calculated using finite differences. Fourth- and fifth-order centered differencing schemes were used for the first and second derivatives, respectively.
The momentum m_r was found from the velocity u_r using the finite-difference form of the radial Helmholtz operator, and the velocity u_r was found from the momentum m_r by inverting the radial Helmholtz matrix. For the peakon interaction simulations, the initial conditions were given by a sum of peakons of the form (45) for some chosen initial p_i and q_i. For a peakon collapsing to the center, which will be described next, the boundary condition at the origin is important. If the PDE (43) were extended to r < 0, then the velocity would be an odd function about the origin. In addition, when the peakon was sufficiently close to the origin (q_i < 0.1), the sign of its momentum m_r was reversed to begin its expansion away from the origin. For comparison with the simulations of the Eulerian PDE (43), simulations of the Lagrangian ODEs (39) were also performed. A fourth-order Runge-Kutta method was used to advance the system in time, and a time step was chosen to ensure the Hamiltonian (38) was conserved to within 0.1%. The results of these simulations agreed with those of the Eulerian PDE simulations to within 1%.

Bouncing off the center

Let us first consider the case when only one peakon collapses onto the center, with zero angular momentum. The Hamiltonian in this case is

$$H = \frac{1}{2}\, p^2\, G(q, q) = \frac{1}{2}\, p^2\, I_1(q)\, K_1(q) ,$$

which can be approximated, when q → 0, as H ≈ p²/4, since G(q, q) → 1/2 in this limit. Thus, the momentum p(t) is nearly constant just before the collapse time t_* and is approximately equal to −2√H; more precisely, p(t) → −2√H as q(t) → 0. The equation of motion for q(t) yields

$$\dot{q} = p\, I_1(q)\, K_1(q) = -\sqrt{H} + o(q) .$$

If q → 0 as t → t_*, then we necessarily have q(t) ≃ √H (t_* − t) near the time of collapse t → t_*. The case of N radial peakons can be considered similarly. If only one peakon (say, number a) collapses into the center at time t_*, so q_a(t) → 0 as t → t_*, and the motion of the peakons away from the center is regular in some interval (t_* − δ, t_* + δ) (as will be the case unless a peakon-antipeakon collision occurs during this interval), then conservation of the Hamiltonian implies

$$p_a^2\, G(q_a, q_a) + 2\, p_a\, A + B = 2H , \qquad (46)$$

where A and B collect the interaction terms involving the remaining peakons, $A = \sum_{j \neq a} p_j\, G(q_a, q_j)$ and $B = \sum_{j,k \neq a} p_j\, p_k\, G(q_j, q_k)$. Since q_a = min(q_1, ..., q_N), the Helmholtz Green's function in expression (13) implies that the quantities A and B are bounded at times close to t_*, and that G(q_a, q_a) is bounded as well. Consequently, equation (46) implies that p_a is also bounded at times close to t = t_*. Numerical simulations confirm our predictions: at the moment of impact at the center, the amplitude of the peakon remains bounded and approaches the value −√H ≈ −2.23, as illustrated in Fig. 5. The slope of the solution at the origin, however, has to diverge. This can be seen in the example of a single peakon as follows. Since u_r(r,t) = p G(r, q) and, for r < q, G(r, q) = I_1(r) K_1(q), the slope at the origin is ∂_r u(0,t) = p I_1'(0) K_1(q) = (p/2) K_1(q), and K_1(q) → ∞ as q → 0. Thus, if q(t) → 0 as t → t_*, the slope ∂_r u(r = 0, t) must diverge. Therefore, the following Proposition is true:

Proposition 5.1 A radially symmetric peakon with no angular momentum, collapsing to the center, has bounded momentum and unbounded slope at the origin close to the moment of collapse.

Numerical Results for Rotating Peakon Circles

Simulations of rotating peakons were performed for the Eulerian system of PDEs (8), (9) using the same numerical methods as those used for non-rotating peakons. Fig. 6 shows the results of an initial value problem simulation in which u_r is initially 0 and u_φ is initially a Gaussian function. Radial velocity in both directions is almost immediately generated, and rotating peakons soon emerge, moving both inward and outward but all rotating in the same direction.
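For readers who wish to reproduce the Lagrangian comparison runs described above, a minimal sketch of an integrator for the canonical ODEs (39) follows. This is our own illustration, not the authors' code: it assumes the N-peakon Hamiltonian (38) with the Green's function (13), uses an adaptive Runge-Kutta integrator from scipy rather than the fixed-step fourth-order scheme mentioned in the text, and evaluates the gradients of H by central finite differences to avoid hand-coding Bessel-function derivatives.

```python
# Sketch (our illustration): integrate the canonical ODEs (39) for N
# rotating circular peakons, with Hamiltonian (38)
#   H = 1/2 sum_ij (p_i p_j + M_i M_j / (q_i q_j)) G(q_i, q_j),
# where G(a, b) = I1(min(a, b)) K1(max(a, b)) as in Eq. (13).
import numpy as np
from scipy.special import i1, k1
from scipy.integrate import solve_ivp

def G(a, b):
    return i1(np.minimum(a, b)) * k1(np.maximum(a, b))

def hamiltonian(q, p, M):
    Gmat = G(q[:, None], q[None, :])
    return 0.5 * np.sum((np.outer(p, p) + np.outer(M / q, M / q)) * Gmat)

def rhs(t, y, M, eps=1e-6):
    n = len(M)
    q, p = y[:n], y[n:]
    dq, dp = np.zeros(n), np.zeros(n)
    for i in range(n):
        e = np.zeros(n); e[i] = eps
        dq[i] = (hamiltonian(q, p + e, M) - hamiltonian(q, p - e, M)) / (2 * eps)
        dp[i] = -(hamiltonian(q + e, p, M) - hamiltonian(q - e, p, M)) / (2 * eps)
    return np.concatenate([dq, dp])   # (dq/dt, dp/dt) = (dH/dp, -dH/dq)

q0, p0 = np.array([2.0, 4.0]), np.array([-0.5, 0.0])
M = np.array([0.3, 0.1])              # conserved angular momenta, dM/dt = 0
sol = solve_ivp(rhs, (0.0, 20.0), np.concatenate([q0, p0]),
                args=(M,), rtol=1e-8, atol=1e-10)
qf, pf = sol.y[:2, -1], sol.y[2:, -1]
print("relative energy drift:",
      hamiltonian(qf, pf, M) / hamiltonian(q0, p0, M) - 1.0)
```

Monitoring the drift of H, as in the last line, mirrors the conservation check the authors use to choose their time step.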
A rotating peakon approaches the center but turns around before reaching the origin. The radial velocity u_r, the angular velocity u_φ, and the velocity magnitude |u| are all shown. Fig. 7 shows a rotating peakon as it approaches the origin. A sort of angular-momentum barrier is reached, and the peakon turns around and moves away from the origin. Thus, a peakon's behavior as it approaches the origin is reminiscent of Sundman's theorem: if a peakon has nonzero angular momentum, then a full collapse to the origin will not occur. This result can be understood as follows. For a single rotating peakon, the Hamiltonian (38) becomes

$$H = \frac{1}{2}\Big( p^2 + \frac{M^2}{q^2} \Big)\, G(q, q) .$$

From the theory of Bessel functions we know that G(q,q) → 1/2 when q → 0, and G(q,q) > 0 for all q. Moreover, it can be shown that G(q,q) is strictly decreasing with increasing q > 0, so we conclude that G(q,q) > G(q₀,q₀) if 0 < q < q₀. Thus, when 0 < q < q₀,

$$2H \,\geq\, \frac{M^2}{q^2}\, G(q, q) \,>\, \frac{M^2}{q^2}\, G(q_0, q_0) , \qquad\text{so}\qquad q^2 \,>\, \frac{M^2\, G(q_0, q_0)}{2H} . \qquad (47)$$

The estimate (47) provides a lower bound on the radius q(t) can reach in the process of evolution, for each value of the parameter q₀. This estimate can be further optimized as follows. Since G(q₀, q₀) is strictly decreasing, starting from the value G(0,0) = 1/2, there is a q_*(M, H) such that

$$\frac{M^2}{q_*^2}\, G(q_*, q_*) = 2H$$

for each value of M, H > 0. Then q ≥ q_*(M, H) is the desired optimal estimate. (For small q_*, where G ≈ 1/2, this gives q_* ≈ M/(2√H).) We can summarize this result as follows: a rotating circular peakon with nonzero angular momentum cannot collapse to the center; its radius remains bounded below by q_*(M, H) > 0.

Conclusions. This cylindrical motion included rotation or, equivalently, circulation, which drives the radial motion. The momentum map with non-zero circulation for these concentric circles yielded a generalization of the circular CH peakons that included their rotational degrees of freedom. The canonical Hamiltonian parameters in the momentum map and solution ansatz (27) for the concentric rotating circular peakons provided a finite-dimensional Lagrangian description, in cylindrical symmetry, of the flow governed by the Eulerian EP partial differential equation for geodesic motion (1). Numerically, we studied the basic interactions of these circular peakons amongst themselves, by collisions and by collapse to the center, with and without rotation. The main conclusions from our numerical study were:

• Collapse to the center without rotation occurs with bounded canonical momentum and with vertical radial slope in velocity at r = 0 at the instant of collapse.

• For nonzero rotation, collapse to the center cannot occur and the slope at r = 0 never becomes infinite.

The main questions that remain are:

• Numerical simulations show that near-vertical or vertical slope occurs at a head-on collision between two peakons of nearly equal height. A rigorous proof of this fact is still missing.

• Is the motion integrable on our 2N-dimensional Hamiltonian manifold of concentric rotating circular peakons for any N > 1 and any choice of Green's function?

• How does one determine the number and speeds of the rotating circular peakons that emerge from a given initial condition?

• How does the momentum map with internal degrees of freedom generalize to n dimensions?

All of these challenging problems are beyond the scope of the present paper and will be the subjects of future work.

Appendix. The one-dimensional peakon is

$$m(x,t) = \sum_{i=1}^{N} p_i(t)\, \delta\big( x - q_i(t) \big) , \qquad (49)$$

where m satisfies the one-dimensional version of (1). We propose the following extension of these solutions:

$$m(x, y, t) = \sum_{i=1}^{N} \big( p_i(t)\, \hat{x} + v_i(t)\, \hat{y} \big)\, \delta\big( x - q_i(t) \big) , \qquad (50)$$

where $\hat{x}, \hat{y}$ are unit vectors in the x, y directions, respectively. The solution lives on line filaments, which are parallel to the y axis and propagate by translation along the x axis. However, the y component of momentum now has a nontrivial value.
Such solutions represent momentum lines which propagate perpendicular to the shock front and "slide" parallel to the front, moving the surrounding 'fluid' with them. Upon substituting (50) into the equations of motion (1), we see that the x and the y components of (1) both give the same equation of motion for q_i(t):

$$\dot{q}_i = \sum_{j=1}^{N} p_j\, G( q_i - q_j ) , \qquad (51)$$

where G is the Green's function of the one-dimensional Helmholtz operator. This compatibility is what makes the factorized solution (50) possible. The equation of motion for p_i is

$$\dot{p}_i = -\sum_{j=1}^{N} \big( p_i\, p_j + v_i\, v_j \big)\, G'( q_i - q_j ) , \qquad (52)$$

and for v_i, $\dot{v}_i = 0$. Thus, the v_i can be considered as a set of parameters. The (p_i, q_i) still satisfy Hamilton's canonical equations, with the Hamiltonian now given by

$$H = \frac{1}{2} \sum_{i,j=1}^{N} \big( p_i\, p_j + v_i\, v_j \big)\, G( q_i - q_j ) . \qquad (53)$$

Finally, the "angle" variables y_i(t) conjugate to v_i(t), with canonical Poisson bracket {y_i, v_j} = δ_ij, satisfy

$$\dot{y}_i = \frac{\partial H}{\partial v_i} = \sum_{j=1}^{N} v_j\, G( q_i - q_j ) .$$
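The appendix system (51)-(53) is as easy to integrate as the circular case. The following is a minimal sketch (our own illustration, not the authors' code), assuming the one-dimensional Green's function G(x) = e^{-|x|}/2 of the 1D Helmholtz operator 1 - d²/dx², with the usual peakon convention G'(0) = 0:

```python
# Sketch (our illustration): the filament-peakon ODEs (51)-(52), taking the
# 1D Helmholtz Green's function G(x) = exp(-|x|)/2, with G'(0) := 0.
import numpy as np
from scipy.integrate import solve_ivp

G  = lambda x: 0.5 * np.exp(-np.abs(x))
dG = lambda x: -np.sign(x) * G(x)          # odd derivative; sign(0) = 0

def rhs(t, z, v):
    n = len(v)
    q, p = z[:n], z[n:]
    dq = np.array([np.sum(p * G(q[i] - q)) for i in range(n)])            # (51)
    dp = np.array([-np.sum((p[i] * p + v[i] * v) * dG(q[i] - q))
                   for i in range(n)])                                    # (52)
    return np.concatenate([dq, dp])

v  = np.array([0.2, -0.1])                 # conserved 'slide' momenta v_i
z0 = np.array([-5.0, 5.0, 1.0, -0.4])      # q1, q2, p1, p2: asymmetric pair
sol = solve_ivp(rhs, (0.0, 10.0), z0, args=(v,), rtol=1e-9)
print(sol.y[:2, -1])                        # filament positions at t = 10
```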
2014-10-01T00:00:00.000Z
2003-12-05T00:00:00.000
{ "year": 2003, "sha1": "125adeda35562da3f35bf4a6e09bc3ac8009d23b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/nlin/0312012", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "692845c0ac05b3365238074a6576aedd48be33d7", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
263749972
pes2o/s2orc
v3-fos-license
Combination of Walnut Peptide and Casein Peptide alleviates anxiety and improves memory in anxiety-model mice Introduction Anxiety disorders continue to prevail as the most prevalent cluster of mental disorders following the COVID-19 pandemic, exhibiting substantial detrimental effects on individuals' overall well-being and functioning. Even after a search spanning over a decade for novel anxiolytic compounds, none have been approved, and the current anxiolytic medications are effective only for a specific subset of patients. Consequently, researchers are investigating everyday nutrients as potential alternatives to conventional medicines. Our prior study analyzed the antianxiety and memory-enhancing properties of the combination of Walnut Peptide (WP) and Casein Peptide (CP) in zebrafish. Methods and Results Based on this work, our current research further validates their effects in mouse models exhibiting elevated anxiety levels through oral administration of the combination by gavage. Our results demonstrated that, at the human-equivalent dose of 170 + 300 mg, the WP + CP combination significantly improved performance in behavioral assessments related to anxiety and memory. Furthermore, our analysis revealed that the combination restored the neurotransmitter dysfunction observed while monitoring serotonin, gamma-aminobutyric acid (GABA), dopamine (DA), and acetylcholine (ACh) levels. This supplementation also elevated the expression of brain-derived neurotrophic factor (BDNF) mRNA, indicating protective effects against the neurological stresses of anxiety. Additionally, there were strong correlations among behavioral indicators, BDNF, and numerous neurotransmitters. Conclusion Hence, our findings propose that the WP + CP combination holds promise as a treatment for anxiety disorder. Moreover, supplementary applications are feasible when the combination is produced as a powdered dietary supplement or added to common foods such as powder, yogurt, or milk. Introduction Anxiety disorders are the most common category of mental disorders, characterized by feelings of unease, worry, fear, tension, and apprehension (1). According to the World Health Organization (WHO), approximately 380 million individuals worldwide are affected by anxiety disorders (2). Anxiety is associated with a decreased quality of life and overall functioning. Besides emotional symptoms, anxiety leads to brain dysfunction, such as depression and dementia (3). Research has found that chronic stress can lead to alterations in both the neuroendocrine and neurotransmitter systems, which in turn impact the creation and management of memories (4, 5). In a 2015 study, Luiz Pessoa explored the multiple interactions between anxiety and cognitive functions within the brain. In particular, he looked at how these interactions in the prefrontal cortex (PFC) can minimize response conflicts and selectively affect working memory. Additionally, negative emotions have been shown to disproportionately influence processes related to memory (6).
Although anxiolytic drugs such as antidepressants and benzodiazepines have been developed to address anxiety symptoms, their efficacy differs among individuals, and they may lead to addiction or other adverse effects (7). It is worth noting that the Food and Drug Administration has released no new anxiolytic agents since 2007 (8). In addition to drugs, there is growing interest in the potential anxiety-alleviating properties of foods, such as yogurt combined with specific ingredients. This natural, safe, uncomplicated, and budget-friendly approach can be conveniently managed by individuals. Consequently, utilizing food to alleviate anxiety presents a strategy that mitigates the risks linked to psychotropic medications. Walnut peptide (WP) is a bioactive peptide extracted from walnut protein, a nutritious food rich in polyunsaturated fatty acids, proteins, and minerals (9). WP has been reported to enhance sleep quality, memory, and cognition in mouse models and human clinical trials (10). Meanwhile, in numerous preclinical and clinical investigations, casein peptide (CP), an essential bioactive peptide derived from milk, has exhibited anxiety-reducing properties, establishing its potential as a therapeutic intervention (11). Owing to WP's and CP's nutritional benefits and functional properties, they are widely used in food and beverage products. Adding WP and CP to powder, yogurt, or milk (dairy products that people consume daily) is a convenient way to take in various nutrients. Combining WP and CP as a nutrient combination could provide the antianxiety and memory-improving effects that WP and CP exhibit individually, while reducing production costs. WP and CP may relieve anxiety through neurotransmitters such as dopamine (DA), serotonin, gamma-aminobutyric acid (GABA), and acetylcholine (ACh), which regulate emotions, cognitive functions, and memory formation (12). However, the roles of walnut peptide and casein peptide in neurotransmission remain unclear. Our previous study discovered that the nutrient combination WP + CP at 56.7 + 100 μg/mL showed antianxiety, antioxidant, neuroprotective, and memory-improving effects in zebrafish (13); whether WP and CP exhibit a synergistic effect and alleviate anxiety in a mouse model of anxiety was previously unknown. This study aims to investigate and highlight the antianxiety and memory-improving effects of WP + CP as a cost-effective nutrient combination in mice. Furthermore, we seek to elucidate the underlying mechanisms involved, including neurotransmitters, neurotrophic factors (BDNF), and microglial cells. We established a chronic anxiety model in mice using an elevated open platform. Then, we tested the antianxiety and memory-improving effects of the WP + CP combination, both alone and added to powder, yogurt, or milk. In this way, we could determine whether the effects of the WP + CP combination are consistent when consumed in powder, yogurt, or milk by anxiety-model mice. This was the first time that the effects of WP + CP combinations were evaluated for anxiety relief and memory improvement in rodents. Animals A total of 100 male C57BL/6J mice (6-8 weeks old) were acquired from Charles River Laboratories Animal Technology Co., Ltd. (Beijing, China) for this study. The mice were housed in standard cages under specific pathogen-free (SPF) conditions, with a consistent 12-h light-dark cycle, and given standard laboratory chow and distilled water (Manufacturer: Beijing Keao Xieli Feed Co., Ltd., Production License: SCXK (jing) 2015-0013, Batch Number: 20229811). Kangcheng Biotech, Ltd., Co.
(Sichuan, China) facilities were used to maintain the mice in a room with ambient temperature (21°C-24°C) and humidity (40%-60%). Four mice were housed in each cage. The animal production license number was SYXK (Chuan) 2019-215. All procedures followed the guidelines of the Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC) and were approved by the Institutional Animal Care and Use Committee (IACUC) of the West China Hospital, Sichuan University (Approval No. 2019194A). Model establishment and grouping The anxiety model was established with an elevated open platform (EOP), with slight adjustments based on a previous study (14). Mice were exposed to a clear square Plexiglas board (10 cm × 10 cm), elevated 1 m above the ground, for 1 h per day over 30 days (Figure 1; Supplementary Figure 1). The control mice were not exposed to the elevated open platform (EOP) modeling, yet their living conditions and environment were consistent with those of the other experimental groups. Mice were randomly assigned to one of 10 groups based on their body weight on day one: control, model, buspirone, powder, yogurt, milk, C, C + powder, C + yogurt, and C + milk. The control, model, and buspirone groups were designed to test the stability of our modeling system. C is short for the WP + CP combination. Since the combination will enter the powder, yogurt, or milk market, we set up product groups containing C (C + powder, C + yogurt, and C + milk), in which the base for the product was powder, yogurt, and milk, respectively. All the nutrients were given by intragastric administration. The protocols utilized for administering intragastric (IG) treatment to mice subjected to the EOP model are described herein (Figure 1). Mice arriving at the laboratory were designated as "eligible mice" if they weighed between 21 and 25 g and covered 3,000-5,000 cm in the open field test (OFT; Supplementary Figure 2). This criterion ensured a consistent and standardized selection process. Locomotor activity, assessed by distance traveled in the open field test, served as a prerequisite for inclusion in the study and minimized potential confounding factors affecting the reliability and validity of outcomes. Behavioral assessments were conducted from day 30 through day 41, after which the mice from each group were humanely sacrificed and sampled, one week following the conclusion of the behavioral experiment. Sample information Buspirone hydrochloride (Sigma Chemical Company, St. Louis, MO, United States), a 5-HT1A receptor agonist, was dissolved in saline and given intraperitoneally (IP) once daily for 30 days at 2 mg/kg, as previously reported (15). The powder, yogurt, and milk doses were determined according to the recommended human doses (16-18). The information on the combination and the products containing the combination is shown in Table 1. Assessment of animals' physical states Body weight, food intake, and coat state score were evaluated every Wednesday between 10 a.m. and 12 p.m. (Figure 1). More details are given in the Supplementary material. Behavioral assessment 2.5.1. Open-field test Each mouse was carefully placed in the center of an open field box (50 × 50 × 50 cm) and allowed to explore for 10 min, following established procedures (20). The software was used to record and analyze the distance traveled within the open field and the number of times each mouse entered the central area of the arena (measuring 25 × 25 cm). The TopScan package provided by Clever Sys Inc.
based in the United States was used to quantify these parameters accurately.

Elevated plus-maze test

The elevated plus-maze (EPM) consisted of two arms, open and closed, with the closed arms having 20 cm high walls. Both arms were 30 cm in length and 5 cm in width, and the EPM was elevated to a height of 60 cm above the ground, forming a 90° angle between the arms. Before the test, the mice could explore the central area for 5 min while facing the open arm. The open arm entry percentage (OE%) was calculated by dividing the number of entries into the open arm by the total number of entries into both arms and multiplying by 100. The open arm time percentage (OT%) was calculated by dividing the time spent in the open arm by 300 s (the total test duration) and multiplying by 100 (21).

Light-dark box test

As previously mentioned, anxiety was tested in the light/dark box (LDB) (22). A Plexiglas box with two equal chambers (18 × 12 × 12 cm) and an opening in the middle (5 cm length × 5 cm width) was used. The experiment was conducted in a dimly lit room (60 lx), and an infrared camera recorded the dark box. The mice were placed in the lightbox facing the dark box, and their exploration behaviors were recorded for 10 min after they crossed the opening (22).

Novel object recognition test

Recognition was assessed using the novel object recognition (NOR) test, which comprised a learning trial and a subsequent test trial (23). In the former, two identical objects were presented to the mouse, which was allowed to explore them for 10 min in an open field box (60 × 25 × 25 cm). After a one-hour interval, the mouse underwent a test trial, during which one of the objects was replaced with a novel object. The mouse was given 5 min to explore both objects. The discrimination ratio of the recognition index was calculated as (23):

$$\text{Discrimination ratio} = \frac{\text{time spent exploring the novel object}}{\text{total time spent exploring both objects}} \times 100\%$$

Avoidance test

The avoidance test (AT) protocol was conducted in a light/dark shuttle box with minor adjustments from day 36 to day 41, as previously described (24). The AT, including the active avoidance test (AAT) and passive avoidance test (PAT), involved three consecutive days, starting with an adaptation day on which the mouse explored both compartments for 2 min. On the training day, the mouse received an electric shock upon entering the dark compartment, while on the PAT test day, the mouse explored the bright compartment without shock. The fourth day was designated for the AAT, in which the mouse explored the dark compartment without shock.

Figure 1. Experimental design and timeline. The experimental design for EOP and the subsequent treatment is illustrated in a timeline extending over 30 days. The mice were subjected to adaptive feeding for 7 days, after which they were randomly assigned to 10 groups based on body weight and total distance traveled. Each group comprised 10 mice. Following group assignment, the mice were subjected to a daily one-hour session on an elevated platform. They were orally administered nutrients before being returned to their cages. Behavioral tests (OFT, NOR, EPM, LDB, and AT) were performed after 30 days of modeling to assess anxiety relief and cognitive improvement. Additionally, parameters such as body weight, food intake, coat state scores, and SPT were measured every week (as indicated by red dots in the figure). Finally, on day 42 of the experiment, the mice were humanely euthanized, and their brain tissues and serum were collected for analysis.
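For readers who prefer the indices above in executable form, the following minimal Python sketch computes the NOR discrimination ratio and the EPM open-arm percentages exactly as defined; all function names and example values are illustrative and not taken from the study.

```python
# Minimal sketch of the behavioral indices defined above; function names and the
# example values are illustrative, not taken from the study.

def discrimination_ratio(t_novel: float, t_familiar: float) -> float:
    """NOR discrimination ratio: time on the novel object over the total
    exploration time of both objects, expressed as a percentage."""
    return t_novel / (t_novel + t_familiar) * 100.0

def open_arm_indices(n_open: int, n_closed: int, t_open: float,
                     test_duration: float = 300.0) -> tuple:
    """EPM indices: open-arm entry percentage (OE%) and open-arm time
    percentage (OT%), using the 300 s total test duration by default."""
    oe = n_open / (n_open + n_closed) * 100.0
    ot = t_open / test_duration * 100.0
    return oe, ot

print(discrimination_ratio(t_novel=42.0, t_familiar=28.0))   # 60.0
print(open_arm_indices(n_open=6, n_closed=10, t_open=75.0))  # (37.5, 25.0)
```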
Sucrose preference test

To prepare for the sucrose preference test (SPT), a 1% sucrose solution and distilled water were simultaneously introduced into each mouse's cage. This allowed for 2 days of taste adaptation. On the third day, after a six-hour fasting and water deprivation period, the mice were placed individually in cages that contained pre-weighed bottles of 1% sucrose solution and distilled water. Six hours later, the positions of these bottles were swapped. Following an additional 12 h, both bottles were removed. The remaining liquid was measured to calculate the sucrose preference of the mice using the following formula (25):

$$\text{Sucrose preference (\%)} = \frac{\text{sucrose consumption}}{\text{total liquid consumption}} \times 100\%$$

SPT was performed weekly, consistent with the time points of body weight, food intake, and coat state scores (Figure 1).

The general packaging size for liquid dairy products is 200 mL; the powder is around 25 g dissolved in 180-200 mL of water. The administration dosage for the mice was converted according to the body surface area of a 60 kg human. According to the conversion ratio between humans and mice (9.1) (19), the daily combination dose for mice can be calculated as: conversion ratio of surface area between mice and humans (9.1) × daily adult dose (25 g/d powder) / average adult weight (60 kg), that is, 9.1 × 25 g / 60 kg = 3.79 g/kg. The feeding volume was 20 mL/kg for each mouse. The combinations of WP + CP were tested at three dosages: 85 + 200 mg, 170 + 300 mg, and 170 + 600 mg. The open field test (OFT) and elevated plus-maze (EPM) showed that the combination at 170 + 300 mg had the best antianxiety effects (Supplementary Figures 3A-C). Furthermore, discernible outcomes were observed in the light-dark box (LDB), novel object recognition (NOR), and passive avoidance test (PAT) for both the C-medium and C-high groups (Supplementary Figures 3D-F). Therefore, the medium dosage (170 mg WP + 300 mg CP) was selected for the following studies.

Liquid chromatograph mass spectrometer

The brains of mice were weighed and homogenized in RNase-free water at a 1:4 ratio for detection. The samples were centrifuged at 12,000 g for 5 min at 4°C. Protein precipitation was performed before analysis on the LC-MS system, and the data are expressed as ng per g tissue. HPLC-MS, using Analyst® software, detected serotonin, ACh, GABA, and DA in the prefrontal cortex (PFC). A Waters column was utilized with mobile phases A and B. The chromatographic gradient is presented in Table 2, and the injection volume was set at 5 μL, with dexamethasone and verapamil as internal standards.
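The sucrose preference formula and the human-to-mouse dose conversion quoted above are simple enough to verify numerically. The sketch below reproduces both calculations; only the 9.1 surface-area conversion ratio, the 25 g/day adult dose, and the 60 kg adult weight come from the text, while the consumption values are placeholders.

```python
# Illustrative sketch of the two calculations above; consumption values are placeholders.

def sucrose_preference(sucrose_g: float, water_g: float) -> float:
    """Sucrose preference (%) = sucrose consumption / total liquid consumption x 100."""
    return sucrose_g / (sucrose_g + water_g) * 100.0

def human_to_mouse_dose(daily_dose_g: float, adult_weight_kg: float = 60.0,
                        conversion_ratio: float = 9.1) -> float:
    """Body-surface-area dose conversion from an adult human to a mouse (g/kg/day)."""
    return conversion_ratio * daily_dose_g / adult_weight_kg

print(sucrose_preference(sucrose_g=3.2, water_g=1.1))  # ~74.4 %
print(human_to_mouse_dose(daily_dose_g=25.0))          # 9.1 * 25 / 60 ~ 3.79 g/kg
```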
RNA extraction and real-time reverse transcription polymerase chain reaction analysis

The modified guanidine isothiocyanate-phenol-chloroform method, incorporating RNX+ reagent, was used to isolate RNA from the right hemisphere hippocampus, following the manufacturer's protocol, with subsequent treatment using RNase-free water to prevent DNA contamination. Spectrophotometry, using a Thermo NanoDrop 2000c spectrophotometer, was used to quantify the concentration and purity of all RNA samples and to determine the mean absorbance ratio and optical density (OD) at 260/280 nm. The cDNA was synthesized using HiScript III RT SuperMix for qPCR (+gDNA wiper; Vazyme, R323, China) and ChamQ Universal SYBR qPCR Master Mix (Vazyme, Q711, China). The primers used to analyze β-actin and brain-derived neurotrophic factor (BDNF) were β-actin, 5′-CCACCATGTACCCAGGCATT-3′ (forward) and 5′-CAGCTCAGTAACAGTCCGCC-3′ (reverse); BDNF, 5′-TCCGGGTTGGTATACTGGGTT-3′ (forward) and 5′-GCCTTGTCCGTGGACGTTT-3′ (reverse). RT-PCR analyses were conducted using the Bio-Rad CFX Manager (Bio-Rad, CFX Connect, United States) according to the manufacturer's protocol. The DNA amplification process consisted of an initial cycle at 95°C for 30 s, followed by 40 cycles of denaturation (95°C for 10 s), annealing (60°C for 30 s), and extension (95°C for 15 s). The 2^−ΔΔCt method was used to calculate the expression of BDNF relative to β-actin, with a single calibrated sample serving as the reference for comparison with the expression of all unknown samples.

Ionized calcium-binding adapter molecule 1 immunohistochemistry

Ionized calcium-binding adapter molecule 1 (Iba1) is a protein that marks microglial cells. The mouse brains were fixed in 10% buffered formalin and then embedded in paraffin. For immunohistochemical analysis, 4-μm sections were prepared. The anti-Iba1 antibody from Wako (Richmond, VA) was used at a dilution of 1:1,000 to detect Iba1. We quantified the number of Iba1-positive cells, the total number of cells, and the number of positively stained cells in the areas with the highest cell density in 10 non-overlapping microscopic fields (at 400× magnification) of brains taken from mice in each group.

Statistical analysis

The statistical analysis was conducted using SPSS 26.0 software (IBM Ltd., United Kingdom). Data conforming to normal distributions were represented as means ± standard errors of the mean (SEM). Unpaired t-tests were utilized to analyze the differences between the control and model groups. A one-way analysis of variance (ANOVA) followed by post hoc multiple comparison tests (Tukey's or Dunnett's) was used to analyze differences among the model, buspirone, and each treatment group when the data followed a normal distribution and the variances were equal. A repeated measures ANOVA was employed to examine group variations in body weight, food intake, coat state, sucrose preference, and fecal amount. Interaction effects between repeated indicators and days were tested by Pillai's trace. In the case of interactions observed between a specific variable and days, the differences among the groups were compared at the final time point. If no interactions were present, post hoc analysis using Bonferroni's multiple comparison tests was conducted. The correlations between effects were assessed using Pearson's correlation. For all analyses, two-sided p-values < 0.05 were considered statistically significant.
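The 2^−ΔΔCt calculation referenced above can be made concrete with a few lines of Python; the Ct values in the example are invented placeholders, not study data.

```python
# Hedged sketch of the 2^-ddCt calculation described above; Ct values are invented.

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ddCt: target gene (e.g., BDNF) normalized to the reference gene
    (beta-actin) and to a single calibrator sample."""
    dd_ct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-dd_ct)

# Example: sample BDNF Ct = 24.1, actin Ct = 17.9; calibrator BDNF Ct = 23.0, actin Ct = 18.0
print(relative_expression(24.1, 17.9, 23.0, 18.0))  # ~0.44, i.e., reduced expression
```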
Changes of general states in each group

During the anxiety model establishment and combination administration, the body weight, food intake, coat state score, and sucrose preference test (SPT) were determined weekly (Figure 1). We tested three WP + CP combination dosages: 85 + 200 mg, 170 + 300 mg, and 170 + 600 mg. During the anxiety experiment, the C-low group demonstrated markedly reduced scores on the open field test (OFT) and EPM assessments compared to the C-medium and C-high groups. In the cognitive tests, discernible disparities in the NOR and PAT tests were solely observed for the C-medium and C-high groups when contrasted against the control group, thereby substantiating the decision to establish the combination dosage as the medium dose: 170 + 300 mg (Supplementary Figure 3).

The body weights remained steady, slightly decreasing in some groups after EOP, and there was an interaction effect between body weight and days (Group × day, F = 3.027, df = 63, p < 0.001; Figure 2A). The simple effect of body weight is shown in Supplementary Tables 1A,B. On day 49, the model group had the lowest body weight compared with the control (t = 3.054, df = 18, p = 0.007; Figure 2B). One-way ANOVA was employed to analyze the difference between the model group and all the treatment groups (buspirone, powder, yogurt, milk, C, C + powder, C + yogurt, and C + milk). Since differences were evident among these groups [F(8, 81) = 13.04, p < 0.001], a post hoc Bonferroni test was conducted, revealing that buspirone, yogurt, C + yogurt, and C + milk led to a marked increase in body weight compared to the model group (p < 0.001, p = 0.029, p < 0.001, p < 0.001, respectively; Figure 2B). Compared with yogurt and milk, C + yogurt and C + milk increased body weight (p = 0.032, p = 0.001; Figure 2B).

Besides body weight, food intake decreased gradually in general, especially in the model group, possibly due to anxiety-induced appetite loss. Due to the interaction effects between food intake and days (Group × day, F = 4.906, df = 54, p < 0.001; Figure 2C), we only compared changes in food intake at the last time point, day 42. Food intake was lowest in the model group, significantly lower than control (t = 4.358, df = 18, p < 0.001; Figure 2D). Differences existed between the model and treatment groups [F(8, 81) = 4.337, p = 0.001]. The post hoc Bonferroni test revealed that buspirone, C, C + powder, C + yogurt, and C + milk could all increase the food intake of anxious mice (p < 0.001, p = 0.005, p = 0.037, p < 0.001, p < 0.001; Figure 2D). The simple effects of food intake were analyzed in Supplementary Tables 2A,B.

The EOP represents a prominent approach to anxiety modeling. Our study evaluated coat condition and anhedonia in mice following 30 days of long-term intragastric therapy and modeling. An interaction effect was found between coat state scores and days (Group × days, F = 2.173, df = 45, p < 0.001; Supplementary Tables 3A,B), and the highest coat state scores were observed in the model group (Figure 2E). SPT was performed six times, and an interaction effect existed between sucrose preference and days (Group × days, F = 5.643, df = 45, p < 0.001; Supplementary Tables 4A,B). Sucrose preference consistently decreased during the early stages of the modeling process. However, in the later stages (after 21 days), almost all groups showed varying degrees of recovery (Figure 2F), together with simple effects of groups. When comparing changes in sucrose preference on day 35, the model group was lower than the control (t = 2.402, df = 18, p = 0.027). One-way ANOVA was employed to analyze the model group and treatment groups [F(8, 81) = 8.492, p < 0.001]. Following the analysis, a post hoc Bonferroni test revealed that buspirone, yogurt, milk, C + yogurt, and C + milk helped increase sucrose preference (p = 0.033, p = 0.006, p = 0.002, p < 0.001, p < 0.001; Figure 2G).
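As a rough illustration of the statistical workflow applied throughout these results (unpaired t-test for control vs. model, one-way ANOVA across the model and treatment groups, Bonferroni-corrected post hoc comparisons), the following Python sketch uses randomly generated placeholder data; it is not a reproduction of the study's SPSS analysis.

```python
# Sketch of the statistical workflow described in the Methods; requires scipy.
# Group data below are random placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(4.0, 0.5, 10)   # e.g., food intake per mouse (g/day)
model = rng.normal(3.2, 0.5, 10)
treatments = {name: rng.normal(3.6, 0.5, 10)
              for name in ["buspirone", "C", "C+powder", "C+yogurt", "C+milk"]}

t, p = stats.ttest_ind(control, model)                     # control vs. model
print(f"t = {t:.3f}, p = {p:.4f}")

f, p_anova = stats.f_oneway(model, *treatments.values())   # model vs. treatments
print(f"F = {f:.3f}, p = {p_anova:.4f}")

# Post hoc pairwise comparisons with a Bonferroni correction (alpha / n tests)
alpha_adj = 0.05 / len(treatments)
for name, data in treatments.items():
    _, p_pair = stats.ttest_ind(model, data)
    print(name, "significant" if p_pair < alpha_adj else "n.s.")
```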
Anxious mice exhibited lower activity in the open-field experiment and preferred the periphery over the central area. Correspondingly, mice in an anxious state produced more fecal particles (26). Therefore, we aimed to measure the mice's anxiety levels by quantifying fecal particle production on the elevated platform. Clearly, there was an interaction effect between feces amount and days (Group × days, F = 1.746, df = 120, p < 0.001; Supplementary Tables 5A,B). The C + milk group exhibited a lower fecal particle count than the model group. In contrast, the milk group demonstrated significantly higher particle levels than the C + milk group (Figure 2H).

Figure 2. The influence of the combined administration of WP + CP on the fundamental physiological data in mice. Changes in body weight throughout the experiment (A); at the experimental endpoint of 49 days, discernible variations in body weight were observed among the different groups of mice (B). Food intake (C) was monitored weekly, and at the 42-day experimental endpoint, conspicuous disparities in food consumption were observed across the various mouse cohorts (D). Changes in coat state score (E), SPT (F), and the amount of feces (H) were recorded every week. On the 35th day of the experiment, notable variations in the percentage of water preference among the different groups were detected (G). The data, representing mean values ± SEM, n = 10 in each group, were analyzed using repeated measures analysis of variance (ANOVA) to investigate potential differences among groups. If interactions between one index and days existed, we compared the differences among groups at the last time point (B,D,F). If no interactions existed, post hoc analysis (Bonferroni's multiple comparison tests) was done next.

Results of behavioral tests in evaluating anxiety states

During days 30-35, a sequential battery of tests was conducted to assess the mice's anxiety-like behaviors, including the OFT, EPM, and LDB. The percentage of time spent in the inner zone was lower in the model group than in the control (t = 2.654, df = 18, p = 0.016). At the same time, buspirone treatment increased center exploration compared to the model (t = 2.654, df = 18, p = 0.016; Figures 3A,B). The model group exhibited significantly fewer rearing times than the control (t = 3.557, df = 18, p = 0.002). One-way ANOVA was employed to analyze the model group and treatment groups [F(8, 81) = 5.186, p < 0.001]. Following the analysis, a post hoc Bonferroni test revealed that buspirone, C + yogurt, and C + milk induced more rearing relative to yogurt and milk (p = 0.004, p = 0.002, p = 0.010; Figure 3C). Supplementary Table 6 shows the additional metrics of the open field activity. During the EPM test, the model group exhibited the lowest percentage of open-arm entries and time spent in the open arms. However, treatment with buspirone, milk, C, and C + powder/yogurt/milk increased open-arm exploration compared to the model (Figures 3D-F). This suggests that the combination and its products effectively alleviate anxiety-like behavior in mice.
We employed the LDB test as a third behavioral assay to assess the anxiety state. The model group showed a remarkably shorter time spent in the light box than the controls (t = 4.632, df = 18, p < 0.001). One-way ANOVA was employed to analyze the model group and treatment groups [F(8, 81) = 8.536, p < 0.001]. Following the analysis, a post hoc Bonferroni test revealed that buspirone, C, C + powder, C + yogurt, and C + milk prolonged the time spent in the lightbox (all p-values < 0.001). In particular, the C + powder and C + yogurt groups exhibited a substantially longer time spent in the light box, suggesting a reduced anxiety-like phenotype (Figures 3G,H). Traces of OFT, EPM, and LDB in the treatment groups are shown in Supplementary Figure 4.

Changes in stress-related hormones

We subsequently investigated the alterations in stress-related hormones in the serum, the commonly used biomarkers in clinical settings that reflect changes in the hypothalamic-pituitary-adrenal axis. Notably, the levels of corticosterone and ACTH were elevated in the model compared with the control ([corticosterone]: t = 5.927, df = 18, p < 0.001; [ACTH]: t = 4.119, df = 18, p < 0.001; Figures 3I,J). Excluding the control group, a one-way ANOVA was conducted on the remaining groups [F(8, 81) = 13.23, p < 0.001]. Regarding serum corticosterone levels, group C demonstrated elevated concentrations in comparison to C + yogurt (p = 0.043) and C + milk (p < 0.001), with a subsequent decrease in serum corticosterone upon the introduction of C to the three bases (Figure 3I). One-way ANOVA was also employed to analyze the model group and treatment groups [F(8, 81) = 6.446, p < 0.001]. As the upstream regulatory factor, changes in serum ACTH were found to be more responsive than those in corticosterone: reductions in serum ACTH were observed in C + powder and C + milk compared to the model (p = 0.006, p = 0.022). Furthermore, C + powder and C + milk triggered lower serum ACTH levels than the two bases ([powder vs. C + powder]: p = 0.010; [milk vs. C + milk]: p = 0.008; Figure 3J).

The WP + CP combination enhanced memory in mice with anxiety

We further conducted the NOR and avoidance (PAT and AAT) tests to evaluate memory impairment in anxious mice. On day 32, the NOR test indicated a decrease in the recognition index in the model group compared to the control (t = 6.071, df = 18, p < 0.001). This suggests that the persistent stress led to a decline in memory function. The administration of buspirone was efficacious in improving memory (Figures 4A,B).

In the PAT test, we observed a reduced latency to enter the dark box in the model and the three basal groups compared to the control (Figures 4C,D). One-way ANOVA was employed to analyze the model group and treatment groups [F(8, 81) = 8.646, p < 0.001]. Additionally, the C, C + yogurt, and C + milk groups had longer latency times than the model group (p = 0.040, p = 0.012, p = 0.048), with the combination-added products displaying longer latency times than their bases ([powder vs. C + powder]: p = 0.001; [yogurt vs.
C + yogurt]: p = 0.004; Figures 4C,D). During the AAT, a heightened latency upon entering the light box was noted in the model group and the three basal groups compared to the control (Figures 4E,F). Excluding the control group, a one-way ANOVA was conducted on the remaining groups [F(8, 81) = 8.201, p < 0.001]. Moreover, the C + powder group showed a more extended incubation period than the C + milk group (p = 0.035). For additional trajectory plots illustrating the traces of the treatment groups receiving the combination, consult Supplementary Figure 4.

WP + CP combination improved the imbalanced neurotransmitters and BDNF expression caused by anxiety

The occurrence and development of anxiety are closely related to changes in neurotransmitters, such as serotonin, GABA, DA, and ACh. Serotonin and GABA are the targets of commonly used antianxiety medications. We assayed neurotransmitter concentrations in the mice's PFC, revealing a significant reduction in serotonin levels in the model compared to the control (t = 7.751, df = 18, p < 0.001). Excluding the control group, a one-way ANOVA was conducted on the remaining groups [F(8, 81) = 3.007, p < 0.001]. Notably, C + powder and C + yogurt induced higher serotonin levels than the model (p = 0.004, p = 0.020). Moreover, C + yogurt also produced a higher serotonin concentration than yogurt (p = 0.037; Figure 5A). Furthermore, the model showed lower GABA concentrations than the control (t = 2.785, df = 18, p = 0.012). Excluding the control group, a one-way ANOVA was conducted on the remaining groups (Figure 5B). The DA concentration in the C group was considerably higher than in the model (p = 0.005; Figure 5C). Lastly, in comparison to the control group, the model group displayed a slight decrease in ACh. Moreover, the C + powder group exhibited a higher concentration of ACh when compared to the powder group (p = 0.004; Figure 5D). As it is widely observed that reduced BDNF protein expression is induced by chronic stress (27), we investigated BDNF expression in the hippocampus. BDNF expression was inhibited in the model compared to the control (t = 3.283, df = 18, p = 0.004). Excluding the control group, a one-way ANOVA was conducted on the remaining groups [F(8, 81) = 7.136, p < 0.001]. Conversely, buspirone treatment effectively elevated BDNF expression (p < 0.001; Figure 5E). Moreover, a marked increase in the relative expression level of BDNF was observed in the C, C + powder, C + yogurt, and C + milk groups compared to the model group.
The correlations between behavioral tests and serotonin or BDNF expression were strong

Through the comprehensive analysis of the data presented, we have verified the anxiolytic and memory-enhancing effects of WP + CP. As alterations in neurotransmitter and BDNF expression have been observed in response to anxiety, we aimed to explore the interplay among behavior, serum corticosterone, ACTH, neurotransmitters, and BDNF expression in mice (Figure 6A). Notably, a positive correlation was found between serotonin concentration in the PFC and anxiolytic behavioral indices, such as total time in light in the LDB (r = 0.646, p = 0.044; Figure 6B) and the RI (r = 0.632, p = 0.050; Figure 6C). Furthermore, OE% exhibited a positive correlation with relative BDNF expression (r = 0.773, p = 0.009; Figure 6D), consistent with the correlation observed for total time in light in the LDB (r = 0.883, p = 0.001; Figure 6E). Impressively, a strong correlation was also observed between the relative expression level of BDNF and the NOR (r = 0.881, p = 0.001; Figure 6F) and AAT experiments (r = −0.887, p = 0.001; Figure 6G). Given the observed correlation between fecal particle production and behavioral outcomes, we further investigated the potential connections between behavior, BDNF expression, and fecal excretion, as outlined in Supplementary Figure 5. Notably, fecal production correlated negatively with total time spent in the lit area of the LDB (r = −0.848, p = 0.004; Supplementary Figure 5A) and the RI (r = −0.827, p = 0.006; Supplementary Figure 5C), and positively with AAT latency (r = 0.840, p = 0.005; Supplementary Figure 5B). Additionally, the number of feces negatively correlated with BDNF mRNA expression relative to control levels (r = −0.958, p < 0.001; Supplementary Figure 5D). These correlation analyses indicate relatively strong associations between behaviors and neurotransmitters, and between behaviors and BDNF expression. The correlation matrix of behavioral and biological indicators is shown in Supplementary Table 7.

Figure 4. The memory improvement effects of the combination. The NOR test was performed on the 32nd day. (A) Schematic of the novel object recognition test; (B) the recognition index analyzed by NOR in each group. On the 36th day, the avoidance test was conducted: (C) schematic of the PAT; (D) the latency to enter the dark compartment by PAT for each group on the 36th day. On the 41st day, the avoidance test was conducted: (E) schematic of the AAT; (F) the latency to avoidance by AAT for each group. The data are expressed as mean + SEM with n = 10 in each group.

Besides neurotransmitters, numerous studies have highlighted the importance of low-grade inflammation in the pathophysiology of anxiety (28). We tested Iba1 expression, the biomarker of microglial activation, in the left hemisphere by immunohistochemistry. There were no significant differences in Iba1 microglial cell count (Supplementary Figure 6) between the control and model groups, indicating that any changes in Iba1 expression and coverage could not be ascribed to differences in cell number.
Discussion

The current study reported the anxiety-relieving and memory-improving effects of WP + CP at 170 + 300 mg (human dose), both as a combination and as its derived products (powder, yogurt, and milk). The dosage was designed by integrating the results from previous human trials (29, 30), zebrafish experiments (13, 31), and the current mouse experiments. Utilizing a combined nutrient combination of WP and CP in this study is essential as it not only exhibits antianxiety and memory improvement effects, as observed with WP and CP alone, but also presents a potential cost-reduction advantage. This study investigates the rationale behind utilizing this combination and presents the necessity for further research in this area.

We employed the EOP paradigm to establish anxiety models in mice. Although foot shock stress is commonly used to establish an anxiety model, inconsistent stress induction procedures can lead to anhedonia and learned helplessness (32, 33). Therefore, we chose the EOP paradigm, which utilizes a comparatively milder physical stimulus, to construct our mouse anxiety model. This approach provides a controlled and consistent method for inducing anxiety-like behaviors in mice, which helps in investigating the neurobiological basis of anxiety disorders and in assessing potential treatments.

In this study, we evaluated the anxiety states of mice using the OFT, EPM, and LDB tests. The results indicated that the EPM and LDB tests had more positive outcomes than the OFT test. The study confirmed the sensitivity of the EPM and LDB in detecting anxiolytic compounds, proving their reliability (34). Previous research has shown that the OFT is less reliable due to interference of the administration procedure with the actual experimental results, primarily when compounds are orally administered to mice fed different diets (35). In contrast, rearing was the most sensitive index in the OFT, as it reveals a mouse's exercise capacity in the vertical direction (36). In our study, we observed that the combination product C + milk yielded superior antianxiety effects compared to the milk group, as evidenced by significant differences in OFT, serum corticosterone, and ACTH indicators. These findings strongly suggest a synergistic interaction between the combined compound and milk, enhancing its potential to alleviate anxiety (37, 38).

Wang et al. and Zhao et al. investigated the neuroprotective effects of WP against lipopolysaccharide-induced memory deficits and the mitophagy-related mechanisms and pathways of walnut-derived peptides in mice (9, 39). Studies of CP on stress began 30 years ago with animal experiments and clinical trials showing insomnia- and anxiety-improving properties (40-42). Our previous studies in zebrafish revealed that the nutrient combination of WP + CP at 56.7 + 100 μg/mL exhibited antianxiety, antioxidant, neuroprotective, and memory improvement properties (13). This is the first evaluation of the effects of the WP and CP combination in rodents. Based on dose conversion between zebrafish and mice, the current study utilized WP + CP at a dosage of 25.87 + 45.50 mg/kg, equivalent to a human dose of 170 + 300 mg, which proved effective in improving anxiety and memory in mice.
Studies have shown that ACTH secretion is regulated by various factors, including stress stimuli. When exposed to stress, the pituitary gland releases ACTH, which stimulates the synthesis and release of corticosterone from the adrenal cortex. Subsequently, the elevated levels of corticosterone suppress ACTH secretion through a negative feedback mechanism, maintaining the homeostasis of the endocrine system (43). Dysfunctional neurotransmitter systems are involved in anxiety regulation (44). In primates, the PFC, the chief executive officer of the brain, regulates anxiety by engaging high-level regulatory strategies aimed at coping with and modifying the experience of anxiety (45). Based on these theories, we determined the concentrations of four neurotransmitters in the PFC of each mouse. In chronically stressed rodents, reduced 5-HT, GABA, and DA levels in the PFC have been reported (46-48), which is consistent with our model. We found that the products of the combination (C + powder/yogurt) significantly increased the 5-HT concentration in the PFC, demonstrating antianxiety effects through 5-HT. However, the treated groups had even higher GABA concentrations than the control in our study. This could be due to the long-term administration of dairy products, which may promote intestinal absorption and affect immunity, the microbiome, and the gut-brain axis (49-51). ACh plays a significant role in regulating muscles, the heart, the digestive tract, and the nervous system (52). Our study found that treating anxious mice with the combination and its products could increase ACh levels in the PFC, in line with the memory-related behavioral tests.

Previous studies have demonstrated reduced BDNF protein expression in the hippocampus induced by chronic stress (27), consistent with our experimental results. Recent research has revealed that the amygdala, known for its role in emotions, also processes non-conditioned stimuli (53-55). This sheds new light on the intricate relationship between cognition and emotion, traditionally associated with the hippocampus (56, 57). The hippocampus is one of the key brain structures for emotional response and is particularly susceptible to endogenous stressors. Meanwhile, BDNF exerts its activity most strongly in the hippocampus (58-60). Brain-derived neurotrophic factor (BDNF) plays a pivotal role in the nervous system by facilitating neuronal growth, differentiation, and survival, thus serving as a critical neurotrophic factor (61).

Ma et al. found that adult neurogenesis persists in the dentate gyrus of rodents and is stimulated by chronic treatment with conventional antidepressant drugs through the BDNF/tropomyosin receptor kinase B (TrkB) signaling pathway (56). Numerous studies have highlighted the importance of low-grade inflammation in the pathophysiology of anxiety, such as increased levels of proinflammatory cytokines in the brain (28). Inflammatory conditions promote tryptophan metabolism along the kynurenine pathway at the expense of the 5-HT pathway (62). We did not find a difference between the model and control groups, which may be due to the time of sampling and the brain region chosen for the IHC assays. Additionally, activated microglia exhibit phenotypes termed M1 and M2. M1 microglia contribute to the development of inflammation, while M2 microglia exert anti-inflammatory effects (63). More precise and meaningful results might be obtained by detecting the M1 and M2 microglia phenotypes separately.
One of the limitations of this study was the absence of BDNF protein expression measurements. In subsequent studies, we will conduct in-depth investigations of the specific mechanisms of WP + CP in anxiety relief and memory improvement, particularly the serotonin and BDNF pathways. Besides, administering buspirone by intraperitoneal (IP) injection rather than intragastric (IG) administration might cause a slight difference in anxiety levels. Previously, we showed that IP exhibited superior therapeutic efficacy, which makes it more suitable for administering the positive control (35). However, this did not affect the evaluation of the combinations in this study. Since this study was basic animal research, further investigation is required to determine the effective dose of WP and CP in humans.

Conclusion

Overall, the study investigated the impact of a WP + CP combination, administered at a human-equivalent dose of 170 + 300 mg, on anxiety relief and memory improvement in mice with anxiety. The combination, either alone or dissolved in products such as powder, yogurt, or milk, exhibited similar efficacy, possibly through the modulation of neurotransmitters or the BDNF pathway.

Figure 3. Effects of WP + CP on anxiety relief in mice. (A-C) OFT performed on the 30th day: representative pictures of control and model (A), changes in the percentage of inner zone time (B), and rearing times (C) in each group. (D-F) EPM performed on the 32nd day: representative pictures of the EPM (D), changes in OE% (E), and OT% (F) in each group. (G,H) LDB performed on the 35th day: representative trajectories (G) and changes in total time in light for each group (H). (I,J) Serum concentrations of corticosterone (I) and ACTH (J) in each group. Data represent mean ± SEM; n = 10 in each group.
Figure 5. Changes of neurotransmitters in the PFC and BDNF mRNA expression in the hippocampus after anxiety. (A-D) Concentrations of serotonin (5-HT) (A), γ-aminobutyric acid (GABA) (B), dopamine (DA) (C), and acetylcholine (ACh) (D) in the right prefrontal cortex, determined by LC-MS. (E) Relative changes in BDNF mRNA expression in the hippocampus. Concentrations are given in ng/g wet tissue. Data represent mean ± SEM; n = 10.

Figure 6. Correlations between neurotransmitters, BDNF, and anxiety-related indices. (A) Heatmap of the correlation coefficient matrices of the responses of various indicators for each mouse; intensifying shades of red correspond to higher correlations, while deepening shades of blue denote lower correlations. (B,C) Correlations between 5-HT concentration in the PFC and total time in light in the LDB (B) and RI in the NOR (C). (D-G) Correlations between relative BDNF mRNA expression and OE% in the EPM (D), total time in light in the LDB (E), RI in the NOR (F), and AAT latency (G). Data are represented as mean ± SEM, n = 10 in each group.

Table 1. Information on the samples tested.

Table 2. The chromatographic gradient of the mobile phases (A: water; B: acetonitrile).
Rocking block simulation based on numerical dissipation

In this paper, a computational approach based on numerical dissipation is proposed to simulate rocking blocks. A rocking block is idealized as a solid body interacting with its foundation through a contact-based formulation. An implicit time integration scheme with numerical dissipation, set to optimally treat dissipation in contact problems, is employed. The numerical dissipation is ruled by the time step, and the rocking dissipative phenomenon at impacts is accurately predicted without any damping model. A broad numerical campaign is conducted to define a regression law in analytic form for the setting of the time step, depending on the block size and aspect ratio, the contact stiffness, as well as the coefficient of restitution selected. The so-obtained regression law appears accurate, and an a posteriori validation with cases not in the training dataset confirms the effectiveness of the approach. Finally, the comparison with available experimental tests highlights the approach's efficacy for free rocking and harmonic loading cases (in a deterministic sense), and for earthquake-like loading cases (in a statistical sense). It is found that rocking blocks with sizes of interest for structural engineering (e.g., cultural heritage structures) can be simulated with time steps within 10⁻³-10⁻¹ s, so allowing very fast computations.

Introduction

In the last decades, rocking structures have been intensely investigated, and various models to predict the rocking motion have been developed. On the one hand, this was motivated by the need of analyzing the dynamic response of existing and cultural heritage structures, e.g., masonry and dry-stone walls [1-5], stone monuments [6-9], as well as statues [10, 11], that typically experience damage or collapse due to seismic events. On the other hand, rocking structures attracted the attention of researchers as they might be used in seismic design strategies [12], given that the uplift of a rocking block limits the design forces acting in the superstructure, as well as in the foundation. This "rocking isolation" strategy can be used on both buildings [13] and bridges [14].

One significant issue with the seismic response of a rocking block is its large sensitivity to its defining features, i.e., minor changes in rocking block properties may result in significantly different time-history responses. Indeed, experiments involving dynamically excited rocking specimens are rarely replicable, and the response is typically labeled as chaotic [12] (i.e., nonreproducible and unpredictable). Accordingly, a plausible approach to proceed with model validation, instead of the classical approach of comparing deterministically the specimen and the model responses under a specific ground excitation, should be based on statistical validation (as proposed in [15]).
The most established model to predict the response of a rocking block was introduced by Housner [16]. This well-known analytical model, even though the solution is typically obtained numerically given the event-by-event formulation, represents the so-called classical rocking theory, based on the hypotheses of (i) rigid block and rigid foundation, (ii) two potential contact points, (iii) no sliding, (iv) no bouncing, and (v) energy dissipation at impacts. Based on the classical rocking theory [16], several enhancements and extensions have been developed to treat a wide range of rocking structures with a multitude of different boundary conditions. However, the hypothesis of no sliding (as well as no bouncing) might be too strict for many actual applications, as sliding (and bouncing) is not always prevented. For this reason, more general analytical models accounting also for sliding (as well as bouncing) have been proposed [38-48]. Nonetheless, most of these models did not find widespread actual application given the complexities and limitations in the generalization of the problem. In this context, numerical approaches may represent an appealing choice to generalize the solution of rocking problems, as they are able to deal with complex geometries, boundary conditions, and mechanical aspects (such as sliding, bouncing, 3D effects, material nonlinearities, etc.). When considering masonry and cultural heritage structures, the use of block-based numerical models [49] also allows accounting for the actual masonry pattern as well as the interaction with adjacent structural elements. In this framework, the adoption of contact-based numerical approaches appears particularly appropriate to model rocking blocks. The explicit time integration scheme has typically been preferred, see, e.g., applications within the so-called discrete element method (DEM) [50-56]. The main drawback of these contact-based explicit approaches consists in the definition of a suitable damping model. Indeed, the choice and the characterization of a damping model (e.g., Rayleigh damping) is challenging and non-univocal [50], as the rocking block response is nonlinear, and a representative frequency of the rocking motion cannot be defined univocally. Accordingly, the setting of the damping model is mostly conducted to fit some reference response in a phenomenological fashion, rather than having a clear physical meaning. To bridge the gap between classical rocking theory and numerical models in terms of energy dissipation, an equivalent viscous damping model calibrated on analytical solutions has been proposed in [57].
The adoption of implicit time integration schemes to model rocking blocks has found less interest in the scientific community, as such schemes are typically characterized by numerical (i.e., algorithmic) dissipation [58, 59], so that the response depends on the time step chosen [60, 61]. Indeed, very small time steps should be adopted to limit the amount of numerical dissipation, which would lead to inconveniently long simulations. However, noticing that when dealing with rocking motion the setting of damping models appears as questionable as relying on numerical dissipation only, this paper investigates the possibility of utilizing an implicit time integration scheme with numerical dissipation, without any damping model, to simulate rocking blocks. In other words, the use of numerical dissipation to account for the rocking energy dissipation does not appear more problematic than ad hoc calibrated damping models, and it leads to superior computational efficiency. The potential benefit of this choice, beyond its simplicity, is indeed immediately clear: if only numerical dissipation is employed, rather large time steps can be used, so allowing very fast and convenient computations.

In this framework, a pioneering approach was proposed in [62], where rocking blocks were modelled by means of beam elements with no-tension zero-length fiber cross-sections representing the rocking surfaces, using a corotational formulation to account for geometric nonlinearity. In particular, the adoption in [62] of the well-known implicit time integration scheme with numerical dissipation proposed in [58], also known as the HHT or α-method, together with quasi-rigid rocking surfaces, allowed classical rocking solutions to be obtained through numerical dissipation only, without a strong dependency on the adopted time step (which could vary between 0.0001 s and 0.001 s), yet without directly controlling the energy dissipation.

In this paper, a rocking block is idealized as a solid body interacting with its foundation through a contact-based formulation. The HHT time integration scheme is employed with an algorithmic setting to optimally treat dissipation in contact problems [63]. The rocking dissipative phenomenon at impacts is investigated, correlating its dependency on the time step. A broad numerical campaign is conducted to define a regression law in analytic form for the setting of the time step, using the classical rocking theory as reference. Comparisons with available experimental tests are used to check the efficacy of the regression law.

The paper is structured as follows. Section 2 discusses the modelling assumptions at the basis of the present computational approach. Section 3 presents the strategy adopted to define a suitable setting of the time step, as well as the obtained regression law together with its post validation. Section 4 shows the comparison with available experimental tests, particularly free rocking and harmonic loading cases (in a deterministic sense) from [64], and earthquake-like loading cases (in a statistical sense) from [15].

Modeling of rocking blocks

In this section, the classical approach to model a rigid rocking block according to [16], used as reference, is first briefly recalled. Then, the proposed computational approach based on numerical dissipation, in which a solid deformable body interacts with its foundation through a contact-based formulation, is discussed, together with a few details about the adopted time integration method.
Classical rocking theory

According to the classical rocking theory [16], the symmetric rocking block (Fig. 1) is idealized as a rigid body, characterized by the semi-diagonal R and slenderness α = atan(B/H), rocking on a rigid foundation. The hypotheses of no sliding and no bouncing hold.

The equation of motion of the rigid block in in-plane free rocking about the pivot points O and O′, measured using the rocking angle θ (Fig. 1), is:

$$\ddot{\theta}(t) = -p^2 \sin\!\left(\alpha\,\mathrm{sgn}\,\theta(t) - \theta(t)\right) \tag{1}$$

where $p = \sqrt{mgR/I_0}$ is the frequency parameter of the rigid rocking block, with m representing the mass of the block and I₀ the rotational moment of inertia with respect to the pivot points. For rectangular cuboid blocks, $I_0 = (4/3)\,mR^2$ and, hence, $p = \sqrt{3g/(4R)}$.

Impacts between the block and the foundation occur when θ = 0. At any impact, the pivot point changes and the rotation changes sign. Importantly, impacts result in instantaneous energy losses. According to [16], the reduction of energy at any impact may be described using the coefficient of restitution r, defined as the ratio of post-impact to pre-impact kinetic energy. Furthermore, within the preceding assumptions, the classical rocking theory [16] provided an estimation of the coefficient of restitution by employing the conservation of angular momentum:

$$r = \left(1 - \frac{3}{2}\sin^2\alpha\right)^2 \tag{2}$$

It should be underlined that, although various more recent formulations (see, e.g., [31, 57, 65]) adopt as coefficient of restitution √r, i.e., the ratio of the pre- and post-impact angular velocities, the original definition as the ratio of kinetic energies is herein considered.

In free rocking motion, the maximum rocking angles θₙ and half rocking periods Tₙ/2 along with the number of impacts n are defined, according to [16], as:

$$\theta_n = \alpha\left[1 - \sqrt{1 - r^n\left(1 - \left(1 - \frac{\theta_0}{\alpha}\right)^2\right)}\right], \qquad \frac{T_n}{2} = \frac{2}{p}\,\cosh^{-1}\!\left(\frac{1}{1 - \theta_n/\alpha}\right) \tag{3}$$

where θ_{n=0} = θ₀ is the initial rocking angle, and T_{n=0}/2 = T₀/2 is computed by doubling the time elapsed until the first impact. In the following, Eqs. (1) and (3) are referred to as analytical solutions. As a result, the rocking behavior is nonlinear due to (i) the change of pivot point (from O to O′ and vice versa) and (ii) the jump discontinuity of the angular velocity pre- and post-impact caused by the impact energy dissipation, ruled by the coefficient of restitution. Further insights on the nonlinear nature of the rocking behavior can be found in [19, 66-70].

Fig. 1 The rocking block
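The classical relations recalled above translate directly into code. The sketch below evaluates the coefficient of restitution of Eq. (2) and the amplitude/period decay of Eq. (3) for an illustrative rectangular block; the geometry and initial amplitude are arbitrary values chosen only for demonstration.

```python
import math

# Sketch of the classical rocking relations recalled above (Housner, 1963).

def housner_r(alpha: float) -> float:
    """Kinetic-energy coefficient of restitution, Eq. (2)."""
    return (1.0 - 1.5 * math.sin(alpha) ** 2) ** 2

def rocking_decay(theta0, alpha, p, r, n_max):
    """Maximum rocking angles theta_n and half periods T_n/2 after n impacts, Eq. (3)."""
    rows = []
    for n in range(n_max + 1):
        theta_n = alpha * (1.0 - math.sqrt(
            1.0 - r ** n * (1.0 - (1.0 - theta0 / alpha) ** 2)))
        half_period = (2.0 / p) * math.acosh(1.0 / (1.0 - theta_n / alpha))
        rows.append((n, theta_n, half_period))
    return rows

# Illustrative rectangular block: R = 0.5 m, aspect ratio H/B = 4, theta0 = 0.5*alpha
R = 0.5
alpha = math.atan(1.0 / 4.0)            # slenderness = atan(B/H)
p = math.sqrt(3.0 * 9.81 / (4.0 * R))   # frequency parameter for a rectangular block
for n, th, hT in rocking_decay(0.5 * alpha, alpha, p, housner_r(alpha), n_max=5):
    print(f"n = {n}: theta_n/alpha = {th / alpha:.3f}, T_n/2 = {hT:.3f} s")
```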
Numerical modeling

A free-standing rocking block is idealized as a solid deformable body interacting with its foundation (Fig. 2a) through a contact-based formulation, characterized by a finite contact stiffness kₙ. The Young's modulus of the block is assumed such that the overall block stiffness is much higher than the contact stiffness, i.e., the block Young's modulus becomes irrelevant to the present study. Additionally, a reasonably high value of the friction coefficient prevents sliding from occurring. In other words, the contact stiffness kₙ is the only mechanical parameter that has a direct and significant effect on the rocking behavior of the block. The one-step implicit time integration method with numerical dissipation developed in [58], also called the HHT method (as well as α-method), is considered. The HHT method approximates the solution of an undamped structural dynamics problem by means of the following relationships:

$$\mathbf{M}\,\ddot{\mathbf{u}}_{i+1} + (1 + \alpha_{HHT})\,\mathbf{K}\,\mathbf{u}_{i+1} - \alpha_{HHT}\,\mathbf{K}\,\mathbf{u}_i = \mathbf{F}\!\left(t_i + (1 + \alpha_{HHT})\,\Delta t\right)$$
$$\mathbf{u}_{i+1} = \mathbf{u}_i + \Delta t\,\dot{\mathbf{u}}_i + \Delta t^2\left[\left(\tfrac{1}{2} - \beta_{HHT}\right)\ddot{\mathbf{u}}_i + \beta_{HHT}\,\ddot{\mathbf{u}}_{i+1}\right]$$
$$\dot{\mathbf{u}}_{i+1} = \dot{\mathbf{u}}_i + \Delta t\left[(1 - \gamma_{HHT})\,\ddot{\mathbf{u}}_i + \gamma_{HHT}\,\ddot{\mathbf{u}}_{i+1}\right] \tag{4}$$

for i = 0, 1, 2, ..., where Δt is the time step, and α_HHT, β_HHT, and γ_HHT are parameters governing the numerical dissipation and the stability of the algorithm (the subscript HHT has been added to avoid any confusion with other parameters). In Eq. (4), M is the mass matrix, K is the stiffness matrix, F is the external forces vector, and u is the displacement vector (superimposed dots symbolize time differentiation). The solution is initiated through the initial conditions u(0) = u₀ and u̇(0) = u̇₀. To optimally treat numerical dissipation in the elastodynamic contact problem, the setting of α_HHT, β_HHT, and γ_HHT is carried out according to [63]. In particular, the time integration parameters are adopted to ensure unconditional stability, second-order accuracy, momentum transfer in dynamic rigid impact problems, and optimal (maximal) numerical dissipation [62], i.e.:

$$\alpha_{HHT} = -\frac{1}{3}, \qquad \beta_{HHT} = \frac{(1 - \alpha_{HHT})^2}{4}, \qquad \gamma_{HHT} = \frac{1}{2} - \alpha_{HHT} \tag{5}$$

Accordingly, once α_HHT, β_HHT, and γ_HHT are defined, numerical dissipation is fully governed by the time step Δt: the larger the Δt, the higher the numerical dissipation.

Proof of concept

The possibility of finding a time step Δt that guarantees good estimates of the rocking response is shown in Fig. 2. The free rocking response of a solid block (Fig. 2a) with initial rocking angle θ₀/α = 0.5 obtained by the present solution is compared, in terms of normalized rocking angle (Fig. 2b) and normalized total energy (Fig. 2c), with the analytical solution given by Eq. (1) and the explicit numerical solution from [57]. It can be observed that the free rocking response is accurately reproduced by the present solution, and the step-like rocking dissipative phenomenon at impacts is well predicted, even without any damping model.

It should be pointed out that the solution in Fig. 2 has been obtained with a rather large time step (in this case Δt = 0.013 s), which thus allowed a very fast simulation (416 s on a commercial laptop) given the limited number of increments needed.

Further features of the rocking response of the present solution shown in Fig. 2 can be gathered by observing the phase portraits in Fig. 3. Indeed, the phase portraits show an overall good agreement between the analytical and the present solutions (see Fig. 3, left). In particular, it is worth highlighting the response around impacts (see, e.g., the first impact in Fig. 3, top right). On the one hand, the analytical solution shows a sharp jump of the angular velocity at impact, fully governed by the coefficient of restitution. On the other hand, the present solution shows a smoother response at impacts, given the presence of a deformable contact interface. Indeed, the contact pressure distribution (see Fig. 3, bottom right) at the initial condition (A) starts to change appreciably at instant (B), i.e., the instant in which the analytical and numerical solutions tend to drift apart. The first impact happens between (C) and (D), where in both cases the contact pressure is rather distributed over a considerable portion of the contact interface, with the maximum contact pressure observed in opposite corners. From instant (E) the analytical and numerical solutions tend to overlap again. Although globally in agreement, the numerical solution represents the impact in a smoother way than the analytical one, the latter being based on the hypothesis of only two potential contact points.

The effect of the time step on the present solution is shown in Fig. 4, where the previously selected time step Δt = 0.013 s and the analytical solution are compared with different time steps, i.e., Δt = 0.026 s and Δt = 0.007 s. As can be noted, the time step has a direct effect on the energy dissipation, leading to appreciably different normalized rocking angle time histories (Fig. 4a).
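To make the role of Δt tangible, the following self-contained sketch integrates an undamped linear single-degree-of-freedom oscillator with the HHT α-method (α = −1/3 and the standard β, γ relations): even though the model contains no damping, a larger time step produces a visibly smaller amplitude after a fixed duration, which is precisely the algorithmic dissipation exploited here. This is a didactic sketch, not the contact formulation used in the paper.

```python
import numpy as np

# Didactic sketch (not the paper's contact formulation): HHT alpha-method for an
# undamped linear SDOF, m*u'' + k*u = 0. Algorithmic dissipation grows with dt.

def hht_free_vibration(m, k, u0, v0, dt, n_steps, alpha=-1.0 / 3.0):
    beta = (1.0 - alpha) ** 2 / 4.0
    gamma = 0.5 - alpha
    u, v = u0, v0
    a = -k * u / m  # initial acceleration from the equation of motion
    hist = [u]
    for _ in range(n_steps):
        # Newmark predictor for the displacement (terms known at step i)
        u_pred = u + dt * v + dt**2 * (0.5 - beta) * a
        # alpha-shifted equilibrium: m*a1 + (1+alpha)*k*u1 - alpha*k*u = 0,
        # with u1 = u_pred + beta*dt^2*a1, solved directly for a1 (linear case)
        a1 = -((1.0 + alpha) * k * u_pred - alpha * k * u) / (
            m + (1.0 + alpha) * k * beta * dt**2)
        u = u_pred + beta * dt**2 * a1
        v = v + dt * ((1.0 - gamma) * a + gamma * a1)
        a = a1
        hist.append(u)
    return np.array(hist)

# Same undamped system, two time steps: the larger dt damps the motion purely
# through numerical dissipation.
for dt in (0.01, 0.05):
    u_hist = hht_free_vibration(m=1.0, k=(2 * np.pi) ** 2, u0=1.0, v0=0.0,
                                dt=dt, n_steps=round(10.0 / dt))
    amp = np.abs(u_hist[-round(1.0 / dt):]).max()  # amplitude over the last period
    print(f"dt = {dt:.2f} s -> amplitude after 10 s ~ {amp:.3f}")
```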
This effect is clearly shown in Fig. 4b, where the normalized total energy time histories show larger energy drops at impacts for larger time steps. Interestingly, the half rocking period versus normalized rocking angle diagram for subsequent impacts shown in Fig. 4c highlights that the present modelling strategy is able to accurately represent the nonlinear period-to-amplitude relationship of Eq. (3), independently of the time step utilized. Indeed, the time step only rules the energy losses at impacts and, therefore, the distance between the points in the period-to-amplitude diagram. Consequently, the three cases considered in Fig. 4c lie on the same curve. As in the classical rocking theory the distance between the points in the period-to-amplitude diagram is governed by the coefficient of restitution in Eq. (2), it appears that the time step could be tuned to guarantee the desired energy dissipation at impacts.

The free rocking response of the same block for different values of initial amplitude (θ₀/α) with the same Δt is compared with the analytical solution given by Eq. (3) in Fig. 5, in terms of normalized rocking angles (Fig. 5a) and half rocking periods (Fig. 5b), along with the number of impacts. As can be observed, the problem appears amplitude independent, i.e., the same Δt can be utilized independently of the rocking angle. This feature appears particularly appealing and guarantees a reasonable generalization of the present computational approach.

As a result, the surgical use of the numerical dissipation of the time integration scheme allows the modelling of energy dissipation at impacts in a phenomenological manner, allowing fast numerical simulations. In other words, the proposed modelling strategy permits accounting for the desired energy dissipation while using the largest possible time step.

Setting of the time step

In this section, a strategy for the setting of the time step Δt to guarantee good estimates of the rocking response, based on an extensive numerical campaign and a multivariable nonlinear regression, is discussed. The coefficient of restitution r is here assumed as an independent parameter [57]. The choice to keep r independent allows the employment of any experimentally measured coefficient of restitution (e.g., [33, 65, 71]) or theoretically improved model (e.g., [47, 72-74]), which might differ from the one in Eq. (2) [33, 64, 65].

Furthermore, it has been found from preliminary analyses that the setting of the time step Δt is influenced, beyond the contact stiffness kₙ, as discussed in Section 2.2, and the coefficient of restitution r [57], by the size and the aspect ratio of the block, i.e., by R and H/B, respectively. Accordingly, the setting of the time step might be reasonably represented, similarly to what was utilized in [57] for an akin problem, by a simple power-law function (Eq. (6)) of the coefficient of restitution (through the factor 1 − r), the block geometry (R and H/B), and the contact stiffness kₙ, where A₁, A₂, and A₃ are coefficients to be determined. It is here highlighted that such a function provides Δt = 0 for r = 1. This extreme case is of no interest for the present study, as r < 1 in all real cases. A suitable strategy for the identification of A₁, A₂, and A₃ is discussed in the following. Accordingly, the numerical campaign here discussed is composed of 1600 numerical simulations of free rocking.
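Since the fitted coefficients of Eq. (6) are determined only later by regression, the sketch below encodes just a plausible power-law structure consistent with the stated requirements (Δt → 0 as r → 1, joint dependence on R and H/B, dependence on kₙ); the coefficient values A1-A3 are placeholders, not the fitted ones.

```python
# Hedged sketch of the power-law structure assumed for Eq. (6). The fitted values
# of A1, A2, A3 are NOT reproduced here: the numbers below are PLACEHOLDERS chosen
# only to illustrate the behavior (dt -> 0 as r -> 1, growth with block size,
# dependence on aspect ratio and contact stiffness).

def candidate_time_step(r, R, H_over_B, k_n, A1=1.0, A2=1.0, A3=-0.5):
    """Candidate optimal time step (s); all coefficients are placeholders."""
    return A1 * (1.0 - r) * (R * H_over_B) ** A2 * k_n ** A3

# Illustrative evaluation: r = 0.90, R = 1.0 m, H/B = 4, k_n = 1e9 N/m^3
print(candidate_time_step(r=0.90, R=1.0, H_over_B=4.0, k_n=1.0e9))
```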
Given the amplitude independence shown in Fig. 5, all these simulations have been conducted by adopting θ₀/α = 0.5. According to [64, 76], a material density equal to 2600 kg/m³ has been adopted for the blocks, while the density variability of common stone/masonry construction materials is considered negligible for the calibration purposes of this work. In all cases, the free rocking response is analyzed for 30 s, or for the minimum time needed to reach a maximum rocking angle of a cycle equal to 0.05 θ₀/α if greater than 30 s. To visualize the various block geometries considered, their proportions are highlighted in Fig. 6, organized with the same layout as Table 1.

Regression for the setting of the time step

The comparison against the analytical solution in Eq. (3) is performed in terms of rocking angle and half rocking period, for a significant number of impacts N, which is here identified as the number of impacts needed to reach a rocking angle lower than 0.05 θ₀/α. An example of comparison between analytical and numerical solutions is shown in Fig. 7, in terms of normalized rocking angle (Fig. 7a) and half rocking period (Fig. 7b), along with the number of impacts, for a certain r and a certain Δt.

An error measurement between analytical and numerical solutions is here introduced. Firstly, the root-mean-square errors of the rocking angle normalized by the initial angle (e_{θ/α}) and of the half rocking periods normalized by the initial period (e_{T/2}) are computed, for a significant number of impacts N, as:

$$e_{\theta/\alpha} = \sqrt{\frac{1}{N}\sum_{n=1}^{N}\left(\frac{\tilde{\theta}_n - \hat{\theta}_n}{\theta_0}\right)^{2}}, \qquad e_{T/2} = \sqrt{\frac{1}{N}\sum_{n=1}^{N}\left(\frac{\tilde{T}_n/2 - \hat{T}_n/2}{T_0/2}\right)^{2}} \tag{7}$$

where the symbol ~ denotes values obtained through the analytical solution, and the symbol ^ denotes values obtained through the numerical solution. Secondly, these two error measures are combined into a unique global measurement of the relative error, e_G (Eq. (8)). This global measurement of the relative error e_G is here used within a sort of optimization problem, where the optimal Δt is selected as the case with the lowest e_G, aimed at investigating the optimal Δt to be used in numerical simulations given the block geometry, r, and kₙ. Accordingly, e_G is computed for each of the 40 considered Δts, for each block listed in Table 1, for each value of contact stiffness kₙ, and for coefficients of restitution r varying within the range 0.80, 0.81, ..., 0.99.

Examples of the distribution of e_G along with the considered time steps are shown in Fig. 8, where the error trends for two different values of the coefficient of restitution (a lower value, 0.88, and a higher value, 0.95) are compared. As can be noted, the error e_G reaches small values over a range of Δts. In particular, it is observed that lower values of r (e.g., 0.88) show smoother trends of e_G, i.e., a wider range of Δt is characterized by small errors, while higher values of r (e.g., 0.95) show steeper trends of e_G, i.e., a narrower range of Δt is characterized by small errors. By way of example, with reference to Fig. 8, an error e_G ≤ 10% is obtained with 0.10 s ≤ Δt ≤ 0.17 s for r = 0.95, and with 0.26 s ≤ Δt ≤ 0.40 s for r = 0.88 (considering also that values of Δt > 0.40 s have not been considered here). Accordingly, the adoption of a Δt in a neighborhood of the optimal Δt would still guarantee accurate results. This is particularly true for lower values of the coefficient of restitution, i.e., for values of r expected in real historical structures with, e.g., mortar joints and/or defects.
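A compact implementation of the error measures and the optimal time-step search might look as follows; note that the exact combination rule of Eq. (8) is not reproduced in the text available here, so a root-mean-square combination of the two normalized errors is assumed, and all input arrays are placeholders.

```python
import numpy as np

# Sketch of the error measures above. Eq. (7) follows the definitions in the text;
# the combination rule of Eq. (8) is ASSUMED to be a root-mean-square combination.

def rms_error(numerical, analytical, norm):
    numerical = np.asarray(numerical, dtype=float)
    analytical = np.asarray(analytical, dtype=float)
    return np.sqrt(np.mean(((numerical - analytical) / norm) ** 2))

def global_error(theta_num, theta_ana, theta0, half_T_num, half_T_ana, half_T0):
    e_theta = rms_error(theta_num, theta_ana, theta0)      # rocking angles, Eq. (7)
    e_half_T = rms_error(half_T_num, half_T_ana, half_T0)  # half periods, Eq. (7)
    return np.sqrt(0.5 * (e_theta ** 2 + e_half_T ** 2))   # assumed form of Eq. (8)

def optimal_dt(dt_candidates, e_G_values, tol=0.07):
    """Pick the dt with the smallest e_G, retained only if e_G <= 7%."""
    i = int(np.argmin(e_G_values))
    return dt_candidates[i] if e_G_values[i] <= tol else None

# Placeholder impact-by-impact data (not from the study)
e = global_error([0.45, 0.31, 0.22], [0.46, 0.33, 0.24], theta0=0.5,
                 half_T_num=[0.60, 0.48, 0.40], half_T_ana=[0.62, 0.50, 0.41],
                 half_T0=0.62)
print(f"e_G = {e:.3f}")
```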
Moreover, it should be pointed out that the present approach is based on the HHT method, and convergence within each increment is not guaranteed. Indeed, for highly nonlinear problems (e.g. damage constitutive laws, contact cohesion, etc.), convergence may not be reached within a time increment. This might be overcome by reducing the time increment (only for the non-converged increment, e.g., by 50%) to obtain a solution. This aspect may locally (in time) affect the dissipative properties of the solution, and the analysis report should be checked to judge the quality of the response. In any case, in all the simulations considered in this paper, no non-converged increments have been recorded. In the following, the optimal Δt has been chosen as the one with the minimum value of e_G (considered, in any case, only when e_G ≤ 7%, to exclude the extreme cases). Hereafter, the so-computed optimal Δts are referred to as ''measured optimal Δt''.

A multivariable nonlinear regression analysis is then performed using the results of the numerical campaign and, in particular, the measured optimal Δt. As a result, the coefficients of Eq. (6) have been determined, and the resulting analytic formula for the setting of the time step (hereafter referred to as the ''estimated optimal Δt'') has been obtained with a coefficient of determination R² = 0.970, with Δt in s, R in m, and k_n in N/m³ (H/B and r being dimensionless). It should be pointed out that R and H/B are raised to the same power since, after an initial investigation, it was found that even if two different power parameters were assumed, they would practically take the same value during the multivariable nonlinear regression. Additionally, for rectangular cuboid blocks, Eq. (9) can also be written in terms of the frequency parameter p and the slenderness α, as in Eq. (10), with Δt in s, p in Hz, α in rad, and k_n in N/m³ (r being dimensionless).

The results of the multivariable nonlinear regression analysis are shown in Fig. 9. In particular, the estimated versus measured optimal Δt plot is shown in Fig. 9a, where an overall good agreement between estimated and measured optimal Δt can be observed (as also confirmed by the rather high coefficient of determination). This agreement is further highlighted by the comparison between estimated and measured time steps along with r by varying the block size, i.e. R (Fig. 9b), the block aspect ratio, i.e. H/B (Fig. 9c), and the contact stiffness, i.e. k_n (Fig. 9d). Interestingly, it is found that by increasing the block size R for a fixed r, the optimal Δt also increases (as can be deduced from the coefficients in Eq. (9) and from Fig. 9b). This aspect is particularly appealing for real-case applications, such as rocking of monuments and cultural heritage structures, as it allows faster dynamic simulations for larger structures.
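As a hedged sketch, the multivariable nonlinear regression could be reproduced along the following lines. The functional form below is illustrative only (the actual Eq. (9) is not reproduced in the text); it merely respects the stated qualitative features: Δt = 0 for r = 1, R and H/B raised to the same power, and a dependence on k_n.

```python
import numpy as np
from scipy.optimize import curve_fit

def dt_model(X, A1, A2, A3):
    """Illustrative regression form for the optimal time step.

    X = (r, R, HB, kn). The (1 - r) factor enforces dt = 0 for r = 1;
    R and H/B share the same exponent A2, as stated in the paper.
    This specific form is an assumption, not the paper's Eq. (9).
    """
    r, R, HB, kn = X
    return A1 * (1.0 - r) * (R / HB) ** A2 * kn ** A3

# (r, R, H/B, kn) samples and measured optimal time steps would come from
# the numerical campaign; synthetic stand-in data are used here.
rng = np.random.default_rng(0)
r = rng.uniform(0.80, 0.99, 200)
R = rng.uniform(0.5, 5.0, 200)           # block size (m)
HB = rng.choice([2.0, 4.0, 6.0], 200)    # aspect ratio
kn = rng.choice([2.5e8, 5e8, 1e9], 200)  # contact stiffness (N/m^3)
dt_measured = dt_model((r, R, HB, kn), 0.5, 0.8, -0.1)

coeffs, _ = curve_fit(dt_model, (r, R, HB, kn), dt_measured, p0=[0.1, 1.0, -0.1])
pred = dt_model((r, R, HB, kn), *coeffs)
ss_res = np.sum((dt_measured - pred) ** 2)
ss_tot = np.sum((dt_measured - np.mean(dt_measured)) ** 2)
print("R^2 =", 1.0 - ss_res / ss_tot)
```

With the measured optimal Δts from the campaign in place of the synthetic stand-in data, the same procedure yields the fitted coefficients and the coefficient of determination reported above.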
Post validation

The validation of the regression function in Eq. (9) is conducted a posteriori, with cases not included in the training set, by adopting the block geometries in Table 2 with 3 different values of k_n, i.e. 4e+08, 7e+08, and 12e+08 N/m³ (for convenience, labeled k4, k7, and k12, respectively). It is here highlighted that such cases represent intermediate cases within the range of parameters investigated.

The comparison of the predictions of Eq. (9) with the numerical results obtained with the blocks in Table 2 is shown in Fig. 10. As can be seen in Fig. 10a, Eq. (9) predicts the optimal Δt well for cases not in the training set, as also confirmed by the coefficient of determination R² = 0.980. This good prediction is also highlighted by the comparison between estimated and measured time steps along with r for k4 (Fig. 10b), k7 (Fig. 10c), and k12 (Fig. 10d). Accordingly, the analytic formula for the setting of the time step in Eq. (9) appears to be robust, accurate, and reliable. In the following, it is thus used to reproduce experiments.

Comparison with experimental tests

In this section, the outcomes of the experimental campaigns in [15,64] are used for comparison with the results of the present computational approach. In particular, free rocking and harmonic loading cases are treated in a deterministic sense [64], while earthquake-like loading cases are treated in a statistical sense (according to [15]). For each case, the time step Δt for the present computational approach is adopted according to Eq. (9). In the following, the adopted Δt is symbolized in the graphs as ''t'' followed by the digits after the decimal point (e.g. Δt = 0.0123 s is concisely depicted as ''t0123'').

Experimental campaign by Peña et al. (2008)

In this section, the outcomes of the experimental campaign discussed in [64] are used as reference. In particular, three specimens are here considered, see Table 3. It is worth highlighting that the dimensions of these specimens are considerably smaller than the range of block dimensions adopted in the numerical campaign aimed at the setting of the time step (Table 1). Three values of contact stiffness, i.e. k2.5, k5, and k10, are considered in the numerical solutions, while the coefficients of restitution provided in [64] are utilized for the analytical solutions and as input for Eq. (9).

The results comparison for Specimen 1 in free rocking is shown in Fig. 11, in terms of normalized rocking angle time history (Fig. 11a), and normalized rocking angle (Fig. 11b) and half rocking period (Fig. 11c) along with the number of impacts. An overall good agreement between numerical and experimental results is observed, both following the analytical solution. All three considered values of contact stiffness show consistent results, although the case k5 shows a small difference, especially in the period (Fig. 11a). In any case, the case k5 is still in good agreement with the analytical rocking period (Fig. 11c).

The comparison for Specimen 1 with harmonic excitation is shown in Fig. 12, in terms of normalized rocking angle time histories. The three considered values of contact stiffness show very similar results, with peak amplitudes always slightly smaller than the experimental ones. It is worth mentioning that a similar trend was also observed in [57]. Nevertheless, the free rocking behavior (i.e., after 20 s) is accurately predicted in all cases.

The comparison of the results for Specimen 2 is shown in Figs. 13 and 14 for free rocking and harmonic excitation, respectively. Free rocking time histories are more dispersed than for the previous specimen (Fig. 13a), although the rocking angle and period decay along with the number of impacts is consistent with the analytical solution (Fig. 13b-c). Indeed, small differences between experimental results and the analytical solution can be noted for both the rocking angle (Fig. 13b) and the rocking period (Fig. 13c).
It is worth noting that the numerical results are in any case included within this range of variability. Note that the numerical time histories have been shifted towards the left to agree with the initial rocking period measured in the experiment, which is shorter than the analytical prediction. The harmonic excitation response comparison (Fig. 14) highlights an overall good agreement for all three considered cases, also for the free rocking behavior (i.e., after 10 s).

The results comparison for Specimen 3 in free rocking is shown in Fig. 15. The numerical time histories of the 3 cases are consistent with each other (Fig. 15a) and show a rocking angle decay close to the experimental one. However, the rocking periods (except for the initial one) appear to be significantly different between experiments and numerical results. A similar shift in the rocking periods was also observed in [57]. By looking at the rocking angle (Fig. 15b) and period (Fig. 15c) along with the number of impacts, it is possible to note that, on the one hand, an overall good agreement of the rocking angle (Fig. 15b) is observed between experimental, analytical, and numerical results, while, on the other hand, the numerical results fit the analytical solution for half rocking periods rather well, the experimental periods being systematically lower than the other solutions. This again shows the good consistency of the present computational approach with the reference analytical solution.

The results comparison for Specimen 3 subjected to an artificial ground motion is shown in Fig. 16. For this test, only the case k5 has been considered, which is characterized by an optimal time step Δt = 0.0032 s (t0032). In this experimental test, the specimen collapses. This outcome is also obtained with the reference numerical solution (t0032), although collapse is reached a few seconds earlier than in the experiment, and the numerical normalized rocking angle time history differs significantly from the experimental one. To check the sensitivity of the collapse response of this specimen to the time step, time steps equal to 0.0020, 0.0025, 0.0030, and 0.0035 s are also shown in Fig. 16 for the sake of comparison (with k5 in each scenario). As can be noticed, collapse is obtained with t0020, t0030, and t0032, while no collapse is observed for t0025 and t0035. Additionally, a large variability of the numerical rocking angle time histories is observed, although the adopted time steps are quite similar. Accordingly, no clear trend can be deduced from Fig. 16, as the response appears chaotic. In this regard, the rocking response to earthquake-like ground motions is discussed in a statistical sense in the next subsection, according to [15].

Experimental campaign by Bachmann et al.

In this section, the experimental campaign discussed in [15] is considered and compared (Fig. 17) with the computational approach here proposed. Firstly, an equivalent cuboid solid block (2H = 0.609 m; 2B = 0.09135 m) is deduced from the value of the frequency parameter p identified experimentally (4.8883 Hz) and tan α = 0.15 [15]. In particular, an equivalent R is obtained from p, considering a rectangular cuboid block, and B and H are then determined according to α. By considering the coefficient of restitution determined experimentally (i.e., 0.9532) and the one coming from the classical rocking theory [16] (i.e., 0.9465), two optimal time steps, i.e. t00278 and t00319, respectively, are set according to Eq. (9), considering a contact stiffness k5.
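The equivalent-block construction can be sketched as follows; it assumes the classical definition of the frequency parameter for a rectangular cuboid block, p = sqrt(3g/(4R)), which reproduces the dimensions quoted above.

```python
import math

def equivalent_block(p, tan_alpha, g=9.81):
    """Equivalent rectangular cuboid block from the frequency parameter p
    (in 1/s) and the slenderness tan(alpha).

    Assumes the classical definition p = sqrt(3 g / (4 R)) for a
    rectangular cuboid rocking block, so that R = 3 g / (4 p^2);
    then H = R cos(alpha) and B = R sin(alpha).
    """
    alpha = math.atan(tan_alpha)
    R = 3.0 * g / (4.0 * p ** 2)
    H = R * math.cos(alpha)
    B = R * math.sin(alpha)
    return 2 * H, 2 * B

# Values identified in [15]: p = 4.8883 1/s, tan(alpha) = 0.15
print(equivalent_block(4.8883, 0.15))  # ~ (0.609 m, 0.0913 m)
```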
The comparison between the experimental, analytical, and numerical results for a free rocking test is shown in Fig. 17a in terms of normalized rocking angle time history, and in Fig. 17b-c in terms of normalized rocking angle and half rocking period, respectively, along with the number of impacts. Despite the need to resort to an equivalent cuboid solid block, the numerical results appear in good agreement with the experimental/analytical ones, for both time steps considered, with less accuracy in the last part of the free rocking response (characterized by small rocking angles). Considering that also in this case the equivalent block dimensions are significantly smaller than the range of dimensions adopted in the numerical campaign for the setting of the time step (Table 1), such results are promising.

The ''Case Lefkada 2H = 10 m'' described in [15] has been considered here (as it presents a full range of normalized rocking angles, including collapses). This case involves the application of 100 different artificial ground motions, generated by a stochastic model to match the physical characteristics of the 2003 Lefkada earthquake, and subsequently scaled in time to indirectly increase the dimensions of the specimen. The actual accelerograms recorded on the shaking table [15] have been used as input in the numerical simulations. The results of these simulations are shown and compared with experimental and analytical results in Fig. 17d in terms of cumulative distribution functions of the maximum normalized rocking angle (F(θ_max/α)) for the 100 tests. In Fig. 17d, adapted from [15], the 90% and 95% nonparametric confidence intervals (CI) are also reported for the experimental cumulative distribution function (the interested reader is referred to [15] for more details). As can be noted, the cumulative distribution functions obtained numerically with t00278 and t00319 fit the experimental/analytical ones very well, being also included within the aforementioned CIs. Three phase portraits for conditions far from collapse (Signal 1, with θ_max/α = 0.24), near collapse (Signal 85, with θ_max/α = 0.84), and collapse (Signal 66) are shown in Fig. 18, together with the rhomboidal separatrix [70] (red dotted lines) delimiting stable paths of rocking motion (the maximum angular velocity being evaluated as 2p sin(α/2)). In particular, the comparison of the phase portraits of Signal 85 and Signal 66 further highlights the randomness between collapse and no-collapse conditions, which strongly depends on the signal and on the phase between the current rocking angle and the signal.

As a counterexample, the numerical results with time steps very far from the optimal one (i.e., t03190, which is 10 times greater than the largest mentioned before, and t00032, which is 10 times smaller) show cumulative distribution functions (Fig. 17d) considerably far from the others. Indeed, the case t03190 appears completely outside the considered CIs, while the case t00032 lies on the CI frontier for most of the curve, on the other side of the envelope with respect to the t03190 case. This outcome highlights the efficacy of the present approach in predicting the rocking response to ground motions in a statistical sense, and the proposed setting of the time step appears robust and general.
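A hedged sketch of such a statistical comparison is given below; it builds the empirical CDF of the maximum normalized rocking angles and a Dvoretzky–Kiefer–Wolfowitz (DKW) confidence band, which is one possible nonparametric CI construction (the specific construction used in [15] may differ).

```python
import numpy as np

def ecdf_with_dkw_band(samples, confidence=0.95):
    """Empirical CDF of max normalized rocking angles with a DKW band.

    The DKW inequality gives a simultaneous nonparametric band of
    half-width eps = sqrt(ln(2/alpha) / (2 n)); the CI construction in
    [15] may be different (this is an illustrative choice).
    """
    x = np.sort(np.asarray(samples, dtype=float))
    n = x.size
    F = np.arange(1, n + 1) / n
    eps = np.sqrt(np.log(2.0 / (1.0 - confidence)) / (2.0 * n))
    lower = np.clip(F - eps, 0.0, 1.0)
    upper = np.clip(F + eps, 0.0, 1.0)
    return x, F, lower, upper

# Example with 100 synthetic theta_max/alpha values (collapse coded as >1)
rng = np.random.default_rng(1)
theta_max = np.minimum(rng.lognormal(-1.0, 0.6, 100), 1.2)
x, F, lo, hi = ecdf_with_dkw_band(theta_max, 0.95)
```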
Conclusions

In this paper, the possibility of utilizing an implicit time integration scheme with numerical dissipation, and without any damping model, to simulate rocking blocks has been investigated. According to the present computational approach, a rocking block has been idealized as a solid body interacting with its foundation through a contact-based formulation. The well-known HHT method, set to optimally treat dissipation in contact problems, has been employed, with the numerical dissipation governed by the time step. The rocking dissipative phenomenon at impacts appeared to be accurately predicted by the proposed computational approach without the use of any damping model.

A broad numerical campaign has been conducted to define a regression law in analytic form for the setting of the optimal time step. This law has been found to depend on the block size and aspect ratio, the contact stiffness, as well as the selected coefficient of restitution. The so-obtained regression law appeared accurate, and an a posteriori validation with cases not in the training dataset confirmed the effectiveness and robustness of the approach. Interestingly, it has been found that by increasing the block size the optimal time step also increases (thus allowing fast dynamic simulations even for large-scale structures). In particular, it has been found that rocking blocks with sizes of interest for structural engineering (namely cultural heritage structures) can be simulated with time steps within 10⁻³–10⁻¹ s, thus allowing very fast computations.

Finally, the comparison with available experimental tests highlighted the efficacy of the present computational approach for free rocking and harmonic loading cases (in a deterministic sense), and for earthquake-like loading cases (in a statistical sense, i.e., in terms of cumulative distribution functions).

Future developments will concern the extension of the present computational approach to multi-block rocking structures, e.g., by exploiting the concept of dynamically equivalent rocking structures [23] to set the time step in a straightforward way.

Fig. 2: Proof of concept for free rocking response. (a) Solid block rocking on its foundation. Comparison of free rocking response in terms of (b) normalized rocking angle time history, and (c) normalized total energy time history (E being the total energy).
Fig. 3: Phase portrait for the case shown in Fig. 2. Comparison between the analytical and the present solutions (left). Magnified phase portrait at the first impact (top right).
Fig. 4: Effect of the time step on the present solution based on numerical dissipation (see Fig. 2 for the settings). Comparison of the present solution (Δt = 0.013 s) with two other time steps (Δt = 0.026 s and Δt = 0.007 s) in terms of (a) normalized rocking angle time history.
Fig. 7: Example of comparison between analytical and numerical solutions, in terms of (a) normalized rocking angle and (b) half rocking period, along with the number of impacts, for r = 0.89 and Δt = 0.009 s (case HB4_R0.62_k2.5).
Fig. 9: Results of the multivariable nonlinear regression analysis. (a) Estimated versus measured optimal time step plot (coefficient of determination R² = 0.970). Comparison of estimated (solid lines) versus measured (hollow circles) time steps along with r by varying (b) the block size, i.e. R, (c) the block aspect ratio, i.e. H/B (as well as, slightly, the block size R), and (d) the contact stiffness, i.e. k_n.
Fig. 10: Results of the a posteriori validation. (a) Estimated versus measured optimal time step plot (coefficient of determination R² = 0.980). Comparison of estimated (solid lines) versus measured (hollow circles) time steps along with r for (b) k_n = 4e+08 N/m³, (c) k_n = 7e+08 N/m³, and (d) k_n = 12e+08 N/m³.
Fig. 11: Comparison with the experimental tests by Peña et al. (2008) [64] for Specimen 1, free rocking. (a) Normalized rocking angle time history. (b) Normalized rocking angle and (c) half rocking period along with the number of impacts.
Fig. 12: Comparison with the experimental tests by Peña et al. (2008) [64] for Specimen 1, harmonic loading. Normalized rocking angle time histories.
Fig. 14: Comparison with the experimental tests by Peña et al. (2008) [64] for Specimen 2, harmonic loading. Normalized rocking angle time histories.
Fig. 16: Comparison with the experimental tests by Peña et al. (2008) [64] for Specimen 3, earthquake-like loading (free rocking + artificial ground motion n. 18, load factor: 0.5; see [64] for more details). Normalized rocking angle time histories.
Fig. 17: Comparison with the experimental tests by Bachmann et al. [15]. (a) Free rocking normalized rocking angle time history, adapted from [15]. (b) Normalized rocking angle and (c) half rocking period along with the number of impacts. (d) Earthquake-like loading.

3.1 Numerical campaign

An extensive numerical campaign is here carried out to estimate the coefficients A_1, A_2, and A_3 in Eq. (6). In total, 10 different block geometries with 3 different aspect ratios are considered, as specified in Table 1.

Table 1: Block geometries employed in the numerical campaign.
Fig. 6: Block geometry proportions used in the numerical campaign. For actual sizes, refer to Table 1.
Table 2: Block geometries for post validation.
2024-07-29T15:09:32.246Z
2024-07-26T00:00:00.000
{ "year": 2024, "sha1": "183719c44721d9712100b8b5e9381d05a42652cd", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1007/s11071-024-09974-1", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "2f14fa2348ee68ba4a9e074e330b22fd35d2f4e2", "s2fieldsofstudy": [ "Engineering", "Geology" ], "extfieldsofstudy": [] }
60441459
pes2o/s2orc
v3-fos-license
Multi-tier Caching Analysis in CDN-based Over-the-top Video Streaming Systems

Internet video traffic has been rapidly increasing and is expected to increase further with emerging 5G applications such as higher definition videos, IoT, and augmented/virtual reality applications. As end-users consume video in massive amounts and in an increasing number of ways, the content distribution network (CDN) should be efficiently managed to improve system efficiency. The streaming service can include multiple caching tiers, at the distributed servers and the edge routers, and efficient content management at these locations affects the quality of experience (QoE) of the end users. In this paper, we propose a model for video streaming systems, typically composed of a centralized origin server, several CDN sites, and edge-caches located closer to the end user. We comprehensively consider different system design factors, including the limited caching space at the CDN sites, the allocation of a CDN site for a video request, the choice of different ports (or paths) from the CDN and the central storage, the bandwidth allocation, the edge-cache capacity, and the caching policy. We focus on minimizing a performance metric, the stall duration tail probability (SDTP), and present a novel and efficient algorithm accounting for the multiple design flexibilities. Theoretical bounds with respect to the SDTP metric are also analyzed and presented. The implementation on a virtualized cloud system managed by Openstack demonstrates that the proposed algorithms can significantly improve the SDTP metric compared to the baseline strategies.

I. INTRODUCTION

Over-the-top video streaming, e.g., Netflix and YouTube, has been dominating global IP traffic in recent years. The traffic will continue to grow due to the introduction of even higher resolution video formats, such as 4K, on the horizon. As end-users consume video in massive amounts and in an increasing number of ways, service providers need flexible solutions in place to ensure that they can deliver content quickly and easily regardless of their customers' devices or locations. More than 50% of over-the-top video traffic is now delivered through content distribution networks (CDNs) [2].

Even though multiple solutions have been proposed for alleviating congestion in the CDN system, managing the ever-increasing traffic requires a fundamental understanding of the system and of the different design flexibilities (control knobs) in order to make the best use of the limited hardware resources. This is the focus of this paper. Service providers typically use a two-tier caching approach to improve the quality of streaming services [3]-[5]. In addition to the distributed cache servers provided by the CDN, the edge router can also have a cache, so that some videos can be stored in it and benefit from the proximity to end-users. However, there are many edge routers, which implies that hot content may be stored at multiple edge routers. There is an additional cache at the distributed cache servers (in the CDN) from which data can be obtained if not already at the edge router. Such multi-tier caching is related to fog computing, where caching can be distributed at multiple locations in the network [4]. We also assume that the edge cache can provide advantages similar to multicasting.
If another user on the same edge router is already consuming the file, the part of the video already downloaded is sent directly to the new user, and the later part is sent to the multiple users who requested the content on the same edge router. This paper aims to analyze two-tier caching in video streaming systems.

In this paper, we consider a streaming system architecture with a Virtualized Content Distribution Network (vCDN) [6], [7]. The main role of this CDN infrastructure is not only to provide users with lower response times and higher bandwidth, but also to distribute the load (especially during peak time) across many edge locations. The infrastructure consists of a remote datacenter that stores the complete original video data and multiple CDN sites (i.e., distributed cache servers) that only have part of those data and are equipped with solid state drives (SSDs) for high throughput. In addition, we assume that a second caching tier is located at the edge routers. A user request for video content not served from the edge cache is directed to a distributed cache. If it still cannot be completely served, the remaining part of the request is directed to the remote datacenter (as shown in Fig. 1). Multiple parallel connections are established between the distributed cache server and the edge router, as well as between the distributed cache servers and the origin server, to support multiple video streams simultaneously.

Our goal is to develop an optimization framework and QoE metrics that service providers (or infrastructure operators) could use to answer the following questions: How can the impact of multi-tier video caching on end-user experience be quantified? What is the best multi-tier video caching strategy for a CDN? How can QoE metrics be optimized over the various ''control knobs''? Are there enough benefits to justify the adoption of the proposed solutions in practice?

It has been shown that, in modern cloud applications such as Facebook, Bing, and Amazon's retail platform, the long tail of latency is of major concern, with 99.9th percentile response times that are orders of magnitude worse than the mean [8], [9]. Thus, this paper considers a QoE metric, called the stall duration tail probability (SDTP), which measures the likelihood of end users suffering a worse-than-expected stall duration, and develops a holistic optimization framework for minimizing the overall SDTP over joint caching content placement, network resource optimization, and user request scheduling. SDTP, denoted by Pr(Γ^(i) > σ), measures the probability that the stall duration Γ^(i) of video i is greater than a pre-defined threshold σ. Despite resource and load-balancing mechanisms, evaluations of large-scale storage systems show that there is a high degree of randomness in delay performance [10]. In contrast to web object caching and delivery, the video chunks in the latter part of a video do not have to be downloaded much earlier than their actual play time to maintain the desired QoE, making SDTP highly dependent on the joint optimization of resource management and request scheduling in CDN-based video streaming. Quantifying SDTP with multi-tier cache/storage is an open problem. Even for single-chunk video files, the problem is equivalent to minimizing the download tail latency, which is still an open problem [11].
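As a minimal illustration of the metric itself, the SDTP of a video can be estimated empirically from simulated or measured stall durations; the sketch below assumes a list of per-request stall durations is available and uses a synthetic stand-in stall model.

```python
import numpy as np

def sdtp(stall_durations, sigma):
    """Empirical stall duration tail probability Pr(Gamma > sigma).

    stall_durations: per-request stall durations (seconds) of a video,
    obtained from measurements or simulation.
    """
    s = np.asarray(stall_durations, dtype=float)
    return float(np.mean(s > sigma))

# Example: 10,000 simulated requests, threshold sigma = 5 s
rng = np.random.default_rng(0)
stalls = rng.exponential(scale=2.0, size=10_000)  # stand-in stall model
print(sdtp(stalls, sigma=5.0))
```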
The key challenge arises from the difficulty of constructing and analyzing a scheduling policy that (optimally) redirects each request on the fly, based on dependent system and queueing dynamics (including cache content, network conditions, and request queue status). To overcome these challenges, we propose a novel two-stage probabilistic scheduling approach, where each request for video i is (i) processed by cache server j with probability π_{i,j} and (ii) assigned to video stream v with probability p_{i,j,v}. The two-stage probabilistic scheduling allows us to model each cache server and video stream as separate queues, and thus to characterize the distributions of the download and playback times of different video chunks. Further, the edge caching policy plays a key role in the system design. This paper proposes an adaptation of the least-recently-used (LRU) caching mechanism [12], where each file is removed from the edge cache if it has not been requested again for a time that depends on the edge router and the file index. By optimizing these probabilities and the edge-cache parameters, we quantify SDTP through a closed-form, tight upper bound for CDN-based video streaming with arbitrary cache content placement and network resource allocation.

We note that the analysis in this paper is fundamentally different from that for distributed file storage, e.g., [13], [14], because the stall duration of a video relies on the download times of all its chunks, rather than simply the time to download the last chunk of a file. Further, since video chunks are downloaded and played sequentially, the download times and playback times of different video chunks are highly correlated and thus jointly determine the SDTP metric.

This paper proposes a holistic optimization framework for minimizing the overall SDTP in CDN-based video streaming. To the best of our knowledge, this is the first framework for multi-tier caching to jointly consider all key design degrees of freedom, including bandwidth allocation among different parallel streams, multi-tier cache content placement and update, request scheduling, and the modeling variables associated with the SDTP bound. An efficient algorithm is then proposed to solve this non-convex optimization problem. In particular, the proposed algorithm performs an alternating optimization over the different dimensions, such that each sub-problem is shown to have convex constraints and can thus be efficiently solved using the iNner cOnVex Approximation (NOVA) algorithm proposed in [15]. The proposed algorithm is implemented in a virtualized cloud system managed by Openstack [16]. The experimental results demonstrate significant improvement of the QoE metrics as compared to the considered baselines. The main contributions of this paper can be summarized as follows:

• We propose a novel framework for analyzing CDN-based over-the-top video streaming systems with the use of multiple caching tiers and multiple parallel streams between nodes. A novel two-stage probabilistic scheduling policy is proposed to assign each user request to different cache servers and parallel video streams. Further, the edge router uses an adaptation of LRU, and the distributed cache servers cache partial files.

• The distribution of the (random) download time of different video chunks is analyzed. Then, using ordered statistics, we quantify the playback time of each video segment.

• Multiple recursive relations are set up to compute the stall duration tail probability.
We first relate the download time of each chunk to its play time, since the play time depends not only on the download of the current chunk but also on those of the previous chunks. Second, the stall duration must account for whether the file has been requested by anyone within a time window of a certain size, so as to exploit the edge cache. If it had been requested, the stall duration is a function of the time of the last request and of the stall duration at that time. In the steady-state analysis, this leads to a recursion. This analysis has been used to derive an analytical upper bound on the SDTP for arbitrary distributed cache content placement, edge cache parameters, and parameters of the two-stage probabilistic scheduling (Appendix B).

• A holistic optimization framework is developed to optimize a weighted sum of the SDTPs of all video files over the request scheduling probabilities, the distributed cache content placement, the bandwidth allocation among different streams, the edge cache parameters, and the modeling parameters in the SDTP bound. An efficient algorithm is provided to decouple and solve this non-convex optimization (Section V).

• To better understand the SDTP and how it relates to the QoE of users, we correlate this metric with a well-known QoE metric (the mean stall duration). Since the optimal point for the mean stall duration is not the same as that for the SDTP, we optimize a convex combination of the two metrics and show how the two QoE metrics can be traded off based on the point on the curve that is appropriate for the clients (Appendix I).

• The algorithm is implemented on a virtualized cloud system managed by Openstack. The simulation and trace-based results validate our theoretical analysis, with the implementation and analytical results being close, thus demonstrating the efficacy of our proposed algorithm. The QoE metric shows significant improvement compared to competitive strategies (Section VI).

The rest of this paper is organized as follows. Section II provides related work for this paper. In Section III, we describe the system model used in the paper, with a description of CDN-based over-the-top video streaming systems. Section IV provides an upper bound on the stall duration tail probability. Section V formulates the QoE optimization problem as a weighted sum of the SDTPs of all files and proposes an iterative algorithmic solution to this problem. Experimental results are provided in Section VI. Section VII concludes the paper.

II. RELATED WORK

Video-on-demand services and live TV content from cloud servers have been studied widely [17]-[21]. The placement of content and resource optimization over the cloud servers has been considered. In [20], the authors utilize the social information propagation pattern to improve the efficiency of social video distribution. Further, they used replication and a user request dispatching mechanism in the cloud content delivery network architecture to reduce the system operational cost while maintaining the average service latency. However, [20] only considers video download. The benefits of delivering videos at the edge network are shown in [21]. The authors show that bringing videos to the edge network can significantly improve content delivery performance, in terms of improving the quality experienced by users as well as reducing content delivery costs. To the best of our knowledge, reliability of content over the cloud servers has not been considered for video streaming applications.
There are novel challenges in characterizing and optimizing the QoE metrics at the end user. Adaptive streaming algorithms have also been considered for video streaming [22]-[25]; they are beyond the scope of this paper and are left for future work. Mean latency and tail latency have been characterized in [13], [14], [26] and in [27], [28], respectively, for a system with multiple files using probabilistic scheduling. However, these papers consider only file downloading rather than video streaming. This paper considers CDN-based video streaming. We note that file downloading follows as a special case of streaming, which makes our model more general. Additionally, the metrics for video streaming account not only for the end of the download of the video but also for the download of each segment. Hence, the analysis for content download cannot be extended directly to video streaming, and the analysis approach in this paper is very different from prior work in the area of file downloading.

More recently, the authors of [29] considered video streaming over distributed storage systems. However, cache placement optimization is not considered there. Also, caching at the edge level is not considered. Moreover, only a single stream between each storage server and edge node is assumed, and hence neither two-stage probabilistic scheduling nor bandwidth allocation was considered. Similarly, there is no edge cache in [30]. Thus, the analysis and the problem formulation here extend those in [29], [30].

III. SYSTEM MODEL

A. Target System

Our work is motivated by the architecture of a production system with a Virtualized Content Distribution Network (vCDN). Such services include, for instance, video-on-demand (VoD), live linear streaming services (also referred to as over-the-top video streaming services), firmware-over-the-air (FOTA) Android updates to mobile devices, etc. The main role of this CDN infrastructure is not only to provide users with lower response times and higher bandwidth, but also to distribute the load (especially during peak time) across many edge locations. Consequently, the core backbone network will have a reduced network load and better response times. The origin server has the original data, and the CDN sites have only part of those data. Each CDN site is composed of multiple cache servers, each of which is typically implemented as a VM backed by multiple directly attached solid state drives (SSDs) for higher throughput. The cache servers store video segments, and a typical segment covers 5-11 seconds of playback time. Further, the typical vCDN architecture includes an additional cache at the edge, called the edge cache. This edge cache allows some recently accessed videos to be saved. This cache can also help multicast content to another user connected to the same edge router. A typical policy used in edge caches is the least-recently-used (LRU) caching policy [12]. In this paper, we consider a modification of this strategy that weighs the eviction of contents depending on their weight, placement, and access rates, and can thus be optimized.

When a client such as a VoD/LiveTV app requests a certain content, the request goes through multiple steps. First, the client checks whether the content is in the edge cache. If so, the content is directly accessed from the edge cache. Second, it checks whether the content has been requested by someone connected to the same edge router and is currently being sent to them.
In this case, the content already received at the edge router is sent to the user, and the remaining content is passed along as received (equivalent to a multicast stream setup). Third, if the content cannot be obtained in the first two steps, the client contacts the CDN manager, chooses the best CDN service to use, and retrieves a fully qualified domain name (FQDN). Fourth, with the acquired FQDN, it obtains a cache server's IP address from a content routing service (called iDNS). The client then uses the IP address to connect to one of the cache servers. The cache server directly serves the incoming request if it has the data in its local storage (cache-hit). If the requested content is not on the cache server (i.e., a cache-miss), the cache server fetches the content from the origin server and then serves the client. In the rest of this section, we present a generic mathematical model applicable not only to our vCDN system but also to other video streaming systems that implement a CDN-like two-tier caching structure.

B. System Description

We consider a content delivery network as shown in Fig. 1, consisting of a single datacenter that has an origin server, m geographically distributed cache servers denoted by j = 1, . . . , m, and edge-cache storage nodes associated with the edge routers ℓ ∈ {1, 2, · · · , R}, where R is the total number of edge routers, as depicted in Figure 2. The compute cache servers (also called storage nodes) are located close to the edge of the network and thus provide lower access latency for end users. We also assume that each cache server is connected to one edge router. Further, the connection from the edge router to the users is not considered a bottleneck. Thus, the edge router is treated as an aggregation of users and is the last hop in our analysis. We also note that the link from the edge router to the end users is not controlled by the service provider and thus cannot be considered for optimized resource allocation from the network; the service provider wishes to optimize the links it controls for an efficient quality of experience for the end user.

A set of r video files (denoted by i = 1, . . . , r) is stored in the datacenter, where video file i is divided into L_i equal-size segments, each of length τ seconds. We assume that the first L_{j,i} chunks of video i are stored on cache server j. Even though we consider a fixed cache placement, we note that the L_{j,i} are optimization variables and can be updated when a sufficient change in the arrival rates is detected. We assume that the bandwidth between the datacenter and the cache server j (serving edge router ℓ) is split into d_j parallel streams, denoted PS^(d,j)_{β_j}, for β_j = 1, · · · , d_j. Further, the bandwidth between the cache server j and the edge router ℓ is divided into f^(ℓ)_j parallel streams, denoted PS^(f,j)_{ζ_j}, for ζ^(ℓ)_j = 1, · · · , f^(ℓ)_j and ℓ = 1, 2, · · · , R. Multiple parallel streams are assumed for video streaming since multiple video downloads can happen simultaneously. Since we care about stall duration, obtaining multiple videos simultaneously is helpful, as the stall durations of multiple videos can be improved. We further assume that the bandwidth of each link is split among its parallel streams according to nonnegative weights, for all j = 1, · · · , m and ℓ = 1, · · · , R. We note that the sum of the weights may be less than 1, in which case some amount of the bandwidth is wasted. Since the optimal solution will satisfy this with equality for better utilization, we do not need to explicitly enforce the equality constraint.
We note that if a cache server serves multiple edge routers, the parallel streams between the cloud storage and the cache server will be the sum of the d_j to each edge router, thus making the problem separable per edge router. For ease of exposition, we will sometimes omit ℓ to focus on the links to one edge router only; the same procedure can be used for each edge router. We assume that the service time of a segment for data transfer from the datacenter to cache server j is shifted-exponential, while that between cache server j and edge router ℓ is also shifted-exponential, with rate α^(f_j)_{j,ℓ} and a shift of η^(f_j)_{j,ℓ}. The shifted-exponential distribution can be seen as an approximation of realistic service time distributions, as in prior works, e.g., [31] and references therein. We also note that the rate of a parallel stream is proportional to its bandwidth split; the service time distributions of the parallel streams follow accordingly for all β_j, ν_j, and ℓ. We further define the moment generating functions of the service times of the parallel streams. We also assume that there is a start-up delay of d_s (in seconds) for the video, which is the duration during which content can be buffered but not played. Table III (in Appendix H) summarizes the key notation used in this paper.

C. Edge-cache Model

Edge cache ℓ ∈ {1, 2, · · · , R}, where R is the total number of edge routers, stores video content closer to the end users. This improves the QoE of the end users. We assume a limited cache size at the edge router (edge cache), with a maximum capacity of C_{ℓ,e} seconds at edge router ℓ. When a file is requested by a user, the edge cache is first checked to see whether the file is there, completely or partly (in case some other user is watching that content). If the file is not in the edge cache, space for this video file is created in the edge cache, and one or more other video files may have to be evicted so as not to violate the space constraint.

We consider the following edge cache policy. File i is removed from the edge cache if it has not been accessed within a time ω_{i,ℓ} after its last request time at edge router ℓ. The parameter ω_{i,ℓ} is a variable that can be optimized based on the file's preference and its placement in the CDN cache. This caching policy is motivated by LRU, since a file is evicted if it has not been used for some time. The key advantages of this approach are that (i) it is tunable, in the sense that the parameters ω_{i,ℓ} can be optimized, and (ii) the performance of the policy is easier to optimize than that of LRU. When a file i is requested and someone has already requested it from edge router ℓ within the last ω_{i,ℓ} time units, the file is obtained from the edge router. Even if the file is not yet completely at the edge router (not yet finished downloading), the already-downloaded part is given directly to the new user, and the remaining content is delivered as it becomes available at the edge cache. This is akin to multicasting the remaining part of the video to multiple users [32]. An illustration of the evolution of the caching policy is given in Figure 3, where the index ℓ is omitted since we consider the procedure at a single edge router. Video file i is requested at three times t_1, t_2, and t_3. At t_1, the file enters the edge cache. Since it is not requested again within ω_i time units, it is evicted. When the file is requested again at t_2, space for the file is reserved in the edge cache.
The file, when requested at t_3, is within the ω_i duration from t_2 and is thus served from the edge cache. If the file is still not completely in the edge cache, the part already there is obtained directly, while the remaining part is streamed as it becomes available. Since file i is not requested for a time ω_i after t_3, the file is again evicted from the edge cache. We note that the arrival rate of file requests is random. Thus, this file eviction policy may not satisfy the maximum edge cache capacity constraint at all times. In order to handle this, we first assume, for the analytical optimization, that the probability that the capacity of edge cache ℓ is violated is bounded by a small constant. This leads us to a rough estimate of the different parameters in the system. The hard constraint on the capacity can be enforced at run-time by evicting the files that are closest to being evicted anyway, based on when they were requested and the corresponding ω_{i,ℓ}. This online adaptation is explained in Appendix J.

D. Queueing Model and Two-stage Probabilistic Scheduling

If cache server j is chosen for accessing video file i at edge router ℓ, the first L_{j,i} chunks are obtained from one of the e^(ℓ)_j parallel streams PS^(e,j)_{ν_j,ℓ}. Further, the remaining L_i − L_{j,i} chunks are obtained from the datacenter, where a choice of β_j is made from 1, · · · , d_j and the chunks are obtained from the stream PS^(d,j)_{β_j}; after being served from this queue, they are enqueued in the queue for the stream PS^(d,j)_{β_j,ℓ}. However, if video file i was already requested at a time t_i within a window of size ω_i, the request is served from the edge cache and is not sent to a higher level of the hierarchy, e.g., a cache server. We assume that the arrivals of requests at the edge router for each video i form an independent Poisson process with a known rate λ_{i,ℓ}. In order to serve a request for file i, we need to choose three things: (i) the cache server j; (ii) ν_j, determining one of the PS^(e,j)_{ν_j,ℓ} streams to deliver the cached content; and (iii) β_j, determining one of the PS^(d,j)_{β_j} streams from the datacenter, which automatically selects the stream PS^(d,j)_{β_j,ℓ} from the cache server, to obtain the non-cached content from the datacenter. Thus, we use a two-stage probabilistic scheduling to select the cache server and the parallel streams. For a file request at edge router ℓ, we choose server j with probability π_{i,j,ℓ} for file i at random. Further, having chosen the cache server, one of the e_j streams is chosen with probability p_{i,j,ν_j,ℓ}. Similarly, one of the d_j streams is chosen with probability q_{i,j,β_j,ℓ}. We note that these probabilities only have to satisfy Σ_{j=1}^m π_{i,j,ℓ} = 1 for all i, ℓ (and similarly for p and q). We note that since file i is removed from the edge cache after time ω_i, the requests at the cache server are no longer Poisson. This could be alleviated by assuming that, every time the file is requested, the time ω_i is chosen according to an exponential distribution. This change of distribution makes the request process at the cache server Poisson, thus alleviating the issue. In the following, we assume constant ω_i, while still approximating the request pattern at the cache servers as Poisson, which holds when the times for which a file remains in the edge cache are chosen according to an exponential distribution. This approximation turns out to be quite accurate, as shown in the evaluation results.
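A minimal sketch of this request path is given below; the class name and the in-memory bookkeeping are illustrative assumptions, not the paper's implementation.

```python
import random
import time

class EdgeCacheRouter:
    """Window-based edge cache with two-stage probabilistic scheduling.

    omega[i]: eviction window (s) for file i at this edge router.
    pi[i][j]: probability of choosing cache server j for file i.
    p[i][j][v], q[i][j][b]: stream-choice probabilities (second stage).
    """

    def __init__(self, omega, pi, p, q):
        self.omega = omega
        self.pi, self.p, self.q = pi, p, q
        self.last_request = {}  # file -> time of last request

    def route(self, i, now=None):
        now = time.time() if now is None else now
        last = self.last_request.get(i)
        self.last_request[i] = now
        # Served from the edge cache (or ongoing multicast) if requested
        # again within the window omega_i.
        if last is not None and now - last <= self.omega[i]:
            return ("edge", None, None, None)
        # Stage 1: pick cache server j with probability pi_{i,j}.
        servers = list(self.pi[i].keys())
        j = random.choices(servers, weights=[self.pi[i][s] for s in servers])[0]
        # Stage 2: pick streams for cached and non-cached chunks.
        v = random.choices(list(self.p[i][j]), weights=list(self.p[i][j].values()))[0]
        b = random.choices(list(self.q[i][j]), weights=list(self.q[i][j].values()))[0]
        return ("cdn", j, v, b)
```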
Further, such approximations of Poisson arrivals are widely used in the literature in similar fashions. In particular, they are used to characterize the coherence time of LRU-based caching, e.g., see [33] and references therein. Further, in [34] (Ch. 9, page 470), the authors approximate the arrivals of new and retransmitted packets in the CSMA protocol as Poisson, even though they are not, due to the dependencies between them. Since the sampling of a Poisson process is Poisson, and the superposition of independent Poisson processes is also Poisson, the aggregate arrival rates at the streams PS_{j,ν_j,ℓ} follow accordingly.

Lemma 1. If ω_i follows an exponential distribution (i.e., is not fixed) with parameter ν_i, the probability that a request for video file i is directed to the distributed cache servers and/or the central server is given by P(t_i > ω_i) = ν_i/(λ_i + ν_i).

Proof. Let t_i be the inter-arrival time between requests for video file i. Hence, if video file i is requested within ω_i of its last request, the request is served from the edge cache (i.e., t_i ≤ ω_i). In contrast, if video file i is not at the edge cache, the request is forwarded to higher hierarchy levels (distributed cache servers and/or central cloud storage). Since t_i and ω_i are exponentially distributed with parameters λ_i and ν_i, respectively, the probability of directing a file i request to the distributed cache servers and/or central cloud storage is given by P(t_i > ω_i). This proves the statement of the lemma.

Lemma 2. When the service time distribution of the datacenter server (first queue) is given by a shifted-exponential distribution, the arrivals at the cache servers (second queue) are Poisson.

Proof. The proof is provided in Appendix A.

E. Distribution of Edge Cache Utilization

We now investigate the distribution of the edge-cache utilization at any time. This helps in bounding the probability that the edge cache load exceeds the capacity of the cache. In the analytic part, we bound this probability; the online adaptations in Appendix J provide an adaptation that maintains the maximum edge cache capacity constraint at all times. Let X_{i,ℓ} be the random variable corresponding to the amount of space occupied in the edge cache by video file i. Since the file arrival process is Poisson, and the file is in the edge cache if it has been requested in the last ω_{i,ℓ} seconds, file i occupies its space with probability 1 − e^{−λ_{i,ℓ} ω_{i,ℓ}}, which is the probability that file i is requested within a window of ω_{i,ℓ} time units. The total utilization of edge cache ℓ is the sum of the X_{i,ℓ} over all files, and the mean and variance of this sum can be computed accordingly. Since r is large, we approximate the distribution of the total utilization by a Gaussian distribution with the corresponding mean and variance. This distribution is then used as a constraint in the design of ω_{i,ℓ}, where the constraint bounds the probability that the edge cache utilization exceeds the maximum capacity of the edge cache. Since the total utilization can be well approximated by a Gaussian distribution, the edge cache utilization can be probabilistically bounded using its mean and variance.

IV. STALL DURATION TAIL PROBABILITY

This section characterizes the stall duration tail probability under the two-stage probabilistic scheduling and the allocation of bandwidth weights. We note that the arrival rates are given in terms of video files, while the service rate above is provided in terms of segments at each server. The analysis therefore requires detailed consideration of the different segments of a video.
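Before proceeding to the stall analysis, the Gaussian capacity bound of Section III-E can be sketched numerically as follows; the sketch assumes that a cached file i occupies L_i·τ seconds of cache space, with each file present independently with probability 1 − e^{−λ_i ω_i}, consistent with the Bernoulli model above.

```python
import math
import numpy as np
from scipy.stats import norm

def cache_overflow_probability(lams, omegas, sizes, capacity):
    """Gaussian approximation of Pr(edge-cache utilization > capacity).

    lams[i]: Poisson request rate of file i (1/s)
    omegas[i]: eviction window of file i (s)
    sizes[i]: space taken by file i when cached (L_i * tau, in s of video)
    Each file is cached with probability 1 - exp(-lam_i * omega_i).
    """
    q = 1.0 - np.exp(-np.asarray(lams) * np.asarray(omegas))
    sizes = np.asarray(sizes, dtype=float)
    mean = np.sum(sizes * q)
    var = np.sum(sizes ** 2 * q * (1.0 - q))
    return float(norm.sf(capacity, loc=mean, scale=math.sqrt(var)))

# Example: 500 files of ~30 min each, capacity = 15% of the catalog size
rng = np.random.default_rng(0)
lams = rng.uniform(1e-5, 1e-3, 500)
omegas = np.full(500, 3600.0)
sizes = np.full(500, 1800.0)
print(cache_overflow_probability(lams, omegas, sizes, 0.15 * sizes.sum()))
```

Given target overflow probability, this bound can be inverted to constrain the feasible ω_{i,ℓ} in the optimization.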
In this section, we assume that the edge router for the request is known, and thus we omit the subscript/superscript ℓ to simplify notation. In order to find the stall durations, we first consider the case where file i is not in the edge cache and has to be requested from the CDN. We also assume that cache server j is used, with the streams β_j and ν_j known. We later consider the distribution of these choices to compute the overall metric. In order to compute stall durations, we first calculate the download time of each video segment, which accounts for the first L_{j,i} segments at cache j and the remaining L_i − L_{j,i} segments at the origin server. After the download times are found, the play times of the different contents are determined. The detailed calculations are shown in Appendix B, where the distribution of T^(g)_{i,j,β_j,ν_j}, the time at which segment g begins to play at client i given that it is downloaded from the β_j and ν_j queues, is found. The stall duration for the request of file i from the β_j queue, ν_j queue, and server j, if not in the edge cache, i.e., Γ^(i,j,β_j,ν_j)_U, is given in Appendix B. We use this expression to derive a tight bound on the SDTP.

The stall duration tail probability of a video file i is defined as the probability that the stall duration is greater than a pre-defined threshold σ. Since the exact evaluation of the stall duration is hard [29], [35], we cannot evaluate Pr(Γ^(i)_tot > σ) exactly, where Γ^(i)_tot is the random variable indicating the overall stall duration for file i. In this section, we derive a tight upper bound on the SDTP through the two-stage probabilistic scheduling, as follows. We first note that the expression in equation (76) (Appendix B) accounts only for the stalls that would be incurred if the video segments are not accessed from the edge cache (whether stored or multicasted). However, the user experiences shorter stalls if the requested content is accessed from the edge cache. Thus, we need an expectation over the choice of whether the file is accessed from the edge server, and over the choice of (j, β_j, ν_j), in addition to the queue statistics. For a video file i requested at a time t_i after the last request for file i, the stall duration can be expressed as an equality in distribution (denoted d=, meaning equal in distribution). This is because, if file i is requested again within ω_i time, the multicast or stored file leads to a reduced stall duration based on how much time has passed since the last request. Further, if the file has not been requested in the last ω_i time units, the file has to be obtained from the CDN, and the random variable Γ^(i)_tot then also includes the randomness over the choice of (j, β_j, ν_j). From (18), we can obtain the following result.

Lemma 3. For a given choice of (j, β_j, ν_j), the conditional distribution of the stall duration can be bounded using the following two lemmas, which are used in the main result. The key idea is that we characterize the download and play times of each segment and use them to determine the SDTP of each video file request.

Proof. The proof follows from (53) in Appendix B by replacing g by v and rearranging the terms in the result.

Using the expressions for E[e^{h_i U_{i,j,β_j,v,L_{j,i}}} | (j, β_j, ν_j)] and the related bounds, the following theorem summarizes the stall duration tail probability for file i. We include the edge router index ℓ in all the expressions in the result, for ease of use in the following section.

Theorem 1.
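The chunk-level bookkeeping behind these expressions can be sketched as follows; this is a generic streaming stall computation under the stated startup delay d_s and segment length τ, not the paper's exact derivation.

```python
def stall_duration(download_times, tau, ds):
    """Total stall duration of one video request.

    download_times[g]: time (s after the request) at which segment g
    finishes downloading; tau: segment playback length (s);
    ds: startup delay (s). Segment g can start playing no earlier than
    when it is downloaded, and no earlier than the end of segment g-1.
    """
    play_start = ds  # ideal start of segment 0 after buffering
    total_stall = 0.0
    for d in download_times:
        start = max(play_start, d)
        total_stall += start - play_start  # stall before this segment
        play_start = start + tau           # earliest start of next segment
    return total_stall

# Example: 5 segments of 8 s each, startup delay 5 s
print(stall_duration([2.0, 6.0, 15.0, 22.0, 40.0], tau=8.0, ds=5.0))
```

Applying this computation to the random download times of the two queue stages, and taking the expectation over the edge-cache hit event and the scheduling choices, yields the SDTP bound of Theorem 1 below.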
The stall duration tail probability for video file i requested through edge router ℓ is bounded, for ρ_{j,ν_j,ℓ} < 1, by a closed-form expression in terms of the auxiliary variables defined in the statement of the theorem.

Proof. The detailed steps are provided in Appendix E.

We note that δ^(e,ℓ) = δ^(d,ℓ) = 0 if the storage server nodes are not hosting the requested video files, and δ^(d,d,ℓ) has a nonzero value only if some segments have to be fetched from the datacenter. We can also derive the mean stall duration for video file i in a similar fashion; the interested reader is referred to Appendix I for a detailed treatment of this metric.

To incorporate weighted fairness and differentiated services, we assign a positive weight κ_{i,ℓ} to each file i. Without loss of generality, each file i is weighted by the arrival rate λ_{i,ℓ} in the objective (so larger arrival rates are weighted higher). However, any other weights can be incorporated to accommodate weighted fairness or differentiated services. Let λ = Σ_{i,ℓ} λ_{i,ℓ} be the total arrival rate. Then κ_{i,ℓ} = λ_{i,ℓ}/λ is the fraction of requests for file i. Hence, the objective is the minimization of the stall duration tail probability, averaged over all file requests, i.e., a weighted sum over i and ℓ. By using the expression for the SDTP in Section IV, the optimization problem can be formulated as follows. In (36), equations (1)-(3) give the feasibility constraints on the bandwidth allocation, equations (4)-(6) define the MGFs of the service time distributions, equations (7)-(10) give the feasibility of the two-stage probabilistic scheduling, and (11)-(12) define the arrival rates at the different queues. Constraints (37)-(39) ensure the stability of the system queues (i.e., that they do not blow up to infinity). Constraints (41)-(44) ensure that the moment generating functions exist. We note that some optimization variables can be combined into a single optimization variable, which results in having only five independent and separable variables, as shown below. In the next subsection, we describe the proposed algorithm for this optimization problem.

B. Proposed Algorithm

We first note that the two-stage probabilistic scheduling variables are independent and separable; thus we can combine them and define a single variable π such that π = (π, p, q). Similarly, since the bandwidth allocation weights are independent and separable, we concatenate the weights of the three types of parallel streams into a single optimization variable w. Hence, the weighted SDTP optimization problem given in (35)-(45) is optimized over five sets of variables: the server and PS scheduling probabilities π (two-stage scheduling probabilities), the auxiliary parameters h, the bandwidth allocation weights w, the cache placement L, and the edge-cache window sizes ω. Clearly, the problem is jointly non-convex in all the parameters, which can easily be seen from the terms that are products of the different variables. Since the problem is non-convex, we propose an iterative algorithm to solve it. The proposed algorithm divides the problem into five sub-problems, each optimizing one variable while fixing the remaining four. These sub-problems are labeled as follows: (i) Server and PSs Access Optimization: optimizes π for given h, w, ω, and L; (ii) Auxiliary Variables Optimization: optimizes h for given π, w, ω, and L; (iii) Bandwidth Allocation Optimization: optimizes w for given π, h, ω, and L; (iv) Cache Placement Optimization: optimizes L for given π, h, ω, and w; (v) Edge-cache Window Size Optimization: optimizes ω for given π, h, L, and w. The algorithm is summarized as follows.
1) Initialization: Initialize h, π, w, ω, and L in the feasible set. 2) While the objective has not converged: solve each of the five sub-problems in turn, fixing the remaining variables. The proposed algorithm performs an alternating optimization over the different aforementioned dimensions, such that each sub-problem is shown to have convex constraints and thus can be efficiently solved using the iNner cOnVex Approximation (NOVA) algorithm proposed in [15]. The sub-problems are explained in detail in Appendix F. We first initialize π, w, h, ω, and L ∀i, j, ν_j, β_j such that the choice is feasible for the problem. Then, we perform alternating minimization over the five sub-problems defined above. Since each sub-problem can only decrease the objective (the convergence of the sub-problems to a stationary point is shown in Appendix F) and the overall problem is bounded from below, we have the following result. Theorem 2. The proposed algorithm converges to a stationary solution. Appendix J describes how our algorithm can be used in an online fashion to keep track of the system dynamics at the edge-cache.
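For intuition, the following is a hypothetical skeleton of the five-block alternating minimization described above; each block update stands in for one NOVA sub-problem (Algorithms 1-5), and here it is a finite-difference projected-gradient step on a toy objective, not the paper's weighted-SDTP bound.

```python
import numpy as np

# Toy stand-in for the weighted-SDTP objective (illustrative only).
def objective(v):
    return (np.sum(np.sin(v["pi"]) ** 2) + np.sum((v["w"] - 1.0) ** 2)
            + np.sum(v["h"] ** 2) + np.sum(v["L"] ** 2)
            + np.sum(v["omega"] ** 2))

def solve_block(v, key, lr=0.1, eps=1e-6):
    # finite-difference gradient on block `key`, other blocks fixed
    grad = np.zeros_like(v[key])
    for i in range(v[key].size):
        d = np.zeros_like(v[key])
        d.flat[i] = eps
        vp = dict(v)
        vp[key] = v[key] + d
        grad.flat[i] = (objective(vp) - objective(v)) / eps
    v[key] = np.clip(v[key] - lr * grad, 0.0, None)  # projection: keep >= 0
    return v

v = {k: np.ones(3) for k in ("pi", "h", "w", "L", "omega")}
prev = np.inf
while prev - objective(v) > 1e-8:                 # step 2): until convergence
    prev = objective(v)
    for key in ("pi", "h", "w", "L", "omega"):    # sub-problems (i)-(v)
        v = solve_block(v, key)
print("converged objective:", round(float(objective(v)), 6))
```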
VI. IMPLEMENTATION AND EVALUATION In this section, we evaluate our proposed algorithm for the weighted stall duration tail probability. A. Testbed Configuration and Parameter Setup We construct an experimental environment in a virtualized cloud environment managed by Openstack [16] to investigate our proposed SDTP framework. We allocated one VM for an origin server and 5 VMs for cache servers, intended to simulate two locations (e.g., different states). We implement the proposed online caching mechanism in the edge cache, which takes as input the ω_{i,ℓ} at each edge router. When a video file is requested, it is stored in the edge-cache for a window of ω_{i,ℓ} time units (unless requested again in this window). For future requests within ω_i or concurrent user requests, the requests for the video chunks are served from the edge-cache, and thus future/concurrent users experience lower stall durations. If the file can be accessed from the edge router, the higher caching level is not used for this request, which consequently reduces the traffic at the core backbone servers. If the file cannot be accessed from the edge router, the request goes to the distributed cache. We assume some segments, i.e., L_{j,i}, of video file i are stored in the distributed cache node j, and these are served from the cache nodes. The non-cached segments are served from the data center. The schematic of our testbed is illustrated in Figure 4. Since the two edge routers are likely in different states, they may not share the cache servers, which is the setup we study in the experiments. We note that the theoretical approach proposed earlier is general and can work with shared cache servers across multiple edge routers. One VM per location is used for generating client workloads. Table II summarizes the detailed configuration used for the experiments. For the client workload, we use the popular HTTP-traffic generator Apache JMeter, with a plug-in that can generate traffic using an HTTP streaming protocol. We assume the available bandwidth is 200 Mbps between the origin server and each cache server, 500 Mbps between cache servers 1/2 and edge router 1, and 300 Mbps between cache servers 3/4/5 and edge router 2. In this experiment, to allocate bandwidth to the clients, we throttle the client (i.e., JMeter) traffic according to the plan generated by our algorithm. We consider 1000 threads (i.e., users) and set e^{(ℓ)}_j = 40 for all ℓ = 1, 2, and d_j = 20. The segment size τ is set to 8 seconds. Each edge cache is assumed to have a capacity equivalent to 15% of the total size of the video files. Further, the distributed cache servers can store up to 35% of the total number of video file segments. The values of α_j and η_j are summarized in Table I. Video file sizes are generated based on a Pareto distribution [40] (as it is a commonly used distribution for file sizes [41]) with shape parameter 2 and scale parameter 300. While we use these parameters in the experiments, our analysis and results remain applicable for any setting, given that the system maintains stable conditions under the chosen parameters. Since we assume that the video file sizes are not heavy-tailed, the first 500 files whose sizes are less than 60 minutes are chosen. When generating video files, the size of each video file is rounded up to a multiple of 8 seconds. For the arrival rates, we use the data from our production system for 500 hot files from two edge routers, and use those arrival rates. The aggregate arrival rates at edge router 1 and edge router 2 are Λ_1 = 0.01455 s^{−1} and Λ_2 = 0.02155 s^{−1}, respectively. In order to generate the policy for the implementation, we assume uniform scheduling, π_{i,j} = k/n, p_{j,ν_j} = 1/e_j, q_{j,β_j} = 1/d_j. Further, we choose t_i = 0.01 and w_{j,β_j} = 1/d_j. However, these choices of the initial parameters may not be feasible. Thus, we modify the parameter initialization to the closest-norm feasible solution. Using this initialization, the proposed algorithm is used to obtain the parameters. These parameters are then used to control the bandwidth allocation, the distributed cache content placement, the probabilistic scheduling parameters, and the edge-caching window sizes. Based on these parameters, the proposed online algorithm is implemented. Since we assume the arrivals of video file requests are Poisson (and hence the inter-arrival time is exponential with rate λ_i for file i), we generate a sequence of 10000 video file arrivals/requests corresponding to the different files at each edge router. Upon the arrival of a video file request at the edge-cache, we apply our proposed online mechanism. For each segment, we used JMeter's built-in reports to estimate the download time of each segment and then plugged these times into our model to obtain the stall duration, which is used for the evaluation of the proposed method. B. Baselines We compare our proposed approach with multiple strategies, which are described as follows. 1) Projected Equal Server-PSs Scheduling, Optimized Auxiliary variables, Cache Placement, Edge-cache Window-Size, and Bandwidth Weights (PEA): Starting with the initial solution mentioned above, the problem in (35) is optimized over the choice of h, w, L, and ω (using Algorithms 2, 3, 4, and 5, respectively) using alternating minimization. Thus, the values of π_{i,j}, p_{i,j,ν_j}, and q_{i,j,β_j} will be approximately equal to k/n, 1/e_j, and 1/d_j, respectively, for all i, j, ν_j, β_j. 2) Projected Proportional Service-Rate, Optimized Auxiliary variables, Bandwidth Weights, Edge-cache Window-Size, and Cache Placement (PSP): In the initialization, the access probabilities among the servers are given as π_{i,j} = μ_j / Σ_{j′} μ_{j′}, ∀i, j. This policy assigns servers proportionally to their service rates. The choices of all parameters are then modified to the closest-norm feasible solution. Using this initialization, the problem in (35) is optimized over the choice of h, w, L, and ω (using Algorithms 2, 3, 4, and 5, respectively) using alternating minimization.
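A small sketch of the PSP initialization and the "closest-norm feasible solution" step follows. We assume here that the scheduling probabilities must satisfy Σ_j π_{i,j} = k with 0 ≤ π_{i,j} ≤ 1 (so that uniform scheduling is π_{i,j} = k/n); under that assumed constraint set, the bisection below computes the Euclidean projection.

```python
import numpy as np

# Euclidean projection onto {y : sum(y) = k, 0 <= y <= 1} by bisection on
# the dual variable; used to make an initialization feasible.
def project_capped_simplex(x, k, iters=60):
    lo, hi = x.min() - 1.0, x.max()
    for _ in range(iters):
        tau = 0.5 * (lo + hi)
        s = np.clip(x - tau, 0.0, 1.0).sum()
        lo, hi = (lo, tau) if s < k else (tau, hi)
    return np.clip(x - 0.5 * (lo + hi), 0.0, 1.0)

mu = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # service rates of n = 5 servers
k = 2
pi_psp = k * mu / mu.sum()                 # proportional-service-rate init
pi_feasible = project_capped_simplex(pi_psp, k)
print(pi_feasible, pi_feasible.sum())
```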
3) Projected Equal Caching, Optimized Scheduling Probabilities, Auxiliary variables and Bandwidth Allocation Weights (PEC): In this strategy, we divide the cache size equally among the video files. Thus, the size of each file in the cache is the same (unless the file is smaller than the cache size divided by the number of files). Using this initialization, the problem in (35) is optimized over the choice of π, h, w, and ω (using Algorithms 1, 2, 3, and 5, respectively) using alternating minimization. 4) Caching Hot Files, Optimized Scheduling Probabilities, Auxiliary variables, Edge-cache Window-Size, and Bandwidth Allocation Weights (CHF): In this strategy, we entirely cache the files that have the largest arrival rates in the storage cache servers. Such hot-file caching policies have been studied in the literature; see [12] and references therein. Using this initialization, the problem in (35) is optimized over the choice of π, h, w, and ω (using Algorithms 1, 2, 3, and 5, respectively) using alternating minimization. 5) Caching on a Least-Recently-Used basis at the edge-cache and Caching-Hottest files at storage nodes, Optimized Scheduling Probabilities, Auxiliary variables, Storage Cache Placement, and Bandwidth Allocation Weights (LRU): In this strategy, a file is entirely cached in the edge-cache servers upon request if space permits; otherwise, the least-recently-used file(s) is removed first to free the needed space for the new file. Further, the hottest files are partially cached in the distributed storage cache servers. Such hot-file caching policies have been studied in the literature, e.g., [12] and references therein. Using this initialization, the problem in (35) is optimized over the choice of π, h, and w (using Algorithms 1, 2, and 3, respectively) using alternating minimization. 6) Caching at the edge-cache based on the adaptSize policy [42] and Caching-Hottest files at storage nodes, Optimized Scheduling Probabilities, Auxiliary variables, Storage Cache Placement, and Bandwidth Allocation Weights (adaptSize): This policy is a probabilistic admission policy in which a video file is admitted into the cache with probability e^{−size/c}, so that larger objects are admitted with lower probability, and the parameter c is tuned to maximize the object hit rate (OHR), defined as the probability that a requested file is found in the cache. In particular, given a value of c and an estimate of the arrival rate of the requests for each video file, one can estimate the probability that a given file will be served from the edge-cache. One can then use these probabilities to compute the OHR as a function of c and then optimize. The value of c is recomputed after a certain number of file requests, using a sliding-window approach. We refer the reader to [42] for a more in-depth description. 7) Caching at the edge-cache based on variants of the LRU policy [33], Caching-Hottest files at storage nodes, Optimized Scheduling Probabilities, Auxiliary variables, Storage Cache Placement, and Bandwidth Allocation Weights (xLRU): We denote by xLRU one of these policies: qLRU, kLRU, and kRandom. A qLRU policy is the same as LRU except that files are only added with probability q. In kLRU, requested files must traverse k − 1 additional virtual LRU caches before they are added to the actual cache. kRandom is the same as kLRU except that files are evicted from the cache at random. The other optimization parameters are optimized the same way as in the adaptSize policy.
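The adaptSize admission rule above is easy to state in code; the following sketch shows only the probabilistic admission step, with an assumed value of the tuning parameter c (the OHR-maximizing tuning over a sliding window is omitted).

```python
import math
import random

# adaptSize-style admission: admit with probability exp(-size/c), so larger
# objects are admitted with lower probability. The value of c is assumed.
def admit(size_bytes: float, c: float) -> bool:
    return random.random() < math.exp(-size_bytes / c)

random.seed(1)
c = 50e6                                  # assumed tuning parameter (bytes)
for size in (1e6, 50e6, 500e6):
    p = math.exp(-size / c)
    print(f"size={size/1e6:6.0f} MB  admit prob={p:.3f}  sampled={admit(size, c)}")
```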
C. Experimental Results SDTP performance for different σ: Figure 5 shows the decay of the weighted SDTP Σ_{i=1}^{r} (λ_i/λ̄) Pr(Γ^{(i)} > σ) with σ (in seconds) for the considered policies. Note that the SDTP policy optimizes the weighted stall duration tail probability via the proposed alternating optimization algorithm. This figure also represents the complementary cumulative distribution function (ccdf) of the proposed algorithm as well as the selected baselines. We further observe that uniformly accessing servers and simple service-rate-based scheduling are unable to optimize the request scheduler based on factors like chunk placement, request arrival rates, and different stall weights, thus leading to much higher SDTP. Moreover, the figure shows that an entire video file does not have to be present in the edge-cache. That is because when the user requests a cached video, it is served by first sending the portion of the video locally present at the edge-cache while obtaining the remainder from the distributed cache servers and/or the origin server, and transparently passing it on to the client. In addition, we see that the analytical (offline) SDTP is very close to the actual (online) SDTP measurement on our testbed. Further, since the adaptSize policy does not intelligently incorporate the arrival rates in adding/evicting the video files, it fails to significantly reduce the SDTP. To the best of our knowledge, this is the first work to jointly consider all key design degrees of freedom, including the bandwidth allocation among different parallel streams, the cache content placement, the request scheduling, the window size of the edge-cache, and the modeling variables associated with the SDTP bound. Arrival Rates Comparisons: Figure 6 shows the effect of increasing system workload on the SDTP, obtained by varying the arrival rates of the video files from 0.01 s^{−1} to 0.03 s^{−1} with an increment step of 0.002 s^{−1}. We notice a significant improvement of the QoE metric with the proposed strategy as compared to the baselines. Further, the gap between the analytical offline bound and the actual online SDTP is small, which validates the tightness of our proposed SDTP bound. Further, while our algorithm optimizes the system parameters offline, this figure shows that an online version of our algorithm can be used to keep track of the system dynamics and thus achieve improved performance. Effect of Number of Files: Figure 7 shows the impact of varying the number of files from 150 to 550 on the weighted SDTP for the online algorithm. Clearly, the weighted SDTP increases with the number of files, which brings in more workload (i.e., higher arrival rates). However, our optimization algorithm optimizes new files along with existing ones to keep the overall weighted SDTP at a low level. We note that the proposed optimization strategy effectively reduces the tail probability and outperforms the considered baseline strategies. Thus, joint optimization over all optimization parameters helps reduce the tail probability significantly. Also, the gap between online and offline performance is almost negligible, which reflects the robustness of our algorithm. Additional performance evaluation is provided in Appendix K and Appendix L.
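For reference, weighted SDTP curves of the kind shown in Figure 5 can be computed from measured per-request stall durations as in the following sketch; the arrival rates and stall samples below are synthetic.

```python
import numpy as np

# Weighted SDTP from measured stalls: for each threshold sigma, take the
# arrival-rate-weighted tail probability over the r files.
rng = np.random.default_rng(0)
r = 5
lam = rng.uniform(0.01, 0.03, size=r)                 # per-file arrival rates
stalls = [rng.exponential(2.0, size=1000) for _ in range(r)]  # measured stalls

def weighted_sdtp(sigma):
    w = lam / lam.sum()
    return sum(w[i] * np.mean(stalls[i] > sigma) for i in range(r))

for sigma in (1, 2, 4, 8):
    print(f"sigma={sigma}: weighted SDTP = {weighted_sdtp(sigma):.4f}")
```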
VII. CONCLUSION This paper proposes a CDN-based, edge-cache-aided, over-the-top multicast video streaming system, where the video content is partially stored on distributed cache servers and an access-dependent online edge-caching strategy is used at the edge-cache. Further, this paper optimizes the weighted stall duration tail probability by considering two-stage probabilistic scheduling for the choice of servers and the parallel streams between the server and the edge router. Using the two-stage probabilistic scheduling and the edge-caching mechanism, an upper bound on the stall duration tail probability is characterized. Further, an optimization problem that minimizes the weighted stall duration tail probability is formulated, over the choice of two-stage probabilistic scheduling, bandwidth allocation, cache placement, edge-cache parameters, and the auxiliary variables in the bound. An efficient algorithm is proposed to solve the optimization problem, and the experimental results on a virtualized cloud system managed by Openstack depict the improved performance of our proposed algorithm as compared to the considered baselines. Possible extensions to accommodate multiple quality levels and different chunk sizes are discussed in Appendix M. However, a complete treatment of adaptive bit-rate video streaming is left as future work. VIII. ACKNOWLEDGMENT We would like to thank Prof. Tian Lan from GWU for helpful discussions related to this work. APPENDIX A PROOF OF LEMMA 2 It is well known in the queueing theory literature that the arrivals at the second queue of two M/M/1 queues in tandem are Poisson [43]. The service distribution of the first queue in this paper is a shifted exponential distribution. We note that the deterministic shift does not change this property, since the number of arrivals in any time window in the steady state will be the same. Thus, the arrival process at the second queue will still be Poisson. APPENDIX B DOWNLOAD AND PLAY TIMES OF A SEGMENT NOT REQUESTED IN ω_i Since we consider a fixed edge-router index, we omit it in this section. In order to characterize the stall duration tail probability, we need to find the download time and the play time of the different video segments, for any server j and streams with the choice of β_j and ν_j, assuming that the file was not requested in the last ω_i time units. The optimization over these decision variables is considered in Section V. A. Download Times of the First L_{j,i} Segments We consider a queueing model, where W_{i,j,ν_j} is the random service time of a chunk g of file i from server j and queue ν_j. Then, for g ≤ L_{j,i}, the random download time of segment g ∈ {1, . . . , L_{j,i}} of file i from stream PS^{(e,j)}_{ν_j} is given by the sum of the waiting time at the queue and the service times of chunks 1 through g. Since video file i has L_{j,i} segments stored at cache server j, the total service time for the video file i request at queue PS^{(e,j)}_{ν_j}, denoted by ST_{i,j,ν_j}, is the sum of the per-chunk service times. Hence, the MGF of the download time of segment g from the parallel stream PS^{(e,j)}_{ν_j} can be computed as in (53). We note that the above is defined only when the corresponding MGFs exist.
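Since the chunk service times in this model are shifted exponentials, the MGF manipulations of this appendix can be mirrored numerically; the sketch below computes the MGF of the total service time of the first L_{j,i} chunks as the L_{j,i}-th power of the per-chunk MGF, with illustrative parameter values.

```python
import numpy as np

# Shifted-exponential chunk service time with rate alpha and shift beta:
# M_W(t) = exp(beta*t) * alpha / (alpha - t), valid for t < alpha.
# The first L chunks are i.i.d., so the MGF of their total service time
# is the L-th power of the per-chunk MGF.

def mgf_shifted_exp(t, alpha, beta):
    assert t < alpha, "MGF exists only for t < alpha"
    return np.exp(beta * t) * alpha / (alpha - t)

def mgf_first_L_chunks(t, alpha, beta, L):
    return mgf_shifted_exp(t, alpha, beta) ** L

print(mgf_first_L_chunks(t=0.1, alpha=1.0, beta=0.05, L=8))
```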
B. Download Times of the Later L_i − L_{j,i} Segments Since the later video segments (L_i − L_{j,i}) are downloaded from the data center, we need to schedule them to the β_j streams using the proposed probabilistic scheduling policy. We first determine the time it takes for chunk g to depart the first queue (i.e., the β_j queue at the datacenter), defined in terms of the waiting and service times at that queue. Using an analysis similar to that used for deriving the MGF of the download time of chunk g in the last subsection, we obtain the MGF of this departure time, where ρ^{(c)}_{j,β_j} denotes the load intensity at queue β_j at the datacenter. To find the download time of the video segments from the second queue (at cache server j), we notice that the download time of segment g includes the waiting time to download all the previous segments and the idle time if segment g has not yet departed the first queue; we denote the corresponding waiting terms by U_{i,j,β_j,g,y}. With the resulting recursive equations from y = L_{j,i} to y = g, we obtain the required characterization, and similarly for y > L_{j,i}. It is easy to see that the moment generating function of U_{i,j,β_j,g,y} for y = L_{j,i} is given in closed form, where ρ^{(d)}_{j,β_j} is the load intensity at queue β_j at cache server j. Similarly, the moment generating function of U_{i,j,β_j,g,y} for y > L_{j,i} is given as in (68). We further note that these moment generating functions are only defined when the corresponding MGFs exist. C. Play Times of the Different Segments Next, we find the play time of the different video segments. Recall that D^{(g)}_{i,j,β_j,ν_j} is the download time of segment g from the ν_j and β_j queues at client i. We further define T^{(g)}_{i,j,β_j,ν_j} as the time that segment g begins to play at client i, given that it is downloaded from the β_j and ν_j queues. The start-up delay of the video is denoted by d_s. Then, the first segment is ready for play at the maximum of the startup delay and the time that the first segment can be downloaded. This means T^{(1)}_{i,j,β_j,ν_j} = max(d_s, D^{(1)}_{i,j,β_j,ν_j}) (71). For 1 < g ≤ L_i, the play time of segment g of video file i is given by the maximum of (i) the time to download the segment and (ii) the time at which the previous segment starts playing plus the play time of a segment (i.e., τ seconds). Thus, the play time of segment g of video file i, when requested from server j and from the ν_j and β_j queues, can be expressed as T^{(g)}_{i,j,β_j,ν_j} = max(T^{(g−1)}_{i,j,β_j,ν_j} + τ, D^{(g)}_{i,j,β_j,ν_j}). This results in a set of recursive equations, which further yield T^{(L_i)}_{i,j,β_j,ν_j} = max(T^{(L_i−1)}_{i,j,β_j,ν_j} + τ, D^{(L_i)}_{i,j,β_j,ν_j}) = max(T^{(L_i−2)}_{i,j,β_j,ν_j} + 2τ, D^{(L_i−1)}_{i,j,β_j,ν_j} + τ, D^{(L_i)}_{i,j,β_j,ν_j}) = · · · = max_z F_{i,j,β_j,ν_j,z}, where F_{i,j,β_j,ν_j,z} is expressed as in (74). We now obtain the MGFs of the F_{i,j,ν_j,β_j,z} to use in characterizing the play time of the different segments. Towards this goal, we plug equation (74) into E[e^{tF_{i,j,ν_j,β_j,z}}] and obtain an expression in which E[e^{tD^{(z−1)}_{i,j,β_j,ν_j}}] can be calculated using equation (53) when 1 < z ≤ (L_{j,i} + 1) and using equation (68) when z > L_{j,i} + 1. In the absence of stalls, the last segment should begin playing by time d_s + (L_i − 1)τ (the time at which the playing of segment L_i − 1 finishes). Thus, the difference between the play time of the last segment, T^{(L_i)}_{i,j,β_j,ν_j}, and d_s + (L_i − 1)τ gives the stall duration. We note that the stalls may occur before any segment, and hence this difference gives the sum of the durations of all the stall periods before any segment. Thus, the stall duration for the request of file i from the β_j queue, ν_j queue, and server j, i.e., Γ^{(i,j,β_j,ν_j)}_U, is given as in (76). Next, we use this expression to derive a tight bound on the SDTP.
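The play-time recursion and the resulting stall expression translate directly into a short simulation; the download-completion times below are synthetic and stand in for the queueing model.

```python
import numpy as np

# Play-time recursion: T(1) = max(d_s, D(1)),
# T(g) = max(T(g-1) + tau, D(g)), and the stall duration is
# Gamma = T(L) - d_s - (L-1)*tau.

def stall_duration(download_times, d_s, tau):
    T = max(d_s, download_times[0])
    for D in download_times[1:]:
        T = max(T + tau, D)
    return T - d_s - (len(download_times) - 1) * tau

rng = np.random.default_rng(0)
L, tau, d_s = 10, 4.0, 5.0
D = np.cumsum(rng.exponential(3.5, size=L))   # monotone completion times
print(f"stall duration: {stall_duration(D, d_s, tau):.2f} s")
```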
APPENDIX C PROOF OF LEMMA 3 From (18), we obtain (77). By taking the expectation of both sides of (77), we can write the bound in two cases, where the expectation in the second case is over the choice of (j, β_j, ν_j) in addition to the queue statistics with their arrival and departure rates. Since the arrivals of video file requests at the edge-cache are Poisson, the time until the first request for file i, i.e., t_i, is exponentially distributed with rate λ_i. By averaging over t_i, performing the integration, and simplifying the expressions, the result can be further simplified, where steps (a) and (b) follow by setting c = 1 − e^{−λ_i ω_i}, a = e^{−λ_i ω_i}, b = 1 − (λ_i/(λ_i + h_i))(1 − e^{−(λ_i + h_i)ω_i}), c̄ = c/b, and ā = a/b. We also recall that the expectation in E[e^{h_i Γ^{(i,j,β_j,ν_j)}_U}] is over the choice of (j, β_j, ν_j) and the queue arrival/departure statistics. APPENDIX D PROOF OF LEMMA 5 We have a chain of inequalities, where (a) follows from (62), and the inequality follows by replacing max_y(·) by Σ_y(·). Moreover, the last step follows from (68). Hence, we can write the desired bound, which proves the statement of the lemma. APPENDIX E PROOF OF THEOREM 1 The SDTP for the request of file i can be bounded using Markov's inequality as Pr(Γ^{(i)}_{tot} ≥ σ) ≤ e^{−h_i σ} E[e^{h_i Γ^{(i)}_{tot}}]. This can be further simplified to ≤ c̄ e^{−h_i σ} + ā e^{−h_i σ} E[e^{h_i Γ^{(i,j,β_j,ν_j)}_U}], where F_{i,j,β_j,ν_j,z} and D^{(v)}_{i,j,β_j,ν_j} are given in Appendix B in equations (74) and (53), respectively. Further, (c) follows from (19), (d) follows from (73) and by setting c̃ = c̄ e^{−h_i σ} and ã = ā e^{−h_i(σ + d_s + (L_i−1)τ)}, (e) follows by upper bounding the maximum by the sum, and (f) follows from (74). Using the two-stage probabilistic scheduling, the SDTP for video file i is further bounded by taking the expectation over the choice of (j, β_j, ν_j). Using Lemmas 4 and 5 for E[e^{h_i D^{(v)}_{i,j,β_j,ν_j}} | (j, β_j, ν_j)], we obtain the statement of the theorem, where step (f) follows by substitution of the moment generating functions, and the remaining steps use the sums of geometric and arithmetico-geometric sequences. Note that the edge-router subscript is omitted in the above derivation for simplicity. This proves the statement of the theorem. APPENDIX F SUB-PROBLEMS OPTIMIZATION In this section, we explain how each sub-optimization problem is solved. 1) Server-PSs Access Optimization: Given the bandwidth allocation weights, the cache placement, the edge-cache window sizes, and the auxiliary variables, this sub-problem can be written as follows. Input: h, w, ω, and L. Objective: min (35) s.t. (36)-(39), (41)-(44). Variables: π. In order to solve this problem, we use the iNner cOnVex Approximation (NOVA) algorithm proposed in [15]. The key idea of this algorithm is that the non-convex objective function is replaced by suitable convex approximations, at which convergence to a stationary solution of the original non-convex optimization is established. NOVA solves the approximated function efficiently and maintains feasibility in each iteration. The objective function can be approximated by a convex one (e.g., a proximal gradient-like approximation) such that the first-order properties are preserved [15], and this convex approximation can be used in the NOVA algorithm. Let Ũ(π; π^ν) be the convex approximation at iterate π^ν of the original non-convex problem U(π), where U(π) is given by (35). Then, a valid choice of Ũ(π; π^ν) is the first-order (proximal gradient-like) approximation of U(π), i.e., Ũ(π; π^ν) = ∇_π U(π^ν)^T (π − π^ν) + (τ_u/2)‖π − π^ν‖², where τ_u is a regularization parameter. Note that all the constraints (36)-(39) are separable and linear in π_{i,j,k}. The NOVA algorithm for optimizing π is described in Algorithm 1. Using the convex approximation Ũ(π; π^ν), the minimization steps in Algorithm 1 are convex, with linear constraints, and thus can be solved using a projected gradient descent algorithm. A step size γ is also used in the update of the iterate π^ν. Note that the iterates {π^ν} generated by the algorithm are all feasible for the original problem and, further, convergence is guaranteed, as shown in [15] and described in Lemma 6.
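A generic sketch of one NOVA outer iteration as used here: build the strongly convex surrogate, minimize it over a simple feasible set (a box below, standing in for the linear constraints), and take a step of size γ toward the surrogate minimizer. The objective gradient is a toy; the structure is what matters.

```python
import numpy as np

# Surrogate: U~(x; x_nu) = grad_U(x_nu)^T (x - x_nu)
#            + (tau/2)*||x - x_nu||^2.
# Its minimizer over a box is clip(x_nu - grad_U(x_nu)/tau).

def grad_U(x):                         # gradient of a toy non-convex objective
    return np.sin(x) + x * np.cos(x)

def nova_step(x_nu, tau=1.0, gamma=0.5, lo=0.0, hi=1.0):
    x_hat = np.clip(x_nu - grad_U(x_nu) / tau, lo, hi)   # solve surrogate
    return x_nu + gamma * (x_hat - x_nu)                 # damped update

x = np.full(4, 0.9)
for _ in range(50):
    x = nova_step(x)
print("stationary point estimate:", np.round(x, 4))
```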
In order to use NOVA, there are some assumptions (given in [15]) that have to be satisfied by both the original function and its approximation. These assumptions can be classified into two categories. The first category is the set of conditions that ensure that the original problem and its constraints are continuously differentiable on the domain of the function, which are satisfied in our problem. The second category is the set of conditions that ensures that the approximation of the original problem is uniformly strongly convex on the domain of the function. The latter set of conditions is also satisfied, as the chosen function is strongly convex and its domain is convex. To see this, we need to show that the constraints (37)-(41) form a convex domain in π, which is easy to see from the linearity of the constraints. Further details on the assumptions and the function approximation can be found in [15]. Thus, the following result holds. Lemma 6. For fixed h, w, ω, and L, the optimization of our problem over π generates a sequence of decreasing objective values and therefore is guaranteed to converge to a stationary point. 2) Auxiliary Variables Optimization: Given the scheduling probabilities, the bandwidth allocation weights, the edge-cache window sizes, and the cache placement, this sub-problem can be written as follows. Input: π, w, ω, and L. Objective: min (35) s.t. (41)-(44). Variables: h. Lemma 7. The constraints (42)-(44) are convex in h. Proof. The proof is given in Appendix G. Algorithm 2 shows the procedure used to solve for h. Let Ũ(h; h^ν) be the convex approximation at iterate h^ν of the original non-convex problem U(h), where U(h) is given by (35), assuming the other parameters are constant. Then, a valid choice of Ũ(h; h^ν) is the first-order approximation of U(h), where τ_h is a regularization parameter. The detailed steps can be seen in Algorithm 2. Since all the constraints (41)-(44) have been shown to be convex in h, the optimization problem in Step 1 of Algorithm 2 can be solved by the standard projected gradient descent algorithm. Lemma 8. For fixed π, w, ω, and L, the optimization of our problem over h generates a sequence of monotonically decreasing objective values and therefore is guaranteed to converge to a stationary point. 3) Bandwidth Allocation Weights Optimization: Given the auxiliary variables, the server access and PSs selection probabilities, the edge-cache window sizes, and the cache placement, this sub-problem can be written as follows. Input: π, L, ω, and h. Objective: min (35). Variables: w. Lemma 9. The MGF-existence constraints are convex in the bandwidth allocation weights w. Proof. The proof is given in Appendix G. Algorithm 3 shows the procedure used to solve for w. Let Ũ_w(w; w^ν) be the convex approximation at iterate w^ν of the original non-convex problem U(w), where U(w) is given by (35), assuming the other parameters are constant. Then, a valid choice of Ũ_w(w; w^ν) is the first-order approximation of U(w), i.e., (91), where τ_t is a regularization parameter. The detailed steps can be seen in Algorithm 3. Since all the constraints have been shown to be convex, the optimization problem in Step 1 of Algorithm 3 can be solved by the standard projected gradient descent algorithm. Lemma 10. For fixed π, h, ω, and L, the optimization of our problem over w generates a sequence of decreasing objective values and therefore is guaranteed to converge to a stationary point.
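For intuition on sub-problem (ii) above, each auxiliary variable h_i lives in the interval where the MGFs exist and can be tuned to tighten the per-file bound; the sketch below does this by grid search, with an assumed MGF-like term in place of the paper's full expression.

```python
import numpy as np

# Per-file bound of the form c~ * exp(-h*sigma) * M(h), minimized over
# h in (0, alpha), the MGF-existence interval. M is a toy stand-in.

def per_file_bound(h, sigma, alpha):
    M = (alpha / (alpha - h)) ** 4        # assumed MGF-like term
    return np.exp(-h * sigma) * M

def best_h(sigma, alpha, grid=500):
    hs = np.linspace(1e-4, alpha * 0.999, grid)
    vals = per_file_bound(hs, sigma, alpha)
    return hs[np.argmin(vals)], vals.min()

h_star, val = best_h(sigma=10.0, alpha=1.0)
print(f"h* = {h_star:.3f}, bound = {val:.4e}")
```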
4) Cache Placement Optimization: Given the auxiliary variables, the server access and PS selection probabilities, the edge-cache window sizes, and the bandwidth allocation weights, this sub-problem can be written as follows. Input: π, h, ω, and w. Objective: min (35). Variables: L. Algorithm 3: NOVA Algorithm to solve the Bandwidth Allocation Optimization sub-problem. 1) Initialize ν = 0, γ^ν ∈ (0, 1], ε > 0, and w^0 such that w^0 is feasible. 2) While obj(ν) − obj(ν − 1) ≥ ε: 3) // Solve for w^{ν+1} with given w^ν. 4) Step 1: Compute ŵ(w^ν), the solution of ŵ(w^ν) = argmin_w Ũ(w; w^ν), s.t. (36)-(38), (41)-(44), using projected gradient descent. 5) Step 2: Update w^{ν+1} = w^ν + γ^ν(ŵ(w^ν) − w^ν) and set ν ← ν + 1. Algorithm 4: NOVA Algorithm to solve the Cache Placement Optimization sub-problem, with the analogous Step 1 and Step 2 applied to L. Algorithm 4 shows the procedure used to solve for L. Let Ũ_L(L; L^ν) be the convex approximation at iterate L^ν of the original non-convex problem U(L), where U(L) is given by (35), assuming the other parameters are constant. Then, a valid choice of Ũ_L(L; L^ν) is the first-order approximation of U(L), i.e., (92), where τ_L is a regularization parameter. The detailed steps can be seen in Algorithm 4. Since all the constraints have been shown to be convex in L, the optimization problem in Step 1 of Algorithm 4 can be solved by the standard projected gradient descent algorithm. Lemma 11. For fixed h, π, ω, and w, the optimization of our problem over L generates a sequence of monotonically decreasing objective values and therefore is guaranteed to converge to a stationary point. 5) Edge-cache Window Size Optimization: Given the server access and PS selection probabilities, the bandwidth allocation weights, the cache placement, and the auxiliary variables, this sub-problem can be written as follows. Algorithm 5 shows the procedure used to solve for ω. Let Ũ_ω(ω; ω^ν) be the convex approximation at iterate ω^ν of the original non-convex problem U(ω), where U(ω) is given by (35), assuming the other parameters are constant. Then, a valid choice of Ũ_ω(ω; ω^ν) is the first-order approximation of U(ω), i.e., (93), where τ_ω is a regularization parameter. The detailed steps can be seen in Algorithm 5. Since all the constraints have been shown to be convex in ω, the optimization problem in Step 1 of Algorithm 5 can be solved by the standard projected gradient descent algorithm. Lemma 12. For fixed h, π, L, and w, the optimization of our problem over ω generates a sequence of monotonically decreasing objective values and therefore is guaranteed to converge to a stationary point. Proof. The proof is provided in Appendix G. APPENDIX G A. Proof of Lemma 7 The constraints (42)-(44) are separable for each h_i, and due to the symmetry of the three constraints, it is enough to prove the convexity of E(h) = Σ_{f=1}^{r} π_{f,j} q_{f,j,β_j} λ_f e^{−λ_i ω_i} (α/(α − h_i))^{L_f − L_{j,f}} − Λ_{j,β_j} + h_i, assuming a fixed edge router, without loss of generality. Thus, it is enough to prove that E″(h) ≥ 0. We further note that it is enough to prove that D″(h) ≥ 0, where D(h) = (α/(α − h))^{L_f − L_{j,f}}. This follows since each summand in E(h) is a positive multiple of D(h). B. Proof of Lemma 9 The constraint (42) and its two counterparts involve the service rates α^{(c)}_{j,β_j}, α^{(d)}_{j,β_j}, and α^{(e)}_{j,ν_j}, respectively. Note that we omit the edge-router subscript for simplicity, w.l.o.g. Thus, it is enough to prove the convexity of the corresponding three terms for h < α^{(c)}_{j,β_j}, h < α^{(d)}_{j,β_j}, and h < α^{(e)}_{j,ν_j}, respectively. Since there is only a single index j, β_j, and ν_j here, we ignore the subscripts and superscripts for the rest of this proof and prove only one case due to the symmetry. Thus, it is enough to prove that E₁″(α) ≥ 0 for h < α. We further note that it is enough to prove that D₁″(α) ≥ 0, where D₁(α) = (1 − h/α)^{L_{j,i}−L_i}. This holds since the second derivative is nonnegative on the stated domain. APPENDIX H KEY NOTATIONS USED IN THIS PAPER The key notation used in this paper is shown in Table III.
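The projected gradient descent steps in Algorithms 1-5 require projections onto the feasible sets; as one example, a projection onto an assumed cache-capacity constraint of the form {0 ≤ L_i ≤ L_i^max, Σ_i L_i ≤ C} can be computed by bisection on the dual variable, as sketched below.

```python
import numpy as np

# Closest-norm projection onto {0 <= L <= L_max, sum(L) <= C}.
def project_capacity(L, L_max, C, iters=60):
    if np.clip(L, 0, L_max).sum() <= C:
        return np.clip(L, 0, L_max)
    lo, hi = 0.0, L.max()
    for _ in range(iters):
        tau = 0.5 * (lo + hi)
        s = np.clip(L - tau, 0, L_max).sum()
        lo, hi = (tau, hi) if s > C else (lo, tau)
    return np.clip(L - 0.5 * (lo + hi), 0, L_max)

L_max = np.array([10., 10., 20., 30.])
L_cand = np.array([9., 8., 18., 25.])   # infeasible for C = 40
L_proj = project_capacity(L_cand, L_max, C=40.)
print(L_proj, L_proj.sum())
```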
APPENDIX I MEAN STALL DURATION In this section, a bound for the mean stall duration, for any video file i, is provided. Since probabilistic scheduling is one feasible strategy, the obtained bound is an upper bound to the optimal strategy. Using equation (76), the stall duration for the request of file i from the β_j queue, ν_j queue, and server j, Γ^{(i,j,β_j,ν_j)}_U, is given as before. An exact evaluation of the play time of segment L_i is hard due to the dependencies between the F_{i,j,ν_j,β_j,z} random variables (i.e., equation (74)) for different values of j, ν_j, β_j, and z, where z ∈ {1, 2, ..., L_i + 1}. Hence, we derive an upper bound on the play time of segment L_i as follows. Using Jensen's inequality [44], we have, for g_i > 0, E[T^{(L_i)}_i] ≤ (1/g_i) log E[e^{g_i T^{(L_i)}_i}]. Thus, finding an upper bound on the moment generating function of T^{(L_i)}_i will lead to an upper bound on the mean stall duration. Hence, we will now bound the moment generating function of T^{(L_i)}_i. Using equation (88), we can bound this MGF, and substituting (100) in (99), the mean stall duration is bounded accordingly, provided ρ_{j,ν_j} < 1 and the involved MGFs exist, ∀j, ν_j, β_j. We note that for the scenario where the files are downloaded rather than streamed, a metric of interest is the mean download time. This is a special case of our approach when the number of segments of each video is one, i.e., L_i = 1. Thus, the mean download time of the file follows as a special case of Theorem 3. APPENDIX J ONLINE ALGORITHM FOR EDGE-CACHE PLACEMENT We note that for the setup of the edge cache, we assumed that the edge-cache has a capacity of C_e seconds (ignoring the index of the edge cache). However, in the caching policy, we assumed that a file f is removed from the edge cache if it has not been requested in the last ω_f seconds. In the optimization, we found the parameters ω_f such that the cache capacity is exceeded with probability less than ε. However, this still allows the cache capacity to be exceeded at times, which is not possible in practice. Thus, we propose a mechanism to adapt the decisions obtained from the optimization formulation so as to never exceed the edge-cache capacity. When a file i is requested, the last request time of file i is first checked. If it has not been requested in the last ω_i seconds, it is obtained from the CDN. In order to do that, space for the file is reserved in the edge-cache. If this reservation exceeds the capacity of the edge-cache, certain files have to be removed. Any file f that has not been requested in the last ω_f seconds is removed from the cache. If, even after removing these files, the space in the edge-cache is not enough for placing file i, more files must be removed. Assume that H is the set containing all files in the edge-cache, and t_{f,lt} is the last time file f was requested. Then, if another file needs to be removed to make space for the newly requested file, the file argmin_{f∈H} (t_{f,lt} + ω_f − t_i) is removed. This continues until there is enough space for the new incoming file. Note that multiple files may be removed to make space for the incoming file, depending on the length of the new file. This is similar in concept to LRU, where a complete new file is added to the cache and multiple small files may have to be removed to make space. The key part of the online adaptation that avoids violating the edge-cache capacity constraint is illustrated in Figure 8, which shows the online updates for the edge-cache when a file i is requested at time t_i.
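The online adaptation of Appendix J can be sketched as a small cache class; file sizes and the capacity are in seconds as above, and the eviction rule is the argmin of t_{f,lt} + ω_f (subtracting the constant t_i does not change the argmin). Names and the toy trace are illustrative.

```python
# Online edge-cache: a file lives for omega[f] seconds after its last
# request; on a miss, expired files are removed first, then files with the
# smallest t_last[f] + omega[f], until the new file fits.

class EdgeCache:
    def __init__(self, capacity, omega, size):
        self.capacity, self.omega, self.size = capacity, omega, size
        self.t_last = {}                       # file -> last request time

    def request(self, f, now):
        if f in self.t_last and now - self.t_last[f] <= self.omega[f]:
            self.t_last[f] = now
            return "hit"                       # served from edge-cache
        for g in [g for g, t in self.t_last.items()
                  if now - t > self.omega[g]]: # drop expired files
            del self.t_last[g]
        while self.t_last and (sum(self.size[g] for g in self.t_last)
                               + self.size[f] > self.capacity):
            victim = min(self.t_last,
                         key=lambda g: self.t_last[g] + self.omega[g])
            del self.t_last[victim]            # argmin eviction rule
        self.t_last[f] = now
        return "miss"

cache = EdgeCache(capacity=130,
                  omega={"a": 50, "b": 20}, size={"a": 60, "b": 60})
print(cache.request("a", 0), cache.request("b", 5), cache.request("a", 10))
```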
APPENDIX K EDGE-CACHE PERFORMANCE AND FURTHER EVALUATION Convergence of the proposed algorithm: Figure 9 shows the convergence of our proposed SDTP algorithm, which alternately optimizes the weighted SDTP of all files over the scheduling probabilities π, the auxiliary variables t, the bandwidth allocation weights w, the cache server placement L, and the window sizes ω_i. We see that for r = 500 video files of size 600 s with m = 5 cache storage nodes, the weighted stall duration tail probability converges within a few iterations. Effect of scaling up the bandwidth of the cache servers and datacenter: The effect of increasing the server bandwidth on the weighted SDTP is plotted in Figure 10. Intuitively, increasing the storage node bandwidth will increase the service rate of the storage nodes by assigning higher bandwidth to the users, thus reducing the weighted SDTP. Effect of the bound percentage ε in the SDTP: Figure 11 plots the weighted SDTP versus ε, i.e., the probability that the cache size is exceeded. We see that the SDTP increases significantly with an increase in ε. This is because as ε increases, there are more edge capacity constraint violations, and the online adaptations may not remain the optimal choice. In the following figures, a trace-based implementation is performed, where the video IDs, request times, video lengths, etc., are obtained from one-week traces of a production system of a major service provider in the US. We note that the arrival process is not Poisson in this case, while the proposed approach still outperforms the considered baseline approaches. Effect of the arrival rates on the TTFC: Figure 12 shows the effect of different video arrival rates on the TTFC (time to the first chunk) for different video lengths. The different sizes of the video files are obtained from real traces of a major video service provider. We compared our proposed online algorithm with the analytical offline bound and LRU-based policies (explained in Section VI-B). We see that the TTFC increases with the arrival rates, as expected; however, since the TTFC is more significant at high arrival rates, we notice a significant improvement in the download time of the first chunk, by about 60% at the highest arrival rate in Figure 12 as compared to the LRU policy. Effect of arrival rates on the MSD: The effect of different video arrival rates on the mean stall duration for different video lengths is captured in Figure 13. We compared our proposed online algorithm with five baseline policies, and we see that the proposed algorithm outperforms all baseline strategies on the QoE metric of mean stall duration. Thus, the bandwidth, the size of the time window, and the access and placement of files in the storage caches are important for the reduction of the mean stall duration. Further, the mean stall duration obviously increases with the arrival rates, as expected. Since the mean stall duration is more significant at high arrival rates, we notice a significant improvement in the mean stall duration (from approximately 15 s to about 5 s) at the highest arrival rate in Figure 13 as compared to the LRU policy. Effect of edge-cache capacity: We study the miss-rate (the percentage of video file requests that are not served from the edge-cache) performance of the edge-cache. Clearly, the miss-rate decreases with increasing edge-cache capacity. However, when the edge-cache capacity is approximately 35% of the total video sizes, the miss-rate is around 20%.
Further, the adaptSize policy neither optimizes the time-to-live windows ω_i of the files nor intelligently incorporates the arrival rates in adding/evicting the video files, and thus its performance is less sensitive to varying the cache size. The variant versions of LRU (qLRU with q = 0.67, and kLRU and kRandom with k = 6) achieve performance close to that of basic LRU, with kLRU performing the best among them, as it effectively maintains a window (of k requests) for admitting a file into the cache while adopting the LRU policy in the eviction process. APPENDIX L JOINT MEAN-TAIL OPTIMIZATION We wish to jointly minimize the two QoE metrics (MSD and SDTP) over the choice of server-PSs scheduling, bandwidth allocation, edge-cache window sizes, and auxiliary variables. Since this is a multi-objective optimization, the objective can be modeled as a convex combination of the two QoE metrics. The first objective is the minimization of the mean stall duration, averaged over all the file requests, and is given as Σ_{i,ℓ} (λ_{i,ℓ}/λ̄) E[Γ^{(i,ℓ)}]. The second objective is the minimization of the stall duration tail probability, averaged over all the video file requests, and is given as Σ_{i,ℓ} (λ_{i,ℓ}/λ̄) Pr(Γ^{(i,ℓ)} ≥ x). Using the expressions for the mean stall duration and the stall duration tail probability, respectively, the optimization of a convex combination of the two QoE metrics can be formulated as follows, with the MGF-existence constraints now also imposed on the variables g_i, ∀i, j, ν_j (108), over the variables π, q, p, h, g, w^{(c)}, w^{(d)}, w^{(e)}, L, ω. Clearly, the above optimization problem is non-convex in all the parameters jointly. This can be easily seen in the terms that are products of the different variables. Since the problem is non-convex, we propose an iterative algorithm to solve it. This algorithm performs an alternating optimization over the different aforementioned dimensions, such that each sub-problem is shown to have convex constraints and thus can be efficiently solved using the NOVA algorithm [15]. The sub-problems are explained in detail in Appendix F. Mean-Tail tradeoff: There is a tradeoff between the MSD and the SDTP. Hence, we now investigate this tradeoff in order to get a better understanding of how it can be balanced. To do so, we vary θ in the above optimization problem to obtain a tradeoff between MSD and SDTP. Intuitively, if the mean stall duration decreases, the stall duration tail probability also reduces, as depicted in Figure 15. Therefore, a question arises: is the optimal point for decreasing the mean stall duration the same as that for the stall duration tail probability? Based on our real video traces, we answer this question in the negative, since we find that at the design values that optimize the mean stall duration, the stall duration tail probability is more than 10 times higher than the optimal stall duration tail probability. Similarly, the optimal mean stall duration is 7 times lower than the mean stall duration at the design values that optimize the stall duration tail probability. As a result, an efficient tradeoff point between the two QoE metrics can be chosen based on the point on the curve that is appropriate for the clients.
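The θ-sweep behind Figure 15 can be illustrated with a toy one-dimensional design space; in the paper, each value of θ instead triggers a full run of the alternating optimization. The differing minimizers below mirror the observation that the MSD-optimal and SDTP-optimal design points differ.

```python
import numpy as np

# Convex combination theta*MSD + (1-theta)*SDTP over a toy 1-D design x.
def msd(x):  return (x - 1.0) ** 2 + 2.0          # toy mean stall duration
def sdtp(x): return 0.5 * (x - 3.0) ** 2 + 0.1    # toy tail-probability proxy

xs = np.linspace(0.0, 4.0, 2001)
for theta in (0.0, 0.25, 0.5, 0.75, 1.0):
    obj = theta * msd(xs) + (1 - theta) * sdtp(xs)
    x_star = xs[np.argmin(obj)]
    print(f"theta={theta:.2f}: x*={x_star:.2f}  "
          f"MSD={msd(x_star):.2f}  SDTP-proxy={sdtp(x_star):.2f}")
```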
APPENDIX M EXTENSION TO DIFFERENT QUALITY LEVELS In this section, we show how our analysis can be extended to cover the scenario where the video can be streamed at different quality levels. We assume that each video file is encoded at different qualities, i.e., Q ∈ {1, 2, · · · , V}, where V is the number of possible choices for the quality level. The L_i chunks of video file i at quality Q are denoted as G_{i,Q,1}, · · · , G_{i,Q,L_i}. We use a probabilistic quality assignment strategy, where a chunk of quality Q, of size a_Q, is requested with probability b_{i,Q} for all Q ∈ {1, 2, · · · , V}. We further assume that all the chunks of a video are fetched at the same quality level. From Section III-D, we can show that for a file of quality Q requested from an edge router, we choose server j with probability π^{(Q)}_{i,j}. Further, we can show that the aggregate arrival rate at PS
Making fair comparisons in pregnancy medication safety studies: An overview of advanced methods for confounding control Abstract Understanding the safety of medication use during pregnancy relies on observational studies; however, confounding in observational studies poses a threat to the validity of estimates obtained from observational data. Newer methods, such as marginal structural models and propensity calibration, have emerged to deal with complex confounding problems, but these methods have seen limited uptake in the pregnancy medication literature. In this article, we provide an overview of newer advanced methods for confounding control and show how these methods are relevant for pregnancy medication safety studies. | INTRODUCTION More than half of all pregnant women in Western countries take medication during pregnancy, 1-3 making studies of medication safety a pressing public health concern. Studying medication safety in pregnancy presents particular challenges: Effects of medications on fetal development can be unpredictable, vulnerability to exposure changes during pregnancy, and outcomes may occur early in fetal development but be detected later. 4 In the general population, knowledge of medication efficacy and safety is primarily based on randomized controlled trials. However, randomized trials routinely exclude pregnant women due to uncertainties about the effects of medications on fetal development, meaning that studies of medication safety in pregnancy must rely on reproductive toxicity studies in animals and on observational data in humans. Several landmark cases, such as the thalidomide disaster, have taught us that animal models for teratogenicity do not necessarily translate to humans. Observational studies, using data from cohort studies, registries, and administrative databases, 5 are opportunities for understanding the risks of medication use during pregnancy, and it is acknowledged that observational studies are the best method for assessing the maternal and fetal safety of using medication during pregnancy. 6 However, confounding is a major source of bias in observational studies. Recent years have seen the rapid development of advanced methods for dealing with confounding, yet uptake of these methods has been slow in the pregnancy medication literature. This is unfortunate, because in this field it is arguably especially important that researchers use the best methods for confounding control: The consequences of getting the wrong answer are profound. Failing to detect true effects of medication exposure can have enormous effects in the population, and falsely raising the alarm for a safe drug can result in women forgoing needed therapies and, in some cases, terminating wanted pregnancies. 6 In this paper, we advocate for a greater use of advanced methods for confounding control in the pregnancy medication safety research field and provide an overview of these methods under the following framework: 1. How does this method help us to make fair comparisons between the exposed and unexposed groups? 2. How has this method been applied in the pregnancy medication literature? 3. How is the method used in practice? 4. What are the important assumptions for this method? 5. What are the major strengths and limitations of the method? Table 1 provides an outline of pregnancy medication studies using advanced methods to deal with confounding. This paper gives a useful reference for both students and experienced researchers who wish to gain new skills in advanced methods for confounding control.
| CONFOUNDING IN PREGNANCY MEDICATION STUDIES Confounding control begins with a review of the literature and consultation with subject-area experts. Directed acyclic graphs (DAGs) provide a graphical means to represent the causal structure the investigator believes is present 7 and guide study design, data collection, and analysis. Figure 1 is an example DAG showing one possible causal model for prenatal antidepressant exposure and childhood neurodevelopment, with potential biasing paths, including confounders (other psychiatric illness, other psychiatric medication use, depression severity, and genetics), which should be controlled as far as possible, as well as a mediator (gestational age) and a collider (live birth). Several nonbiasing paths, including a risk factor for the outcome that is unrelated to the exposure (child gender) and a predictor of exposure that is unrelated to the outcome (prepregnancy antidepressant use), are also shown. Obtaining unbiased effect estimates requires investigators to identify and control confounding, while avoiding bias from inappropriate control for colliders and mediators, and avoiding loss of precision or confusing interpretation of estimates arising from control for factors related only to the exposure or only to the outcome. 8 The Supporting Information contains a more comprehensive review of definitions of confounding, counterfactuals, and causal inference. | Methods for measured confounders In Box 1 (Supporting Information), we include a simplified illustration of confounding by measured factors and the methods to address confounding. Confounder summary scores and marginal structural models The propensity score, which is the probability of exposure given observed confounders, 9 reduces a large set of confounders to a single summary score. Propensity scores are commonly used in the medical literature; however, other summary score methods, including disease risk scores 10 (preferred in the case of rare exposures) and polygenic risk scores 11 (useful when genetic confounding is a concern), are available. Propensity scores are typically constructed using multivariable logistic regression, where exposure is the dependent variable and confounders are the independent variables. The PS model should include variables that are confounders or predictors of the outcome; inclusion of factors that are only predictors of exposure will increase variance without decreasing bias. 12 High-dimensional PSs, which include thousands of variables identified through computational algorithms, may also be useful for adjusting for unmeasured confounders, if the measured variables are partial proxies for the unmeasured confounders. 13 The PS can be used to match, stratify, adjust, or weight the outcome model; a sketch of PS estimation and weighting is given below. Propensity scores, including high-dimensional PSs, have seen increased uptake in the pregnancy literature, eg, in safety studies on ondansetron, 14 lithium, 15 antidepressants, 16 and statins 17 in pregnancy, but their use is still minimal compared to multivariable regression. KEY POINTS • Studies of the safety of medication use during pregnancy depend mainly on observational studies, which are subject to confounding bias. • Novel methods for confounding control have seen limited uptake in the pregnancy medication safety literature. • Application of novel methods is necessary to appropriately address the complex confounding scenarios found in pregnancy studies.
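As referenced above, a minimal sketch of PS estimation and inverse-probability-of-treatment weighting on synthetic data follows; the simulated confounding structure and variable names are illustrative, not from any real pregnancy cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
severity = rng.normal(size=n)                      # confounder
other_rx = rng.binomial(1, 0.2, size=n)            # confounder
exposure = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * severity + 0.5 * other_rx))))
outcome = rng.binomial(1, 1 / (1 + np.exp(-(0.6 * severity - 2.0))))  # no true effect

# PS: logistic regression of exposure on confounders
X = np.column_stack([severity, other_rx])
ps = LogisticRegression().fit(X, exposure).predict_proba(X)[:, 1]

# stabilized inverse-probability-of-treatment weights
p_exp = exposure.mean()
w = np.where(exposure == 1, p_exp / ps, (1 - p_exp) / (1 - ps))

crude = outcome[exposure == 1].mean() - outcome[exposure == 0].mean()
weighted = (np.average(outcome[exposure == 1], weights=w[exposure == 1])
            - np.average(outcome[exposure == 0], weights=w[exposure == 0]))
print(f"crude risk difference:    {crude:+.4f}")
print(f"weighted risk difference: {weighted:+.4f}  (true effect is 0)")
```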
Strengths and limitations PS methods are especially useful when working with a common treatment and a rare outcome. They also separate the design of the study (modeling confounding) from modeling the outcome. 18 However, for rare exposures, summary scores do not perform particularly well. 19 Marginal structural models (MSMs) address time-varying exposure and time-varying confounding, in which a confounder of later exposure is itself affected by earlier exposure (illustrated in Figure S1A). For example, when studying the safety of antidepressants, we may wish to control for depression severity. However, antidepressant use in earlier pregnancy predicts depressive symptoms in later pregnancy, which will in turn predict subsequent antidepressant use. Standard adjustments for depression severity will always be biased in this scenario. Central to the MSM is the inverse probability of treatment weight. At each measurement time t, the investigator uses logistic regression to construct the numerator (probability of exposure given baseline predictors and exposure history) and the denominator (probability of exposure given baseline predictors, exposure history, and time-varying confounders at time t − 1). 24 The total weight is the product of the weights at each time point, and analyses are conducted in the weighted population, or pseudo-population, in which individuals who are likely to be exposed are downweighted, while those who are unlikely to be exposed are upweighted, producing balance of measured confounders within strata of exposure; a sketch of this weight construction follows below. Use of MSMs for pregnancy medication safety studies remains rare, 25,26 despite examples where the timing of exposure is of great importance and exposure is conditional on time-varying confounders, such as other medication use or changes in disease severity. Assumptions Under assumptions of positivity, exchangeability, and consistency, the MSM will give an unbiased estimate of the effect of the exposure on the outcome. These assumptions are not formally testable, although assessment of the positivity assumption may include evaluation of the inverse probability of treatment weights for extreme weights and progressive truncation of the weights to determine whether extreme weights are highly influential. 27 When important confounders are unmeasured or incompletely measured, MSM methods will not provide unbiased effect estimates. Strengths and limitations The key strength of the MSM is that it allows consideration of time-varying exposure and confounding, which is highly relevant in pregnancy research due to the changes in fetal vulnerability through the course of pregnancy and the tendency of women to change their medication use during pregnancy. 28,29 However, when the treatment-covariate association is very strong, MSMs can produce very wide confidence intervals, which fail to include the true effect. 27
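The stabilized weight construction described above can be sketched as follows on synthetic data with two time points; column names and the data-generating model are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, T = 4000, 2
baseline = rng.normal(size=n)
weights = np.ones(n)
prev_exp = np.zeros(n)

for t in range(T):
    tv_conf = rng.normal(size=n) + 0.5 * prev_exp   # time-varying confounder
    logit = 0.5 * baseline + 0.8 * tv_conf + 0.6 * prev_exp
    exposure = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    num_X = np.column_stack([baseline, prev_exp])    # numerator: no tv_conf
    den_X = np.column_stack([baseline, prev_exp, tv_conf])
    p_num = LogisticRegression().fit(num_X, exposure).predict_proba(num_X)[:, 1]
    p_den = LogisticRegression().fit(den_X, exposure).predict_proba(den_X)[:, 1]

    # per-time stabilized weight ratio; total weight is the product over t
    ratio = np.where(exposure == 1, p_num / p_den, (1 - p_num) / (1 - p_den))
    weights *= ratio
    prev_exp = exposure

print("weight summary: mean=%.3f  max=%.2f" % (weights.mean(), weights.max()))
# extreme weights would prompt truncation, as discussed above
```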
| Methods for incomplete confounder data Failure to adjust for unmeasured confounders results in biased effect estimates (Figure S1B). In some situations, the confounder of interest was not measured in the original dataset but was measured in a similar sample. In this scenario, confounder adjustment is possible, even if the outcome has not been measured in this sample, using PS calibration. [30][31][32] Propensity score calibration is a method based on regression calibration 33 that offers an additional advantage over other methods of calibration, 34 by allowing for adjustment for multiple confounders. For example, in a study of triptan safety, we used a cross-sectional study to jointly adjust estimates for migraine severity and type. 35 In this method, 2 PSs must be calculated: the error-prone PS (estimated in both the main and validation studies, including only the confounders available in the main study) and the gold-standard PS (estimated in the validation study, including all confounders). The outcome model is fitted using the difference between the error-prone and gold-standard PSs to calibrate effect estimates. Assumptions In addition to the assumptions of PS models, outlined previously, PS calibration also assumes that the validation sample is a reasonable stand-in for the main sample and that the measurement error model is correctly specified. 30,31 Propensity score calibration also assumes surrogacy, meaning that the error-prone PS is an adequate surrogate for the gold-standard PS. 36 If the outcome is not measured in the validation study, the surrogacy assumption is not testable. Violations of surrogacy occur when the direction of confounding differs between the main and validation studies, 30 and bias arising from violations of surrogacy can be predicted. 36 Other methods exist for unmeasured confounding, including weighting by the inverse probability of missingness, as well as standard imputation techniques, and a comparison of these methods with PS calibration showed little material difference in bias reduction. 37 Strengths and limitations The main strength of PS calibration is that it allows for adjustment for multiple unmeasured confounders. However, calibration methods fail when unmeasured confounding is strong, and violations of the surrogacy assumption may result in increased bias. | Methods for unmeasured confounding Information on confounders may be too difficult to measure (eg, family environment or parenting style) or too costly (eg, deep-sequencing genetic data). The methods discussed below exploit aspects of observational data to control for measured and unmeasured confounders. | Sibling comparison designs If the unmeasured confounders are shared between siblings (see Figure S1C for illustration), then examining siblings with discordant exposure allows researchers to remove bias from shared confounders. [38][39][40] If, for example, we believe that any differences in autism risk between children with and without prenatal exposure to antidepressants are due to inherited genetic risk, then comparing autism diagnoses between pairs of siblings with different prenatal exposure should be less biased than comparing autism risk between unrelated exposed and unexposed groups; a sketch of such a within-family comparison follows below. There has been substantial uptake of sibling study designs in the pregnancy medication safety literature in recent years, particularly in studies examining the safety of antidepressants, where the main concern is separating the underlying genetic and familial components of depression from exposure to antidepressant medications. 41,42 Assumptions Use of sibling designs is most appropriate when confounders that are shared between siblings are more important than unshared confounders, 39 and when there are no carryover effects between siblings. 43 Strengths and limitations Sibling designs control measured and unmeasured confounding that is shared between siblings. However, failing to control for unshared confounders increases bias; sibling studies are also more vulnerable to bias from measurement error than nonsibling studies. 39
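The within-family comparison described above can be sketched on synthetic sibling pairs with a shared confounder and no true exposure effect; names are illustrative.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
fam = np.repeat(np.arange(2000), 2)                 # 2000 sibling pairs
family_risk = np.repeat(rng.normal(size=2000), 2)   # shared confounder
exposure = rng.binomial(1, 1 / (1 + np.exp(-family_risk)))
outcome = rng.binomial(1, 1 / (1 + np.exp(-(family_risk - 1.0))))  # no true effect

df = pd.DataFrame({"family": fam, "exposed": exposure, "outcome": outcome})
# keep only families whose siblings are discordant on exposure
disc = df.groupby("family").filter(lambda g: g["exposed"].nunique() == 2)

within = (disc[disc.exposed == 1].groupby("family")["outcome"].mean().values
          - disc[disc.exposed == 0].groupby("family")["outcome"].mean().values)
print(f"discordant pairs: {disc.family.nunique()}, "
      f"within-pair risk difference: {within.mean():+.4f} (true effect 0)")
```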
| Instrumental variable analysis An instrumental variable is a factor that is associated with the exposure but affects the outcome only through the exposure and shares no common causes with the outcome (Figure S1D). Strengths and limitations Instrumental variable analyses control measured and unmeasured confounding, and so instruments that meet all the assumptions will mimic the results from a randomized trial. However, estimates are highly sensitive to violations of untestable assumptions, and violations may produce bias amplification. 44 A reference to selected software for the methods discussed in this paper is included as part of the Supporting Information. With few exceptions, these methods have seen slow uptake in the pregnancy medication literature. This may be due to a sense of caution about methods that can seem opaque upon first encounter with the methods paper describing the technique. Caution is necessary when applying novel methods. However, it is also true that standard regression methods require assumptions similar to those of the methods discussed in this paper. If readers find that their research question fits well with one of the scenarios described in this paper, we suggest approaching the problem by tackling the citations given for the technique. The techniques we describe in this paper have their roots in standard regression techniques and can be implemented with standard software. | DISCUSSION While this paper focuses on bias due to confounding, other sources of bias such as exposure and/or outcome misclassification 51 and selection bias, 52 as well as seasonal effects, 53 can also distort associations. This paper is not intended to be an exhaustive discussion of all possible methods for confounding control. New techniques are being developed all the time, and many of these, such as g-estimation 54,55 and targeted maximum likelihood estimation, 56 have not yet been implemented in the pregnancy medication literature. Quantitative bias analysis can help researchers account for bias from systematic errors in their data. 57 Further, the methods discussed herein are not mutually exclusive and can be used in combination with each other: Combining PSs with IVs 46 or MSMs with quantitative bias analysis 25 gives more information about the probable range of effect estimates than any single method. Observational studies are vital to our understanding of medication safety in pregnancy, but great care must be taken in the analysis and interpretation of data to minimize confounding and bias in all studies. ETHICS STATEMENT The authors state that no ethical approval was needed.
The protection effect and mechanism of hyperbaric oxygen therapy in rat brain with traumatic injury

Purpose: To investigate the effect of hyperbaric oxygen therapy (HBOT) on traumatic brain injury (TBI) outcome. Methods: The modified Marmarou weight drop device was used to generate a non-lethal, moderate TBI rat model, and an in vitro astrocyte culturing system was further developed. We then analyzed the expression changes of the genes and proteins of interest by quantitative PCR and western blot. Results: Multiple HBO treatments significantly reduced the expression of apoptosis-promoting genes, such as c-fos, c-jun, and Bax, and weakened the activation of Caspase-3 in model rats. Conversely, HBOT alleviated the decrease of the anti-apoptosis gene Bcl-2 and promoted the expression of neurotrophic factors (NTFs), such as NGF, BDNF, GDNF, and NT-3, in vivo. As a consequence, neuropathogenesis was remarkably relieved by HBOT. Astrocytes from TBI brains, or those cultured at 21% O2, expressed higher NTF levels than the corresponding controls (from sham brains and cultured at 7% O2, respectively). NTF expression was highest in astrocytes from TBI brains cultured at 21% O2, suggesting a synergistic effect between TBI and the subsequent HBO treatment in astrocytes. Conclusion: Our findings provide evidence for the clinical use of HBO in treating brain damage.

■ Introduction
Traumatic brain injury (TBI) occurs when sudden acceleration or deceleration happens within the cranium, caused by all kinds of external forces, among which traffic accidents are one of the most common and still rising causes in modern society 1. It is estimated that about 2 million people suffer TBI in the United States annually, accounting for ~30% of injury-related deaths 2,3. Because of its heterogeneous causes, the signs and symptoms of TBI vary dramatically, depending on the part of the brain affected and the severity of the injury. Mild TBI may cause headache, vomiting, nausea, dizziness, difficulty balancing, and so on, while severe TBI may additionally cause convulsions, inability to awaken, slurred speech, aphasia, and behavior change 1,4. According to its pathogenesis, TBI damage can be focal, diffuse, or a combination of both 5. A focal traumatic injury can easily be detected by conventional imaging methods such as CT or MRI, but diffuse traumatic injury (DTI) can only be detected by post-mortem microscopic examination or more advanced diffusion tensor imaging MRI techniques [6][7][8]. According to Granacher Jr 6, DTI includes the following four brain damage types: 1) diffuse axonal injury, which affects the white matter; 2) ischemia, caused by reduced blood supply and a main reason for secondary damage; 3) vascular injury; and 4) edema, which increases the intracranial pressure and can be lethal if not treated properly. Normally, these pathological changes do not occur separately; rather, two or more changes occur simultaneously or act as reciprocal causes. Take brain edema, for example, the most common and most rapidly developing symptom after TBI: combined with vascular injury, it promotes the deterioration of ischemia and further neurodegenerative pathology. Due to the widely spread damage and the complicated pathogenic pathways involved, DTI is most challenging for both treatment and prognosis. Though extensive efforts have been devoted to developing therapies for TBI/DTI, little has been achieved in improving clinical prognosis 9,10. Besides the brain damage caused during the acute period, the neuropathologies may persist and progress into chronic traumatic encephalopathy (CTE), which can be a life-long health threat 11. TBI-associated disabilities affect approximately 5 million people in the US, with a health care cost of over $60 billion [12][13][14].

In the last two decades, hyperbaric oxygen therapy (HBOT) has been introduced to treat multiple injuries and disorders, including traumatic ischemia, damage, cerebral palsy, and TBI, among others 15,16. During HBOT, the patient is administered 100% oxygen at a pressure greater than atmospheric pressure at sea level within a sealed chamber. The effects of high pressure and the increased solubility and diffusion of the gas (O2) are expected to improve oxygenation, vasoconstriction, modulation of inflammation and immune function, and/or promote angiogenesis 15. In normal tissue, hyperoxia induces vascular constriction, but tissue oxygenation remains unchanged because of the increased dissolved oxygen 17. As a consequence of vasoconstriction, tissue edema and exposure to reactive oxygen species (ROS) induced by hyperbaric oxygen can be reduced 18,19. In peripheral tissues such as cartilage and skin, HBOT has been shown to modulate angiogenesis during wound healing 20,21; whether this is also the case in brain TBI/DTI remains largely unknown. Jiang et al. reported that the effect of a single HBO treatment lasts for about 12 hours, and therefore suggested that multiple HBO treatments could be administered to prolong the response period 22. Niklas et al. 23 confirmed that in rabbits with severe TBI, multiple HBOT dramatically reduced edema and necrosis areas and, consequently, the mortality rate. A meta-analysis including 12 randomized trials demonstrated that HBOT was no better than control treatment for mild TBI; however, HBOT did benefit moderate-to-severe TBI in acute treatment. Still, clinical trials of HBOT have been rather limited, as the underlying mechanisms remain unclear. In a hypoxia-ischemia rat model, a single HBO treatment significantly reduced caspase-3 activity and consequently reduced the number of apoptotic cells in both brain cortex and hippocampus 24. Several lines of evidence indicated that TBI induces immediate c-fos and c-jun expression, which is not limited to the damage site but rather diffuse, and whose level is associated with the severity of damage [25][26][27][28]. Some apoptosis genes, such as Bax, were also upregulated following the elevation of c-fos/c-jun, indicating that they may be involved in neural death events. So far, there is no direct evidence demonstrating whether HBO treatment improves TBI/DTI prognosis by modulating c-fos/c-jun expression levels.

In this study, using a modified Marmarou rat DTI model, we confirmed that multiple HBOT reduced damage-induced c-fos/c-jun expression and further reduced cell apoptosis. These effects may be partially attributed to the elevated expression of neurotrophic factors (NTFs) such as NGF, BDNF, NT-3, and GDNF.
An in vitro cell culturing model demonstrated that astrocytes isolated from DTI model rats contributed to the elevated expression of NTFs in response to high O2 concentration, while astrocytes isolated from control rat brains were less responsive. Our results for the first time connect the neuroprotective effect of HBO to NTFs and identify astrocytes as the source of these NTFs.

■ Methods
All experimental procedures were performed according to the Guidelines for Animal Care and Use of Shanghai Sixth People's Hospital East. Adult male SPF Sprague-Dawley rats, 8 weeks of age and weighing 300±30 g, were purchased from Shanghai SLAC Laboratory Animal Co., Ltd. The rats were housed under a 12:12 light-dark cycle with ad libitum feeding.

Modified Marmarou weight drop model
To induce closed-head diffuse traumatic brain injury, the Marmarou weight drop device was adopted with modifications [29][30][31]. First, a metal helmet matching the curve of the rat skull was cast and used to cover the rat head during weight drop, evenly distributing the vertical force over the whole brain. Second, the weight and falling height of the impactor were carefully adjusted to achieve closed-head moderate DTI. These parameters were strictly fixed to avoid biases among animals. Third, the impactor was connected to a conduction rope, which was manually strained immediately after the impactor contacted the metal helmet, to avoid a second strike. Animals were initially anesthetized with 2% sodium pentobarbital at 45 mg/kg body weight, intraperitoneally. The skull was exposed by a midline incision of the dorsal surface and covered with the sterilized metal helmet. Injury was then induced by the gravity-driven fall of the impactor. In sham group rats, anesthesia was given and the skull was surgically exposed, but no impact injury was applied.

HBO treatment
After injury, the animals were immediately administered HBO treatment with 100% oxygen at a pressure of 3 atmospheres absolute (ATA) for 1 h, and then returned to normal housing conditions. Multiple HBOT was conducted at 12-hour intervals over the following 3 days, for a total of six treatments. Control group rats were put in the HBO chamber for 1 h but administered only normal atmospheric air.

Assessment of brain edema
The wet-weight/dry-weight method was used to assess brain edema. In brief, at the desired time points after injury, the whole brains, including bilateral cerebrum, diencephalon, mesencephalon, cerebellum, and brainstem, of 3 animals in each group were dissected out. The fresh tissues were immediately weighed to obtain the wet weight, then placed at 100°C for 24 hours, after which the dry weight was determined. The percentage of water was calculated with the following formula: % H2O = (wet weight - dry weight)/wet weight x 100%.

HE staining
Paraffin embedding and HE staining were performed as described previously. In short, the animals were perfused with PBS under deep anesthesia, followed by 4% PFA in PBS for pre-fixation. The brains were removed and placed in the same fixative solution at 4°C overnight. Paraffin-embedded brains were sectioned at 5 µm with a Leica semiautomatic microtome, transferred onto plastic slides, and processed for HE staining. Stained slides were mounted with neutral balsam and cover slips, and images were acquired with an Olympus BX51 microscope equipped with Cellsens software.
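Returning to the brain edema assessment above, the wet-weight/dry-weight formula translates directly into code; this small sketch is not from the paper, and the example weights are hypothetical.

```python
def percent_water(wet_mg: float, dry_mg: float) -> float:
    """% H2O = (wet weight - dry weight) / wet weight x 100%."""
    if wet_mg <= 0 or dry_mg > wet_mg:
        raise ValueError("wet weight must be positive and >= dry weight")
    return (wet_mg - dry_mg) / wet_mg * 100.0

# Hypothetical example: a brain weighing 1800 mg wet and 320 mg after
# 24 h at 100 C contains ~82% water, comparable to the peak edema value
# reported 24 h after injury.
print(f"{percent_water(1800, 320):.1f}% water")
```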
Quantitative PCR (qPCR)
The animals were sacrificed at the desired time points and the cerebrums were isolated, separated into left and right hemispheres along the midline, flash frozen in liquid nitrogen (LN2), and kept at -80°C until use. The left cerebral hemispheres were ground in a mortar pre-cooled with LN2. The samples were then lysed with 1 ml Trizol and total RNA was extracted. 1 µg of each total RNA sample was reverse-transcribed to cDNA using the QuantiTect Reverse Transcription Kit (Qiagen). Real-time PCR was performed with SYBR Green PCR Master Mix (Applied Biosystems) according to the manufacturer's instructions. All measurements were performed in triplicate, and Gapdh mRNA was used to normalize the relative expression levels of target genes.

Western blot
The right cerebral hemispheres were ground in a mortar pre-cooled with LN2 and lysed in RIPA buffer (25 mM Tris-HCl pH 7.5, 150 mM NaCl, 0.1% SDS, 0.5% sodium deoxycholate, 1% Triton X-100) supplemented with protease inhibitor cocktail. Equal amounts of protein were separated on 10% SDS-PAGE gels and transferred onto nitrocellulose membranes. After blocking with 5% skimmed milk in TBS buffer (50 mM Tris-HCl, 150 mM NaCl), membranes were incubated with primary antibodies diluted in blocking buffer at 4°C overnight. The membranes were washed with TBST buffer (TBS + 0.1% Tween-20) three times for 5 min at room temperature and incubated with the corresponding HRP-conjugated secondary antibodies (1:5000; Cell Signaling Technology). The blots were developed using Pierce ECL Western Blotting Substrate Plus, and band density was measured with ImageJ software.

Primary astrocyte culturing
The brains of sham rats and Marmarou weight drop model rats were aseptically dissected, and the meninges were removed. Primary astrocytes were isolated and cultured as described, with minor modifications 32. In brief, the cerebrum was chopped into 1 mm³ pieces and digested with 0.05% trypsin and 0.003% DNase at 37°C for 15 min. The tissue was triturated with a fire-polished Pasteur pipette, collected, and digested with 40 U/ml papain, 0.02% cysteine, and 0.003% DNase at 37°C for 15 min, then triturated again and filtered through a 40 µm cell strainer. The single cells were collected, re-suspended in DMEM/F12 supplemented with 10% FBS, and seeded in poly-L-lysine pre-coated dishes at a density of 5x10^5 cells/cm². To mimic low and high oxygen conditions, the cells were kept in 7% O2, 5% CO2, 88% N2 or 21% O2, 5% CO2, 74% N2 incubators, respectively. After three days of incubation, the cells were harvested and mRNA expression was measured by qPCR.

Statistical analyses
Multiple groups were compared using ANOVA (one-way or two-way). Unpaired t tests were used for two-group comparisons. The tests were two-tailed and considered significant when p<0.05. All data are presented as mean±SEM.

■ Results
Modified Marmarou weight drop model causes moderate but not lethal TBI
Marmarou's weight drop device has been widely used to induce diffuse axonal injury models of TBI in both rats and mice since it was first developed in the 1990s 30. We made minor modifications, as mentioned above, to increase reproducibility and comparability among animals. With these modifications and carefully adjusted parameters, we obtained TBI model rats with moderate diffuse damage.
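Regarding the qPCR normalization to Gapdh described in the Methods above: the paper does not spell out the formula, but this setup is commonly evaluated with the 2^-ddCt method. The sketch below is therefore an assumption, and the triplicate Ct values are hypothetical.

```python
import numpy as np

def rel_expression(ct_target, ct_gapdh, ct_target_sham, ct_gapdh_sham):
    """Fold change by the 2^-ddCt method: the target gene is normalized
    to Gapdh within each sample, then the injured sample is expressed
    relative to its sham control."""
    dd_ct = (ct_target - ct_gapdh) - (ct_target_sham - ct_gapdh_sham)
    return 2.0 ** (-dd_ct)

# Hypothetical triplicate Ct values for c-fos, 24 h after TBI vs sham
fold = rel_expression(
    ct_target=np.mean([22.1, 22.3, 22.0]),
    ct_gapdh=np.mean([17.5, 17.6, 17.4]),
    ct_target_sham=np.mean([25.0, 24.8, 25.1]),
    ct_gapdh_sham=np.mean([17.4, 17.5, 17.6]),
)
print(f"c-fos fold change vs sham: {fold:.1f}x")  # ~7x upregulation
```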
We first assessed brain edema by measuring the percentage of water content, which was about 78% in sham-treated brains, with almost no fluctuation over time. However, weight drop impact dramatically increased brain water content, observable as early as 3 hours and peaking at 24 hours after injury, indicating severe brain edema and increased intracranial pressure (Figure 1A). The increase in water content was relieved at 3 and 7 days post injury but was still significantly higher than in the control group, indicating absorption of the edema while pathogenic risk persisted if not treated properly. HE staining and histological examination demonstrated reduced neuronal density in the cerebral cortex at all checked time points (Figure 1B; 3 h, 6 h, 1 d, 3 d, 7 d after injury) compared with sham control. Shrunken neurons with surrounding vacuolation were observed, and their number increased with time, indicating the progression of irreversible neural death. These results were consistent with previous reports 29,30,33.

TBI elevates apoptosis protein expression and increases c-fos/c-jun mRNA
It was reported that TBI induces the activity of Caspase-3, a 32 kD zymogen that can be activated by both extrinsic and intrinsic apoptosis pathways 34. We confirmed that the cleaved 17 kD active subunit was dramatically increased 24 hours after TBI and sustained to day 7, indicating active apoptosis events (Figure 2A-B). Further, we detected the expression levels of Bcl-2 and Bax, two mutually antagonistic factors of the mitochondrial apoptosis pathway. We found a dramatically decreased expression of Bcl-2, an anti-apoptosis protein, most apparent at 24 hours after TBI. On the contrary, the expression of Bax, a pro-apoptosis protein that induces mitochondrial outer membrane permeabilization, was markedly increased, as shown in Figure 2A. There was no activation of Caspase-3 or Bcl-2/Bax imbalance among the sham-treated rats (Figure 2A). These results suggest that TBI induces neural apoptosis via the intrinsic mitochondrial pathway.

c-fos and c-jun are two immediate-early genes encoding proteins that form the heterodimeric complex AP-1 (Activator Protein-1). Several lines of evidence have demonstrated that c-fos/c-jun play important roles in apoptosis and that the expression of their mRNA can serve as an early indicator of apoptotic events. As expected, compared with sham control, the expression of c-fos/c-jun mRNA rapidly increased after TBI, reached a maximum at 24 hours, and dropped back to baseline at days 3-7 (Figure 2B).
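The western blot comparisons above rest on densitometry; the paper names ImageJ but not the normalization scheme, so the following sketch, with hypothetical band densities and GAPDH assumed as loading control, is only illustrative.

```python
# Hypothetical ImageJ band densities (arbitrary units); the choice of
# GAPDH as loading control is an assumption for illustration only.
densities = {
    "cleaved_casp3": {"sham": 1200.0, "tbi_24h": 5400.0},
    "gapdh":         {"sham": 9800.0, "tbi_24h": 9600.0},
}

def fold_change(target, loading, condition, control="sham"):
    """Loading-control-normalized band density, relative to control."""
    norm = lambda cond: densities[target][cond] / densities[loading][cond]
    return norm(condition) / norm(control)

print(f"cleaved Caspase-3, TBI 24 h vs sham: "
      f"{fold_change('cleaved_casp3', 'gapdh', 'tbi_24h'):.1f}x")
```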
Multiple HBOT attenuates brain edema and neural pathogenesis induced by TBI
To determine whether hyperbaric oxygen benefits TBI-induced brain damage, the model rats were administered multiple HBOT, immediately after injury and then at 12-hour intervals over the following 3 days. Brain samples were collected at 3 h, 6 h, 1 d, 3 d, and 7 d post injury, corresponding to 1 (3 h, 6 h), 2 (1 d), and 6 (3 d, 7 d) HBO treatments. As shown in Figure 3A, HBOT significantly reduced brain water content compared with no-treatment controls, most apparently at 1 d after injury. HE staining of 1 d samples demonstrated that HBOT remarkably alleviated the pathological progression, as fewer shrunken neurons and less perineuronal vacuolation were observed (Figure 3B). Consistent with previous reports, these results confirm that HBOT can effectively prevent the deterioration of DTI pathogenesis.

HBOT prevents neural death by inhibiting apoptosis pathways and activating neurotrophic factor expression
We further examined apoptosis protein expression in HBO-treated brain samples. The expression tendencies were similar to those of TBI brains without HBOT (Figure 2), but several subtle changes were noticed: first, a stronger Bcl-2 band was detected in the HBOT group at time point 1 d, which was barely visible in the corresponding TBI sample; second, the increase in Bax expression was less apparent in the HBOT group; third, the cleaved Caspase-3 band was more markedly reduced in the HBOT group at time point 7 d (Figure 4A). These results suggest that HBOT effectively blocked neural apoptosis pathways. We then measured c-fos and c-jun mRNA levels; as expected, HBOT significantly reduced their expression, most evidently at 1 d post injury, suggesting an early effective window for HBO treatment (Figure 4B-C).

We speculated that neurotrophic factors (NTFs), known for promoting neuron survival, might act as mediators of the neuroprotective/anti-apoptotic effect of HBOT. By qPCR, we determined the expression of the 4 most abundant NTFs: brain-derived neurotrophic factor (BDNF), glial cell line-derived neurotrophic factor (GDNF), nerve growth factor (NGF), and neurotrophin-3 (NT-3) (Figure 4D-G). As shown in Figure 4F-G, all 4 NTFs were elevated after TBI, and HBOT increased their expression further compared with no-treatment controls, most significantly at time point 1 d and continuing to day 7.

Astrocytes are the source of NTFs triggered by the TBI-high O2 combination
It is well known that astrocytes support neuronal cells under both physiological and pathological conditions by secreting NTFs. We asked whether astrocytes contribute to the increased expression of NTFs and the neuroprotective effect of HBOT after TBI. To answer this question, we isolated astrocytes from both TBI and sham-treated rat brains and cultured them under both 7% and 21% O2 conditions for 3 days before the expression of NTFs was determined. As shown in Figure 5, high O2 (21%) promoted the expression of all the checked NTFs in both sham-treated and TBI astrocytes, but most significantly in the latter. Of note, TBI alone was sufficient to activate astrocyte NTF expression, albeit less significantly. These results suggest that astrocytes are activated upon TBI damage to exert their neural support/protection function, and that hyperoxia further enhances this activity.
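The TBI-high O2 synergy described above is the kind of effect the two-way ANOVA named in the statistics section is designed to detect, with the interaction term carrying the "synergy". The sketch below is not from the paper: the group means, noise level, and sample size are invented to mirror the qualitative pattern reported.

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)

# Hypothetical relative NTF expression for the four astrocyte groups;
# TBI + 21% O2 is set highest, i.e., a positive interaction.
means = {("sham", "7"): 1.0, ("sham", "21"): 1.8,
         ("TBI", "7"): 1.6, ("TBI", "21"): 3.5}
rows = [{"injury": inj, "oxygen": oxy, "expr": val}
        for (inj, oxy), mu in means.items()
        for val in rng.normal(mu, 0.3, size=6)]   # n = 6 cultures/group
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of injury and oxygen plus their
# interaction; a significant interaction term indicates synergy.
model = ols("expr ~ C(injury) * C(oxygen)", data=df).fit()
print(anova_lm(model, typ=2))
```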
■ Discussion
HBO has long been proposed as an adjunct or enhancement therapy for TBI, but clinical trials have given controversial results and no conclusions have been reached so far. In this study, using the Marmarou rat TBI/DTI model, we revealed that HBOT attenuated the neural apoptosis process by reducing the expression of the two immediate-early genes c-fos/c-jun and re-balancing the Bcl-2/Bax ratio; as a consequence, the activation of the apoptosis executioner protein Caspase-3 was suppressed. This effect was mediated, at least partially, by the increased expression of NTFs, which we attribute to the activation of astrocytes.

There are multiple closed-head impact models designed to replicate the pathobiology of human concussive and diffuse traumatic injury 35, among which Marmarou's weight drop model is the most widely used because the device is easy to set up. By modifying the drop height of the weight and constraining secondary injury, we achieved a moderate TBI model with almost zero skull fracture and mortality, which were reported to be 12.5% and 44%, respectively, in the original study 30,33. It was reported that during the first 4 hours after injury, reduced cerebral blood flow (CBF) and elevated intracerebral pressure can be observed as consequences of the loss of cerebral autoregulation 36. In that study, rats were head-injured by weight drop from a one-meter height using 350 g, 400 g, and 450 g weights, respectively; CBF was monitored using laser-Doppler flowmetry along with monitoring of ICP and arterial blood pressure, and loss of autoregulation was hypothesized if the correlation coefficient between CBF and CPP was >0.85 while CPP was within the normal range. Loss of autoregulation was seen in all groups of injured rats during the first four hours, with a statistically significant difference (p = 0.041). Based on this, it is reasonable to deduce that reduced CBF could further worsen the neuropathy caused by the primary impact. As comprehensive consequences, weight drop impact induced a change in the apoptotic protein Bcl-2/Bax ratio, activated caspase-3, and released cytochrome c from mitochondria into the cytosol 37; the former two phenomena were confirmed in this study. It was reported that, in an ischemic wound model, HBOT decreased inflammation and apoptosis by up-regulating Bcl-2 expression while inhibiting Caspase-3 activity 38. We found this to also be the case for the HBO-treated rat TBI model. In mouse Neuro2A cells, a neuroblastoma cell line, deprivation of oxygen and glucose increased c-fos expression at both the mRNA and protein levels, and, more interestingly, mitochondria recruited excess c-fos protein during the process of cell apoptosis 39. In mouse retina photoreceptor cells, knockout of c-fos rendered the cells resistant to apoptosis signaling. We found that TBI rapidly stimulated the expression of c-fos and its partner gene c-jun. Though the exact function of c-fos in neural death remains unclear, we speculate that it might play a role by forming the transcriptional activator complex AP-1 together with c-jun and regulating the expression of certain apoptosis-associated genes. c-jun is a 39 kD protein that can be phosphorylated on multiple serine and threonine sites by JNK. c-jun was reported to play anti-apoptotic roles by inhibiting p21 and p53; whether it also functions as an apoptosis inhibitor during TBI, and how it is balanced against c-fos function, is still unknown. Further studies are needed to determine whether Bcl-2 or Bax is a target of AP-1 transcriptional activation. However, we observed a clear positive correlation among c-fos/c-jun expression, apoptosis effector
protein (Bcl-2/Bax/Caspase-3) modulation, and neural death, suggesting that these are not isolated events but rather may share an as-yet-unrevealed inherent relationship. HBO treatment also attenuated c-fos/c-jun up-regulation, in a trend similar to that of the apoptosis effector proteins, which further supports such a possibility.

In the healthy brain, astrocytes are widely spread and function to preserve the environment for neural circuit function, for example by maintaining the homeostasis of ions, transmitters, water, and nutrients. It is well documented that various kinds of damage and disease can activate astrocytes, a process termed reactive astrogliosis. In the present study, we found that TBI stimulated the expression of NTFs, which might act as paracrine factors prompting neurons to exit apoptosis pathways and survive the damage. We attributed the surge of NTFs to the activation of astrocytes, and confirmed the results in cultured cells, but it remains unclear how mechanical forces (weight drop impact) are translated into cell signals that astrocytes can recognize and properly respond to. It was reported that astrocytes express mechanotransducing ion channels and stretch-sensitive cation channels on the membrane surface, which contribute to the rapid influx of extracellular calcium and sodium upon membrane deformation caused by TBI [40][41][42][43]. The intracellular network formed by glial fibrillary acidic protein (GFAP), the astrocyte-specific intermediate filament, may also participate in the transduction of mechanical stretch, as it was reported to be up-regulated by trauma 44. How the above cell signals are translated into the up-regulation of NTF expression is rarely discussed, and further work is needed. Interestingly, high oxygen concentration further elevated the increase in NTF expression, indicating that astrocytes also respond to O2 stimuli in a positive way; this is strong support for the clinical use of HBOT. The difficulty of developing therapeutic strategies for TBI largely lies in the fact that multiple factors are tightly entangled and affect each other as the neuropathy progresses. Furthermore, in the case of diffuse traumatic brain injury, there is no specific location for targeted treatments.

■ Conclusions
The robust population of astrocytes in the brain can be activated by traumatic brain injury and the following hyperbaric oxygen treatment. Optimized HBOT parameters and treatment schemes may better stimulate the protective effect of astrocytes and extend its medical application.

Figure 1 - Modified Marmarou weight drop model successively causes brain edema and neuropathogenesis. A. Brain edema was evaluated by the percentage of water in total brain tissue. In sham-treated brains, the water content is about 78±0.2%. Weight drop impact significantly increases water content, which peaks at 24 hours after injury (82.4±0.4%) and decreases but remains higher than sham control at 3 and 7 days after injury. *p<0.05, **p<0.01. B. HE staining demonstrates the neuropathogenesis after weight drop impact. Neuron number is decreased, and shrunken neurons and peri-neural vacuolation are observed, aggravating with time.

Figure 2 - TBI induces the activation of the mitochondrial apoptosis pathway. A. Western blot shows the changes in key apoptosis-associated proteins. Compared with sham control, TBI reduces the expression of Bcl-2 but promotes the expression of Bax and the activation of Caspase-3. These effects are most apparent at 24 hours after injury. B.
qPCR demonstrates the elevated mRNA expression of c-fos and c-jun, which is most significant at 24 hours after injury. *p<0.05, **p<0.01, ***p<0.005.

Figure 3 - HBO treatment reduces brain edema and attenuates neuropathogenesis. A. Compared with control treatment, HBOT significantly reduced brain edema, as measured by brain water content, at all checked time points. *p<0.05, **p<0.01. B. HBOT attenuates neural death, as fewer shrunken neurons and less peri-neural vacuolation are observed compared with control.

Figure 4 - HBO attenuates the changes in apoptosis genes and promotes the expression of NTFs in vivo. A. Compared with control treatment, HBOT attenuates the reduction of Bcl-2 and reduces the increment of Bax and the activation of Caspase-3. B-C. HBOT reduces the mRNA expression increment of c-fos and c-jun throughout the experimental scheme, most significantly at 24 hours after injury. *p<0.05, **p<0.01, ***p<0.005. D-G. The expression of all the checked NTFs (BDNF, GDNF, NGF, NT-3) is elevated after HBOT, starting at 6 hours after injury and sustained to day 7. *p<0.05, **p<0.01.

Figure 5 - High oxygen concentration promotes NTF expression in cultured astrocytes. After 3 days of in vitro culture, the expression of NTFs in astrocytes is elevated in the TBI and high-O2 groups compared with the sham control or low-O2 groups.
Cannabidiol on the Path from the Lab to the Cancer Patient: Opportunities and Challenges

Cannabidiol (CBD), a major non-psychotropic component of cannabis, is receiving growing attention as a potential anticancer agent. CBD suppresses the development of cancer in both in vitro (cancer cell culture) and in vivo (xenografts in immunodeficient mice) models. For a critical evaluation of the advances of CBD on its path from laboratory research to practical application, in this review we wish to call the attention of scientists and clinicians to the following issues: (a) the biological effects of CBD in cancer and healthy cells; (b) the anticancer effects of CBD in animal models and clinical case reports; (c) CBD's interaction with conventional anticancer drugs; (d) CBD's potential in palliative care for cancer patients; (e) CBD's tolerability and reported side effects; (f) CBD delivery for anticancer treatment.

Introduction
Cannabidiol (CBD) is the most abundant natural cannabinoid found in cannabis plants. The advantage of CBD is the apparent lack of any intoxicating effect. CBD has been proposed for the treatment of pain, insomnia, several psychological conditions, graft-versus-host disease, inflammatory diseases, and cancer [1][2][3][4][5][6][7]. Its wide spectrum of biological effects seems to be related to the numerous molecular targets of CBD, which include various G-protein-coupled receptors, ion channels and ionotropic receptors, transporter proteins, nuclear receptors, and numerous enzymes involved in lipid, xenobiotic/drug, and mitochondrial metabolism [7,8]. The anticancer properties of CBD are mostly reported in studies in vitro, and to a lesser extent in vivo, whereas clinical studies including cancer patients are still scarce. The goal of the present review is a critical assessment of CBD's potential for anticancer therapy, recent advances, and challenges.

CBD Shows Anticancer Properties in Pre-Clinical Studies In Vitro and In Vivo
In the last several years, there has been growing interest in the use of cannabinoids in the treatment of various types of cancer. Two of them, CBD and Δ-9-tetrahydrocannabinol (THC), have demonstrated pronounced anticancer activity in pre-clinical in vitro and in vivo trials. Because the use of THC in chemotherapy is limited by its psychotropic effects, special attention is paid to the non-psychoactive CBD, which has also demonstrated a greater antitumor effect than THC [9][10][11]. A recent comprehensive review summarizes the biological effects of CBD in different tumor types and is highly recommendable for interested readers [6]. The biological effects of CBD have been tested in a broad range of tumor cells in vitro and in vivo (Table A1 in Appendix A), including glioma/glioblastoma [9,[11][12][13][14][15][16][17][18][19], breast cancer [20-24], leukemia [25-35], colorectal cancer [36-38], lung cancer [39][40][41], cervical cancer [25,39,42], neuroblastoma [43,44], medulloblastoma [45], ependymoma [45], pancreatic cancer [46,47], ovarian cancer [28], endometrial cancer [48], bladder urothelial carcinoma [49], and head and neck squamous cell carcinoma [50]. Anticancer drug candidates are initially screened on cell lines to reveal their biological effects and underlying molecular mechanisms. The antitumoral activity of CBD has been tested in vitro in a wide range of concentrations, from 0.01 to 100 µM (Table A1).
However, variation in culture conditions (passage number, medium composition, presence of serum, and cellular confluence) and in the mode of CBD administration (single or repetitive daily administration) hinders direct data comparison and assessment of the relative sensitivity of cell lines to CBD. Lymphoblastic leukemia, particularly of T lineage, presents higher sensitivity to CBD when compared to myeloid leukemia and breast and cervical cancer, as was demonstrated in comparative viability assays carried out under the same experimental conditions [25]. To understand the antitumor action of CBD, it is necessary to consider the sequence of events induced in target cells and their interrelation. Table A1 summarizes cellular targets and processes in different cancer models affected by CBD on a time scale from minutes to days, whereas a synthetic timeline for CBD's biological effects related to its antitumor activity is given in Figure 1.

There are only a few studies that monitored the earliest/instantaneous responses to CBD. A rapid rise in the cytosolic free calcium (Ca2+) level ([Ca2+]i), which occurs within the first 3-5 min after CBD administration, was observed in breast cancer [9] and leukemic T cells [25]. In the latter work, concurrent measurements of mitochondrial Ca2+ ([Ca2+]m) and [Ca2+]i revealed that the [Ca2+]i rise was preceded by a [Ca2+]m transient, indicating the early involvement of mitochondria in the process [25]. Accordingly, rapid dissipation of the mitochondrial membrane potential (∆Ψm) and cytochrome C (Cyt C) release from mitochondria to the cytosol were observed in this model during the first 10-20 min [25]. Under physiological conditions, mitochondria are major contributors to reactive oxygen species (ROS) production. They also possess an efficient antioxidant enzyme system for rapid ROS scavenging, to prevent cell damage. Accordingly, mitochondrial disturbances are related to the increased production of ROS and oxidative stress [51].
Augmented ROS levels were reported from the first hour of CBD administration in different models, such as murine thymoma [33], human breast cancer [9], and T cell leukemia [25], and could also be detected at longer (24-96 h) times of observation (Table A1). Increased ROS production seems to be an important mediator of CBD cytotoxicity. The ROS scavengers α-tocopherol (αTOC) and N-acetylcysteine (NAC) counteracted the antiproliferative effects of CBD in human glioblastoma [11,12], breast cancer [9,21], T cell leukemia [31], and mouse medulloblastoma [45]. Accordingly, αTOC rescued cancer cells from apoptosis [15,18,22,31].

Among the early events, developing within minutes after CBD administration, decreased levels of active (phosphorylated) AKT were reported [44], which in turn can be related to the inhibition of cellular metabolism and proliferation. Inhibition of the AKT/mTOR pathway and upregulation of MAPK signaling pathways were confirmed in many models (Table A1, Figure 1). Since decreased p-AKT levels are a key signal for the activation of autophagy [52] and are related to the upregulation of MAPK p38 [18], p-AKT downregulation is maintained over time [16,18,19,22,32] and correlates with the rise in p-p38 [18,19,39], the decrease in p-mTOR [32], the upregulation of key autophagic genes [16], and the induction of autophagy [25,44]. Autophagy, depending on its scale, moderate or large, may act as a protective mechanism or can eventually lead to cell death [52]. In many cell types, CBD predominantly evoked apoptosis, as evidenced by an increase in the expression/function of pro-apoptotic initiators (Bad, tBid) and pore-forming proteins (BAX), a decrease in anti-apoptotic Bcl-2 [16,22], Cyt C release [14,22,25,31,34], and the activation of caspases [9,14,22,25,31,35,37]. Of note, in many studies the type of cell death was not specified because only metabolic assays were performed. Thus, apoptosis is likely not the unique process induced by CBD but can be paralleled and/or affected by concurrent processes such as autophagy and metabolic inhibition. For example, although apoptosis was triggered first by CBD in leukemic cells, severe mitochondrial damage and oxidative stress caused a switch to mitochondrial permeability transition pore (mPTP)-driven necrosis [25].

There is plenty of evidence that CBD can strike multiple cellular targets. CBD possesses low affinity for the classical cannabinoid receptors CB1 and CB2 but can efficiently antagonize their agonists; it also acts as a CB2 inverse agonist. Meanwhile, many of the CBD-mediated cellular effects are independent of the endocannabinoid system receptors. CBD acts as an antagonist of the G-protein-coupled receptor GPR55 and as an agonist of the serotonin receptor 5-HT1A and the transient receptor potential vanilloid receptors/channels TRPV1 and TRPV2 [2,6,7,53]. In addition, CBD is a small and lipophilic molecule; thus, it easily permeates the plasma membrane and is able to reach intracellular targets as well. Accordingly, the mitochondrial outer membrane voltage-dependent anion channel (VDAC) was reported as a highly relevant CBD target [25,54]. Therefore, CBD should be considered a multitarget agent, capable of triggering various scenarios depending on the cellular and microenvironmental context, which includes the characteristic pattern of CBD-binding receptors, the cellular metabolic state, CBD concentration, and bioavailability. The involvement of CB1/CB2 receptors was addressed in several cancer models (Table A1).
Specific antagonists of CB1 receptors or CB1 receptor knockdown abolished the antiproliferative effects of CBD in a colorectal cancer cell line [36]. Both CB1 and CB2 receptors were shown to be involved in the development of different processes induced by CBD, including autophagy in human neuroblastoma [44], decreased proliferation and viability in breast cancer [9], apoptosis in glioblastoma [18] and colon carcinoma [27], and reversed invasiveness of human cervical and lung cancer cell lines [39,40]. The involvement of CB2 but not CB1 in CBD-triggered effects was demonstrated in several models: the inhibition of proliferation and viability in murine thymoma, human leukemic cells [31], and human glioblastoma [12], and PARP cleavage in prostate carcinoma [27]. On the other hand, U87 and U373 human glioma cell lines [12,13] and glioma stem-like cells [16] express CB1 and CB2, yet the antiproliferative effect of CBD was insensitive to the respective antagonists SR141716 and SR144528. CBD decreased the cell viability of D425 and D283 medulloblastoma and IC-1425EPN and DKFZ-EP1NS ependymoma cell lines independently of CB1, even though human medulloblastomas and ependymomas express this receptor [45]. CBD (5 µM) decreased the survival of the MDA-MB-231 breast cancer cell line in a CB1/CB2-independent manner [22], although the antiproliferative effect was partially CB2-dependent when CBD was added at a higher concentration (10 µM) [9]. The antitumor effects of CBD were shown to depend on TRPV1 in human neuroblastoma [44], cervical, lung, and breast cancer [39,40], and colon adenocarcinoma [36], but not in glioblastoma [12,13], human leukemia, or murine thymoma [31]. The CBD-dependent decrease in the viability of glioma stem-like cells depended on both TRPV1 and TRPV2 [16]. High expression of TRPV2 in drug-resistant cancers, such as triple-negative breast or advanced non-small cell lung cancers, correlates with better prognosis, and the activation of TRPV2 by CBD assists drug (doxorubicin)-induced apoptosis in breast cancer cells or provokes apoptosis by CBD itself in lung cancer cells [55,56].

Reported results should be interpreted with caution, due to differences in culture conditions, CBD concentrations, or the mode of CBD application. For example, viability and proliferation of the MDA-MB-231 cell line (breast cancer) were reported to be independent of TRPV1 when cells were cultured in serum-free conditions [22]. A contradictory result was obtained in another study using the same cellular model, in which CBD was added daily at a similar concentration and cells were cultured with serum [9]. It should also be noted that, in most works on the dependence of CBD-triggered processes on plasma membrane receptors, only late (12-48 h) events were studied (Table A1). For instance, CBD (5 µM, serum-free medium) produced multiple cytotoxic effects in Jurkat cells at 24 h, which were CB2-dependent [31]. Contrary to this, applied to the same cell model, CBD (30 µM, with serum) provoked almost instantaneous [Ca2+]i and [Ca2+]m rises and induced cell death, which were independent of CB1/CB2 and GPR55 but dependent on the direct modulation of the mitochondrial VDAC, [Ca2+]m overload, and mitochondrial damage [25]. Although CBD-dependent ROS production has been confirmed in numerous cancer models, and CBD cytotoxicity is suggested to be related to oxidative stress, the question of whether ROS production depends on any kind of plasma membrane receptor has not yet been addressed.
Peroxisome proliferator-activated receptor γ (PPARγ) appears to be an additional intracellular target for CBD. The antiproliferative effects of CBD (10 µM) in colon cancer cells were counteracted by the PPARγ antagonist GW9662 [36].

An important issue is the selectivity of the drug for cancer vs. healthy tissues. Several pre-clinical studies demonstrated that concentrations of CBD that were cytotoxic in cancer cell lines did not significantly decrease the viability of healthy cells, such as primary glial cultures (up to 50 µM CBD) [14], a human oral keratinocyte cell line (up to 15 µM for 24 h) [50], human keratinocytes, rat preadipocytes and mouse monocyte-macrophage cell lines (10 µM for 72 h) [9], murine bone marrow stromal cells, and resting but not activated human CD4+ lymphocytes (30 µM for 24 h) [25]. The MCF-10A mammary epithelial cell line was more resistant to CBD than the MDA-MB-231 breast cancer cell line (up to 10 µM for 24 h) [22]. In some cases, apoptosis mediated by CBD (16 µM) occurred earlier in cancer cells (EL-4 murine thymoma, 1 h) than in healthy tissue (thymocytes, 6 h) [33]. Although, in general, healthy tissue cells seem to be less sensitive to CBD cytotoxicity, some of their functional properties may be affected. Murine splenocytes stimulated for cytokine production showed lower production of IL-2, IL-4, and IFN-γ after pretreatment with CBD [57][58][59]. On the other hand, the functionality could be restored later. For example, human resting CD4+ cells pretreated with CBD (30 µM, 24 h) completely restored their ability to be activated after 72 h in CBD-free conditions [25].

Synergism: CBD Improves the Effect of Conventional Anticancer Therapy
Synergistic effects of cannabinoids with other compounds were observed and discussed in early studies during the late 1990s, suggesting that CBD and other molecules, such as terpenoids from Cannabis sativa, can boost the activity of other compounds such as THC. This was termed the entourage effect; however, the evaluation of the synergistic effect of CBD with other drugs was mainly restricted to research on neurological diseases [60,61]. Eventually, the synergistic potential of CBD in other pathologies, including cancer, gained interest and became a subject of ongoing research. Combined chemotherapy is the main therapeutic anticancer strategy that potentially reduces drug resistance. Accordingly, the search for the best drug combination is paramount. In this regard, synergism of CBD with several cytotoxic agents, including THC and conventional chemotherapeuticals such as gemcitabine, cytarabine (ARA-C), cyclophosphamide (CPA), cisplatin (CIS), doxorubicin (DOX), paclitaxel, temozolomide, carmustine, vincristine (VIN), carfilzomib, and erastin, as well as irradiation, has been observed (Table 1) [10,11,15,45,[62][63][64][65][66][67][68][69][70]. The synergistic effect was manifested either as an increase in cytotoxicity in vitro or as a decrease in tumor size in xenograft models. In multiple studies, the synergistic effect was quantitatively analyzed by evaluating the so-called Combination Index (CI), which has to be <1 in the case of synergy for a two-drug combination [71]. For combinations of CBD with different anticancer drugs, CIs ranging from 0.22 to 0.9 were reported (Table 1). In several cases, the combined effect of CBD with anticancer agents was non-trivial.
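For reference, a Combination Index of the kind cited here is commonly computed following the Chou-Talalay method as CI = d1/D1 + d2/D2, where d1 and d2 are the doses of each drug used together to reach a given effect and D1 and D2 are the doses of each drug alone reaching the same effect. Whether reference [71] uses exactly this form is an assumption; the sketch below, with hypothetical doses, is only illustrative.

```python
def combination_index(d1, D1, d2, D2):
    """Chou-Talalay Combination Index for a two-drug combination:
    d1, d2 -- doses of each drug, used together, reaching a given effect;
    D1, D2 -- doses of each drug alone reaching the same effect.
    CI < 1 indicates synergy, CI = 1 additivity, CI > 1 antagonism."""
    return d1 / D1 + d2 / D2

# Hypothetical example: alone, 6 uM CBD or 2 uM of a chemotherapeutic
# are needed for 50% growth inhibition; in combination, 1.5 uM CBD plus
# 0.6 uM of the drug suffice.
ci = combination_index(d1=1.5, D1=6.0, d2=0.6, D2=2.0)
print(f"CI = {ci:.2f} -> {'synergy' if ci < 1 else 'no synergy'}")
```

With these invented doses the CI comes out at 0.55, which falls inside the 0.22-0.9 range reported for CBD-drug combinations in Table 1.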
In the study by Deng and colleagues [66], CBD itself exhibited pronounced cytotoxicity against several glioblastoma cell lines (with IC50 = 3.2 µM). However, synergism was demonstrated only when low CBD concentrations were combined with DNA-damaging agents, but not with most other drugs, where the effect of the drug combinations was only additive or even antagonistic [66]. Another study demonstrated that CBD synergistically enhanced the cytotoxic effects of CPA in different MDB cell lines, but only at high concentrations, whereas low CBD concentrations (<5 µM) antagonistically interfered with CPA activity [45]. Tamoxifen (TAM) was shown to interact synergistically with CBD in the suppression of T-ALL cells, and this synergism was higher when cells were pretreated with TAM, or when both drugs were added simultaneously, than when TAM was added after CBD. This was explained by the fact that TAM pretreatment prevented mitochondrial permeability transition pore formation by binding to cyclophilin D, so that subsequent CBD application resulted in a permanent mitochondrial Ca2+ overload and more severe mitochondrial dysfunction [72]. The outcome of CBD interactions depends on the cancer type/phenotype and microenvironmental conditions. For example, non-identical interactions of CBD with other cytotoxic drugs (e.g., CPA, THC) were observed in two medulloblastoma cell lines: synergism in D283 and antagonism in PER547. Remarkably, the synergism observed in D283 in vitro was not confirmed in the xenograft environment [45]. Similarly, CBD acted synergistically with ARA-C in myeloid leukemia cells but not in acute lymphoblastic leukemia [10]. Another issue is the nature of the companion anticancer agent. In myeloid leukemia, CBD exhibited synergistic effects with ARA-C but antagonism with VIN [10]. Thus, the possible outcome of CBD interaction with any conventional chemotherapeutic agent should be carefully examined in context, including the experimental model, the cancer phenotype, the nature and concentration of the drug, and individual patients' particularities.

CBD in Palliative Care
Standard anticancer treatments such as chemotherapy, radiotherapy, hormone therapy, and nutritional adaptations are known to impact negatively on patients' life quality by disrupting sleep and appetite, producing pain, increasing the incidence of mood disorders, and generating immunosuppression, anemia, fatigue, and multisystemic toxicity, especially in intensive or long-term protocols. In this context, there is a great deal of interest in palliative care [73]. Despite the general popularity of the topic, there is only a limited number of published studies regarding the palliative properties of cannabinoids, and they present significant methodological flaws. We will discuss some of these reports in more detail. More than 3000 cancer patients using medical cannabis were monitored for 2 years in Israel to assess its safety and efficacy [74]. Of these patients, 66% reported a substantial improvement in their health condition and life quality from the first month of use. Although the obtained results are encouraging, several limitations complicate their interpretation: (1) the medicament formulation was not of pharmaceutical purity grade but consisted of whole-plant oil extract or inflorescence, administered as flowers, capsules, or cigarettes; (2) data from all patients were combined and analyzed regardless of patient age, cancer type, and stage.
In another study, from the Mayo Clinic and published recently, patients, including cancer patients, used THC and CBD as palliative agents against pain, appetite loss, and insomnia [75]. In the majority (71%) of patients using CBD, these symptoms were alleviated. However, there were many uncontrolled variables in the CBD consumption, including concentrations (not reported), frequency of consumption (daily, weekly, or rarely), and methods of administration (vaping, spraying, pills, topical application). In the majority of trials so far, CBD/THC formulations with different ratios and purities were employed instead of pure CBD. It is worth mentioning here that consumer demand for CBD products has increased drastically during the last decade [76]. As a result of this rising demand, numerous CBD-containing products have appeared for online purchase. In a recently published study, eighty-four CBD products from 31 companies were analyzed for whole-spectrum cannabinoid content (CBD, THC, cannabinol, and cannabigerol, among others) using high-performance liquid chromatography [77]. CBD concentrations varied significantly, from 0.10 to 655 mg/mL, with only 31% of products accurately labeled. The rest of the products were either underlabeled (43%) or overlabeled (26%). Mislabeling occurred frequently in vaporization liquids. Importantly, THC was detected in 20% of samples, sometimes in concentrations sufficient to provoke intoxication. These findings indicate the urgency of manufacturing control and testing standards, to prevent inappropriate use. In this context, double-blind, placebo-controlled, randomized clinical trials to assess the use, efficacy, and safety of CBD in palliative care are now being conducted [4,78].

CBD in Chemotherapy-Induced Pain
It is estimated that around 70-90% of patients with advanced cancer experience pain during therapy due to therapy-induced damage to the peripheral nerves. There is an extensive search for strategies to limit the development of chemotherapy-induced neuropathic pain (CINP) or to relieve pain, in order to improve patients' life quality [74,79,80]. CBD has been demonstrated to exert analgesic effects in a murine model of CIS-induced allodynia [81]. Similar effects were also observed in cancer patients followed for up to 6 months of CBD consumption, with a significant reduction in pain caused by chemotherapy. Most of the patients (67%) stopped using analgesics or reduced the dosage [74]. There are at least 76 clinical trials, either completed or recruiting, that evaluate the benefits of CBD in pain management. Among them, 17% are focused on the analgesic properties of CBD in cancer patients (www.clinicaltrials.gov, accessed on 14 February 2022). Such trials employ CBD, either alone or in combination with other cannabinoids, in doses ranging from 2.5 mg to 40 mg, mostly via an oromucosal spray. Low (<25 mg) doses of CBD provoked analgesia, while higher doses caused no analgesia but secondary effects [82]. Patients with terminal cancer-related pain refractory to opioids experienced a decrease in pain severity within the first few weeks of CBD/THC consumption [83,84]. Similar results were obtained in another trial, in which more than 30% of patients reported a reduction in baseline pain [83].
Contrary to these findings, several independent double-blind, placebo-controlled phase 3 trials showed no significant difference between CBD/THC and placebo effects on pain management, although patients reported some improvement in their life quality (NCT01361607; NCT01424566) [85,86]. Based on data from in vitro and clinical trials, several mechanisms have been proposed for CBD-mediated analgesia, which include action through different cell membrane receptors, ion channels, and transporters, as well as intracellular enzyme targets [8]. However, there are only a few studies on tumor models. In a breast cancer xenograft, CBD (2.5-10 mg/kg) prevented the manifestations of CINP induced by paclitaxel, acting through the serotonin receptor 5-HT1A [64]. Importantly, CBD treatment also displayed synergism with paclitaxel against breast cancer cells [64]. In another work, CBD (0.625-20 mg/kg) was shown to attenuate CINP induced by paclitaxel or oxaliplatin, but not by vincristine [87]. Additional experiments are still needed to confirm the analgesic effects of CBD on chemotherapy-induced neuropathic pain and to reveal the underlying mechanisms.

CBD for Healthy Cells' Protection
Several anticancer agents are toxic to healthy cells, especially when the drugs accumulate in certain organs. For example, it is well known that CIS promotes acute renal failure (ARF) in a dose-dependent manner in approximately one third of patients [88,89]. CIS is differentially absorbed by the medullar and cortical sections of the kidney, inducing apoptosis and necrosis in these tissues. Several mechanisms have been implicated in CIS-mediated nephrotoxicity; thus, drugs limiting such mechanisms have emerged as renoprotective agents [90]. In an ARF mouse model, pre-administration of CBD (10 mg/kg/day) significantly attenuated the renal damage induced by CIS [89]. Additionally, CBD has been shown to potentiate CIS activity in different cancer types [62,66]. Thus, CBD may be considered a promising renoprotector against CIS-induced renal failure. Another effective chemotherapeutic drug, DOX, may provoke cardiotoxicity when accumulated. For cancer patients who develop DOX-induced cardiomyopathy, the prognosis is poor [91,92]. In mice with DOX-induced cardiomyopathy, CBD (10 mg/kg i.p., administered for 5 days) reduced the markers of ARF and cardiac injury [93]. The cardioprotective effects of CBD were attributed to a reduction in oxidative/nitrative stress and cell death, and to improved mitochondrial function and biogenesis. From a therapeutic point of view, CBD usage in cancer patients under regimens including DOX is encouraged, considering that CBD also potentiates the cytotoxic effects of DOX (Table 1), allowing the adjustment of DOX doses and limiting its cardiotoxicity.

CBD against Opportunistic Infections
Cancer patients undergoing chemotherapy are at high risk of opportunistic infections. It is estimated that 30% of cancer patients with non-hematological tumors and up to 85% of patients with acute leukemia develop life-threatening infections. Some chemotherapeuticals, such as CPA, cause immunosuppression by altering hematopoiesis, affecting the total white blood cell count, and generating neutropenia [45,94,95]. Chemotherapy and surgical or diagnostic procedures can also disrupt anatomic barriers, facilitating infection. To overcome these complications, the concurrent use of antimicrobial agents and of growth factors to restore hematopoiesis is being considered [96].
In this regard, the ability of CBD to influence hematopoiesis has been observed. For example, in orthotopic mouse models of ependymoma and medulloblastoma, CBD (50 mg/kg p.o.) was able to reverse the hematopoietic toxicity caused by CPA treatment, as measured by an increase in leukocyte and neutrophil counts. However, the survival rate of animals in this model was not improved, despite the fact that, in experiments on medulloblastoma and ependymoma cell lines performed in vitro, CBD enhanced the cytotoxic effects of CPA (Table 1) [45]. Notably, an increase in the total number of white blood cells, lymphocytes, monocytes, and neutrophils was also seen in cannabis consumers [97].

Several studies have evidenced the marked antimicrobial activity of CBD. In particular, CBD was effective against various species of Gram-positive bacteria, including Staphylococcus spp., Listeria spp., Enterococcus spp., and Bacillus spp., with minimum inhibitory concentrations (MIC) in the range of 1-4 µg/mL [98][99][100][101][102]. CBD also potentiated the effect of bacitracin [101]. Importantly, it was highly efficient against many resistant Gram-positive strains [102]. Although the majority of Gram-negative species are significantly less sensitive to CBD (MIC > 60 µg/mL), some "urgent threat" pathogens, such as Neisseria gonorrhoeae, Neisseria meningitidis, and Legionella pneumophila, showed high sensitivity, with MICs around 1 µg/mL [102]. In addition to its bactericidal properties, CBD protects the mucous membranes and limits susceptibility to infections owing to its antisecretory, antioxidant, anti-inflammatory, and vasodilatory properties [103][104][105][106].

CBD in Anorexia-Cachexia Syndrome
Up to 80% of cancer patients undergo a wasting syndrome characterized by vomiting, anorexia, asthenia, and anemia [107,108]. The resulting cancer cachexia (CCA), together with immunosuppression, increases their susceptibility to infections, limits chemotherapy's effectiveness, and increases the risk of eventual organ failure. Therefore, cancer patients are strongly encouraged to adopt strategies that promote appetite increase, weight gain, and immunity recovery. Advanced cancer patients treated with CBD (2.5 mg p.o.) or CBD/THC blends showed improved appetite compared with the placebo group [84]. Another controlled study confirmed weight gain in cancer patients receiving CBD of pharmaceutical grade (20 mg daily, p.o.) [109]. Interventional phase 2/1 clinical trials have been proposed in order to evaluate the effects of CBD on emesis, cachexia, and appetite alterations by estimating the body mass index, nausea, taste alteration, energy intake, and lean body mass in cancer patients under chemotherapy (NCT03245658; NCT04585841; NCT04482244; NCT02675842). Collectively, the available data suggest that CBD can improve the life quality of cancer patients under chemotherapy (Figure 2) and call for further extended clinical trials of CBD as a potential palliative care agent.

Evidence of Anticancer Activity of CBD from Clinical Trials and Case Reports
Although numerous pre-clinical studies have demonstrated the anticancer activity of CBD (Sections 2 and 3), objective clinical evidence is still very scarce. A comprehensive review of pre-clinical and clinical reports concerning the anticancer activity of cannabinoids, including CBD, was performed and published recently [110].
In this work, the data available in the PubMed and EBSCO databases, congress presentations, books, and clinical trials registered at the ClinicalTrials.gov website were analyzed. Among them, 77 publications of case reports with various types of cancers were revealed and classified as weak (81%), moderate (5%), or strong (14%). Accordingly, the cases were considered strong or moderate when they met the following criteria: (a) patients presented an active form of cancer at the time of cannabinoid application and (b) clinically validated laboratory documentation about clinical response and improvement was available. In strong cases, cannabinoids were utilized without a concurrent therapy, whereas in moderate cases, anticancer therapies were executed in parallel. In our opinion, the latter combined approach is more pertinent than CBD monotherapy. In clinical trials reported by Kenyon and colleagues, pharmaceutical-grade synthetic CBD (STI Pharmaceuticals) was tested on 119 patients with advanced cancer of different types, including breast, prostate, and colorectal cancers, non-Hodgkin's lymphoma, and glioblastoma [109]. Patients were given 10-30 mg of CBD (depending on tumor mass) twice per day, on a "three days on/three days off" basis. Favorable clinical responses were observed in 92% of patients, evidenced by a reduction in tumor size (repeated scans) and a decrease in circulating tumor cells. Positive dynamics were observed in patients treated with CBD both alone and in combination with a standard therapy [109]. Importantly, the authors reported the case of a glioma patient where improvement was observed only when taking synthetic CBD of pharmaceutical grade, but not cannabis oil extract [109]. It should be noted here that clinical researchers, physicians, and the FDA have expressed concern that many patients use a variety of cannabis oils or whole-plant extracts of questionable quality (not of pharmaceutical grade) in self-prescribed dosages, which may be ineffective or even harmful for patients [109,111]. Thus, the following important issues should be addressed on the path toward the use of medical CBD for cancer patients: (1) CBD formulations and administration methods to reach the desirable cytotoxic effect specifically in the cancer tissue or favorable effects in palliative care; (2) possible side effects for specific CBD formulations and concentrations administered by any specific route.

CBD Tolerability, Toxicity, and Adverse Effects

CBD's toxicity against numerous cancer cell lines has been identified, as previously discussed (Section 2). Although healthy cells have been reported to be less sensitive, the causes and mechanisms of the differential sensitivity of cancer and healthy cells to CBD toxicity are still unclear. Moreover, CBD may target a variety of surface and intracellular molecules (receptors, ion channels/transporters, enzymes) and trigger multiple signaling pathways present in both cancer and healthy cells.
Taken together, these facts raise safety and side effect issues. According to traditional protocols, drug toxicity is first tested in pre-clinical animal models. Pre-clinical studies carried out on animal models have reported acute and chronic adverse effects of CBD on different organs and systems (Table A2) [112-122]. There are several highly recommended comprehensive reviews, which critically analyzed the CBD safety and toxicity experiments carried out in animal pre-clinical and human clinical trials [53,123-127]. The following important observations should be mentioned: (1) regarding the administration route, in most human trials, CBD was administered orally or by inhalation, whereas predominantly intraperitoneal (i.p.) and intravenous (i.v.) injections, and sometimes the oral route, were used in animals; (2) CBD pharmacokinetics and molecular targets seem to differ between humans and rodents; these differences should be taken into consideration when extrapolating results obtained in pre-clinical models to humans; (3) regarding the composition, in numerous CBD toxicity reports in humans, patients consumed not pure CBD but different CBD preparations of unknown concentration and uncertain composition. Many preparations marketed as CBD also contain variable quantities of THC [77]. Since the toxicity profiles and side effects caused by THC and CBD are different and THC seems to be more toxic [126], the data obtained in these studies are misleading, reporting the net effect of THC, CBD, and their interaction. Drug-drug interactions represent a very important issue in the case of CBD, because it targets enzymes implicated in drug metabolism and excretion [8]. Thus, it may prolong the presence and increase the toxicity of co-administered drugs. Taking all the aforementioned factors into consideration, we will restrict ourselves to the most prominent and reliable data concerning the toxicity and adverse effects of CBD. Obviously, CBD's tolerability depends on the dose, frequency, route of administration, and treatment duration. CBD is usually well tolerated during acute and short-lasting treatment at moderate doses. At a range of 3-30 mg/kg (i.p.) or 0.1-30 mg/kg (i.v.), CBD did not change the heart rate, blood pressure, gastrointestinal (GI) transit, respiration, biochemical blood parameters, or hematocrit in rodents [53]. In piglets, CBD doses of 10 mg/kg (i.v.) were well tolerated, whereas higher doses (50 mg/kg) in some cases caused hypotension and cardiac arrest [116,125]. In rhesus monkeys, high CBD doses of 150-300 mg/kg (i.v.) caused acute CNS toxicity (tremor, sedation, and prostration) within 30 min of injection, whereas prolonged treatment for 9 days elicited bradycardia, hypopnea, cardiac failure, liver weight increase, and inhibition of spermatogenesis [113,125]. In the same model (rhesus monkeys), chronic oral CBD application (30-300 mg/kg/day, 90 days) caused systemic negative effects on the liver, heart, kidneys, and thyroid, and inhibited spermatogenesis [113,124,125]. Negative effects of chronic CBD on embryonic development were reported in rats when relatively high doses (75-250 mg/kg/day) were administered orally during pregnancy, which included developmental toxicity, decreased fetal body weight, increased fetal structural variations, and embryofetal mortality [125]. Clinical reports in humans are scarce and, obviously, are limited to low and moderate doses.
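To put such animal doses into rough perspective against human dosing, a common back-of-the-envelope conversion is the FDA body-surface-area method for estimating a human equivalent dose (HED). This is our illustration, not a calculation performed in the cited studies; the Km factors are the standard published defaults and the function name is ours.

```python
# Illustrative only: FDA body-surface-area scaling, HED = dose * Km_animal / Km_human.
# Km factors (standard defaults): mouse 3, rat 6, rabbit 12, monkey 12, human 37.
KM = {"mouse": 3, "rat": 6, "rabbit": 12, "monkey": 12, "human": 37}

def human_equivalent_dose(dose_mg_per_kg: float, species: str) -> float:
    """Approximate human equivalent dose (mg/kg) for a dose given to 'species'."""
    return dose_mg_per_kg * KM[species] / KM["human"]

# The 30-300 mg/kg/day chronic oral range reported for rhesus monkeys maps to
# roughly 10-97 mg/kg/day in humans under this approximation.
print(human_equivalent_dose(30, "monkey"))   # ~9.7 mg/kg/day
print(human_equivalent_dose(300, "monkey"))  # ~97.3 mg/kg/day
```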
No disturbances in physiological parameters or psychomotor functions were observed in clinical CBD trials after oral administration (15-160 mg), i.v. injection (5-30 mg), or inhalation (0.15 mg/kg) [53]. No side effects were observed during the prolonged CBD treatment of cancer patients (up to 60 mg daily, orally, for up to 6 months) [109]. Most of the reliable clinical trials (i.e., double-blind, randomized, placebo-controlled) were performed on patients (children and adults) suffering from treatment-resistant epilepsy or schizophrenia, or related neurologic and psychotic disorders. The CBD dose range utilized in these trials was usually from 0.5 to 50 mg/kg/day, or from 200 to 1000 mg/day in psychiatric studies. When CBD was administered orally (25-50 mg/kg/day) for an extended period (weeks), moderate adverse effects included somnolence and fatigue, sleep disorders, diarrhea and GI intolerance, and respiratory complications; pneumonia, thrombocytopenia, and liver and blood abnormalities were also reported [125,128-131]. Pyrexia was relatively common in children with Dravet or Lennox-Gastaut syndrome during 3- or 4-week treatment trials with doses of 5-20 mg/kg/day administered orally [125,128,129]. Since CBD is suggested for inclusion in combined anticancer chemotherapy protocols, CBD's hepatotoxicity, which can cause changes in drug metabolism, is an issue of special importance. A hepatotoxic effect was documented in pre-clinical and clinical studies when relatively high CBD doses were administered for a prolonged time [53,123-127]. As revealed by a randomized, double-blind trial that included 171 patients, hepatocellular injury represents the most frequent adverse effect, so it was recommended to test serum transaminases and total bilirubin levels in all patients prior to starting treatment with Epidiolex®, which is CBD in an oral solution [132,133]. Importantly, CBD targets the cytochrome P450 system and is metabolized by CYP3A4 and CYP2C19 in human liver microsomes (HLMs), giving rise to 6α-OH-, 6β-OH-, 7-OH-, and 4″-OH-CBDs [134]. A female patient treated for 6 years with tamoxifen, and additionally with CBD, which inhibited CYP3A4/5 and CYP2D6, presented a consequent reduction in N-desmethyltamoxifen and the active metabolite endoxifen [135]. In cancer patients, especially those with liver diseases or a poor metabolic profile, the possible effects of CBD on cytochromes P450, which in turn can affect the pharmacokinetics of conventional anticancer drugs, need to be considered.

Concerning Better CBD Delivery for Cancer Therapy

Satisfactory delivery of anticancer therapeutics should provide their efficient accumulation in the target cancer tissue, with minimal systemic side effects on other organs. CBD is a highly lipophilic compound, which is poorly soluble in aqueous solutions and highly sensitive to light, temperature, and oxidation, which underlies its relatively low bioavailability [136]. When administered orally, CBD can precipitate in the GI tract, resulting in poor GI permeability. It then undergoes first-pass metabolism by liver and gut enzymes and is predominantly excreted through the kidneys [136,137]. As a result of first-pass metabolism, the oral CBD bioavailability is estimated to be between 5% and 19% [136,137]. Variable pharmacokinetic profiles were reported, depending on the means of CBD administration.
These include the more traditional and better-studied oral/mucosal, inhalation, and smoking routes, and the less explored intravenous route [138].

Free CBD Delivery

To date, the only CBD formulation approved by the FDA for the treatment of rare forms of epilepsy is Epidiolex®, CBD in an oral solution (100 mg/mL), with a maximum recommended dose of 20 mg/kg/day. Currently, there are numerous clinical trials of CBD for the treatment of different disorders, including palliative care in cancers, where CBD is delivered predominantly as an oil solution, orally, or via inhalation (https://clinicaltrials.gov/ct2/results?cond=&term=cannabidiol&cntry=&state=&city=&dist=, accessed on 14 February 2022). As discussed in Section 2, a relatively broad range of CBD concentrations was tested in studies in vitro to prove its anticancer properties. Significant variations in experimental models and culture conditions complicated a comparative analysis. Considering cell cultures supplemented with serum as a better approximation of physiological conditions, effective concentrations were in the µM range. When CBD was administered orally in humans (20 mg), its maximal plasma concentration, achieved at 3 h, was in the range of 7.9-19.1 ng/mL (i.e., approximately 25-60 nM), with better bioavailability in women (Table 2) [139]. A novel self-emulsifying drug delivery system (SEDDS) was proposed recently to improve the oral CBD bioavailability. This resulted in 2-4-fold higher plasma CBD concentrations when compared to oral/mucosal administration, with a smaller gender difference (Table 2) [139-141]. When CBD was administered by inhalation or smoking, the maximal plasma CBD levels were in the nM range and then dropped stepwise (Table 2) [142,143]. The first trial in adult humans to compare single and multiple oral delivery was undertaken recently [144]. The single oral dose was administered in the range of 1500-6000 mg, which is comparable to or higher than the doses recommended for Epidiolex®. The maximal plasma CBD concentration, reached at 3-5 h after administration, was 292.4 ± 87.9 ng/mL (approx. 1 µM) and 782 ± 83 ng/mL (approx. 2.5 µM) for 1500 and 6000 mg, respectively, and then it dropped significantly. When CBD was administered twice per day (2 × 1500 mg) over an extended period of 7 days, a steady-state plasma level was reached at 2 days, and on day 7 the maximal concentration was 541 ng/mL (approx. 1.7 µM). Intravenous CBD injection is an alternative delivery method, which prevents GI degradation and has demonstrated better bioavailability. It was tested and compared with other delivery methods in studies in humans and mice (Table 2) [142,143,145]. Intravenous administration resulted in higher CBD plasma levels than oral administration [145], smoking [142], or inhalation [143] (Table 2). In healthy volunteers, the injection of a 20 mg dose resulted in a rapid rise in the plasma concentration, ranging from 358 to 972 ng/mL (1-3 µM), which was approximately five times higher than by smoking [142]. Although these concentrations are close to the cytotoxicity range reported for some tumors (discussed in Section 2), plasma CBD levels dropped drastically within 1 h of administration [142]. Similar results were obtained in a murine model, with an immediate rise in plasma concentration to 3000 ng/mL (approx. 10 µM) when 10 mg/kg was injected, followed by a rapid (within 1 h) tenfold drop [145].
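Since the studies above quote plasma levels interchangeably in ng/mL and in molar units, the conversion is worth making explicit. The following minimal sketch assumes CBD's molar mass of ~314.5 g/mol; the function name is ours.

```python
CBD_MOLAR_MASS = 314.46  # g/mol

def ng_per_ml_to_micromolar(conc_ng_per_ml: float) -> float:
    """Convert a plasma CBD concentration from ng/mL to µM."""
    # ng/mL equals µg/L, so dividing by g/mol gives µmol/L (µM).
    return conc_ng_per_ml / CBD_MOLAR_MASS

print(ng_per_ml_to_micromolar(292.4))  # ~0.93 µM (single 1500 mg oral dose)
print(ng_per_ml_to_micromolar(782.0))  # ~2.49 µM (single 6000 mg oral dose)
```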
Thus, any administration route of free CBD resulted in a transient rise in the plasma drug level, where only the maximal levels are comparable to cytotoxic concentrations. Importantly, bioavailability in cancer tissue is expected to be significantly lower than in plasma and highly variable, depending on the cancer type, tumor size, geometry, and vascularization. On the other hand, the achieved plasma concentrations are sufficient to cause undesirable side effects (Section 6). Thus, increasing the dose of pure CBD by any administration method should not be considered an appropriate strategy for CBD delivery in cancer treatment. Instead, alternative formulations, aimed at increasing CBD's stability and its specific targeting to the cancer tissue, should be developed.

Nanotechnology May Improve CBD Delivery for Cancer Therapy: General Considerations and Experimental Evidence

Multiple nanoformulations have been proposed to overcome the delivery challenges of hydrophobic, unstable drugs such as CBD. There are several excellent comprehensive reviews discussing in detail the best approaches to the design of nanocarriers (NC) for cancer therapeutics [146-148]. Various important criteria should be taken into consideration. NC should be composed of biocompatible, nontoxic, and non-immunogenic materials. According to their chemical structure, NC can be categorized into different groups, such as inorganic, polymeric, liposomes, nanomicelles, etc. In inorganic nanoparticles, the core is composed of a metal or metal oxide (silver or gold are frequently used). Polymeric NC are produced by conjugating several polymers with desirable characteristics. Liposomes are nanoparticles with an aqueous interior, surrounded by one or more concentric bilayers of amphipathic lipids (e.g., phospholipids). The design of such NC can be tailored to therapeutic requirements. Their diameter normally ranges from 1 nm to several µm. Consequently, such liposomes can be distributed in the bloodstream (the smallest capillary diameter is approximately 5-6 µm) and accumulate in the target tumors. The ultrafilterable range of less than 200 nm provides the possibility of sterilization by filtration. Covalent linkage of NC to polyethylene glycol (PEG), so-called PEGylation, significantly decreases their immunogenicity. Moreover, such a modification changes the physicochemical and hydrodynamic properties, which results in a prolonged circulation time and reduced renal clearance [149]. NC readily incorporate drug molecules and form a barrier around therapeutic agents, preventing premature drug interaction with body fluids and immune cells before delivery to the target site. A precise design, which takes into consideration the material, size, and shape of the NC, may provide drug release in a controlled and predictable fashion. This approach is also useful for the delivery of two or more drugs simultaneously, which can be very useful for cancer treatment, considering multi-drug chemotherapeutic protocols. Moreover, the nature of the core molecules may make it possible to combine both hydrophobic and hydrophilic drugs at the same time. In liposomes, hydrophobic drugs are incorporated into the lipid membrane, whereas hydrophilic compounds are present within the central aqueous cavity. Target-specific drug delivery can significantly decrease side effects and increase the therapeutic index of encapsulated drugs.
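As a quick numeric illustration of the size criteria named above (the thresholds come from the text; the function itself is only a sketch, not part of any cited study):

```python
def nanocarrier_size_notes(diameter_nm: float) -> list[str]:
    """Check a candidate carrier diameter against the size criteria in the text."""
    notes = []
    if diameter_nm < 200:
        notes.append("below 200 nm: sterilizable by ultrafiltration")
    if diameter_nm < 5000:  # smallest capillaries are ~5-6 µm wide
        notes.append("small enough to circulate freely in the bloodstream")
    return notes

print(nanocarrier_size_notes(100))    # a typical 100 nm PEGylated particle meets both
print(nanocarrier_size_notes(25000))  # a 25 µm microparticle meets neither
```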
Passive and active targeting of nanoparticles can be used for cancer therapy. Passive targeting is possible due to the phenomenon known as the enhanced permeability and retention (EPR) effect in solid tumors [147,150-153]. In rapidly growing tumor tissue, characterized by the overexpression of vascular endothelial growth factor (VEGF), the microvasculature shows chaotic ramification with enhanced endothelial porosity or fenestration, in contrast to the tighter endothelial structures of normal capillaries. As a result of the changed cytoarchitecture, the blood flow is slower, and, due to the high porosity, tumor capillaries are leaky. Both these factors ensure the retention of enlarged particles, such as NC, in tumors. In hematological malignancies, the bone marrow (BM) leukemic niche is the target tissue. Blood vessels supplying the BM (sinusoids) possess fenestrations and are semipermeable, providing favorable conditions for the accumulation of NC [154]. At the same time, the EPR effect was reported to provide a relatively modest, twofold enhancement of nanodrug retention in tumor tissues when compared with healthy organs [155]. The surface of NC can be modified to improve their targeting to tumors. A variety of ligands/antibodies to specific antigens expressed by cancer cells can be proposed for NC surface engineering [146]. Dual-action CXCR4-targeting liposomes were developed and proposed for drug delivery and the simultaneous blockage of the CXCR4/CXCL12 axis for leukemia treatment [156]. HER2-targeted liposomes accumulated in the tumor tissue of patients with HER2-positive breast cancer [157]. The RGD (arginyl-glycyl-aspartic acid) motif was proposed to target integrins on tumor cells [158]. Anionic liposomes were shown to accumulate in the BM and were then predominantly adsorbed by leukemic cells [154]. Hyaluronic acid, which shows a high binding affinity for the CD44 adhesion molecule, present at enhanced levels in a variety of tumors, was also proposed for NC modification [159,160]. Experimental trials of novel delivery methods for CBD in cancer therapy are still scarce but have demonstrated promising results (Table 3) [161-168]. Gold PEGylated nanodrones were proposed recently to target lung cancer with cannabinoids and radiosensitizers [161]. The efficiency of two administration routes, inhalation and intravenous, was tested in transgenic mouse models bearing lung adenocarcinoma. The particle size (100 nm) was optimized to ensure an increased circulation time and efficient tumor uptake. Additionally, the drones were functionalized with the RGD motif to target integrin receptors on the surface of lung tumor cells. Both administration routes provided efficient nanodrone penetration into the tumor tissue, but the inhalation route was more promising for this tumor type. CBD was proposed to be conjugated to the amine groups present on the PEG; however, CBD-conjugated drones have not been tested yet. The efficiency of a micellar delivery system for targeting cannabinoids to cancer tissue was tested in a murine model of triple-negative breast cancer [162]. In this case, micelles were loaded with the synthetic cannabinoid WIN55,212-2. The average micelle size was 152 nm, ensuring their accumulation in the tumor via the EPR effect. WIN, conjugated to the micellar system, efficiently inhibited tumor growth. Remarkably, predominant micelle accumulation in the tumor was demonstrated, indicating the viability of the micellar system for use with cannabinoids.
CBD-loaded poly-ε-caprolactone microparticles, as an alternative delivery system for long-term CBD administration, demonstrated their efficiency in inhibiting glioblastoma growth and tumor angiogenesis in a murine xenograft model [163]. More recently, poly(lactic-co-glycolic acid) (PLGA) microparticles loaded with CBD were tested for their potential to improve the conventional chemotherapy of breast and ovarian cancers [164,165]. PLGA is approved by the FDA for use in parenteral release systems. The mean particle size was around 25 µm, with a high entrapment efficiency in the tumor tissue. Particles were sterilized by gamma irradiation (25 kGy). Since sterilization accelerates polymer erosion, a CBD:polymer ratio of 10:100 was selected to ensure a durable release profile. Remarkably, a single administration of this formulation ensured antitumor activity in vitro for at least 10 days. CBD-loaded microparticles were effective as a monotherapy, but synergism with DEX (breast cancer) and paclitaxel (breast and ovarian cancer) allowed a more pronounced effect with a single administration. However, a particle size in the µm range is not suitable for intravenous injection, because only particles smaller than 5 µm can freely circulate in the bloodstream and reach the tumor site. Subsequently, PLGA CBD-loaded nanocarriers for i.p. administration in ovarian cancer treatment were developed, which demonstrated improved CBD stability, long-lasting release, internalization by cancer cells, and anticancer efficiency [165]. Drug delivery to brain malignancies such as glioma/glioblastoma is restricted by the blood-brain barrier (BBB). Aparicio-Blanco and colleagues proposed an original strategy of non-immunologic BBB targeting using NC decorated (functionalized) with CBD [166,167]. They elaborated small lipid nanoparticles with a size range of 10-100 nm, carrying CBD on their surface, which were able to pass through the BBB. CBD-decorated particles were suggested to target the brain endothelium, which expresses different surface molecules able to bind CBD, namely the CB1 receptor, the G-protein-coupled receptor 55 (GPR55), and serotonin (5-HT) receptors. After transcytosis across the brain endothelium, these particles were expected to target glioma cells overexpressing CB1/2 receptors. Since CBD was reported to be cytotoxic for glioma, the lipid nanoparticles were also loaded with CBD and tested as prolonged-release carriers for glioma therapy [166,167]. This strategy was demonstrated to enhance glioma targeting, and a combination of CBD loading with CBD functionalization significantly reduced the IC50 values. CBD decoration was confirmed to enhance the passage of lipid nanoparticles across the BBB both in vitro (human brain endothelial hCMEC/D3 cells) and in vivo (mouse glioma xenograft models). An RGD proteinoid polymer was synthesized and used to encapsulate CBD [168]. The resulting nanoparticles inhibited tumor growth in xenograft mouse models of colorectal and breast cancer and were proposed for further trials. The possibility of delivering two or more drugs simultaneously with nanocarriers is of special interest for the inclusion of CBD in chemotherapeutic protocols, taking into account the fact that CBD improves the effect of various anticancer drugs (Section 3). Importantly, several anticancer drugs are already used clinically in liposomal formulations in chemotherapeutic protocols [169].
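For orientation, if the 10:100 CBD:polymer ratio used for the PLGA microparticles above is read as a mass ratio (our assumption), the theoretical drug loading follows directly:

\[
\text{drug loading} = \frac{m_{\mathrm{CBD}}}{m_{\mathrm{CBD}} + m_{\mathrm{polymer}}} = \frac{10}{10 + 100} \approx 9.1\% \ (w/w).
\]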
Regulation Issues

Although CBD lacks any psychotropic effect, its public and clinical usage falls under the general regulations applied to cannabis-derived products. Even though the difference between non-psychotropic components and THC is understood, recent circulars released by the U.S. Food and Drug Administration [170,171] strive to thoroughly evaluate the purity of cannabis products containing CBD and its derivatives and to inform the public about the risks and unknowns of these products. The current trend in several countries, including the U.S., Mexico, Canada, and Uruguay, to name those in the western hemisphere, is the decriminalization of the use of cannabis products. In 31 out of 45 European countries, CBD is legal or within a grey legal zone (https://www.legalreader.com/cbd-in-europe-legalstatus-of-cbd-country-by-country/, accessed on 10 March 2022). However, the regulations differ from country to country and even between different states. In Mexico, an initiative was launched in 2020 to differentiate between marijuana and non-psychoactive cannabis, and the respective modifications were made in the General Health Law and Federal Penal Code, approved by the Chamber of Deputies in 2021. Medicinal, palliative, pharmaceutical/cosmetic, and scientific uses for said purposes will be regulated by the provisions of the General Health Law and other applicable regulations. The Federal Commission for the Protection against Sanitary Risks (COFEPRIS) has publicly reiterated its "open door" policy to receive and guide all people and organizations interested in the medicinal, personal, and recreational use of cannabis. However, such requests are currently attended to individually, and there is still a long way to go before they no longer have to be evaluated on a case-by-case basis or by court order, but instead under general and clear guidelines and a regulatory framework that defines the therapeutic indications for which cannabis-derived products may be prescribed, as well as the process to evaluate their quality, safety, and efficacy, in a similar way as occurs for other medicines. Therefore, researchers and clinicians who seek to employ CBD for anticancer treatments are strongly advised to consult the current status of the respective regulations in their area.

General Conclusions and Further Considerations

The anticancer properties of CBD against cancer cells of different histogenesis have been demonstrated in numerous pre-clinical in vitro studies (Section 2, Figure 1, Table A1). For many, but not all, cancer types, anticancer effects were also confirmed in animal models (Table A1, [6]). The synergistic effect of CBD with conventional anticancer drugs encourages the inclusion of CBD in conventional chemotherapeutic protocols (Section 3, Table 1). The urgent need for clinical trials in developing CBD as an anticancer drug has been proclaimed [6]. We provide here a suggested flowchart for the translation of CBD's anticancer activity from the lab to clinical trials and clinical use in anticancer treatments (Figure 3). CBD acts through various molecular targets and triggers multiple signaling pathways simultaneously, so that the precise cytotoxic mechanism(s) for every cancer type are still to be revealed. Many studies evidenced lower, if any, cytotoxicity of CBD against healthy tissues, but the cause of the differential sensitivity of healthy and cancer tissues is still unclear.
Moreover, the specific microenvironment may protect cancer cells from drug-induced damage. Thus, experiments in pre-clinical in vitro models with a close approximation of the cancer microenvironment, namely 2D and 3D co-culture with stromal cells and cancer organoids, are very desirable. The anticancer effects of CBD are often observed at relatively high concentrations of pure CBD added to cell culture or injected into animals, which can cause adverse effects, especially under long-lasting treatments (Sections 6 and 7, Tables 2 and A2). On the other hand, low CBD concentrations may even promote cancer cell proliferation [25]. Thus, new CBD formulations for targeted cancer treatments are required. NC of different designs represent a promising approach for the controlled simultaneous delivery of CBD in combination with conventional chemotherapeutics. Several nanoformulations have been designed and their effectiveness proven in pre-clinical models, but such studies are still very scarce (Section 7, Table 3). Obviously, every new CBD formulation requires a range of pre-clinical studies in animals, which includes the evaluation of optimal administration routes, pharmacokinetics/pharmacodynamics, biodistribution, tissue and cancer cell specificity, stability, and safety. To confirm the anticancer efficacy of new formulations, cancer-specific pre-clinical models will be required, which may include chemically or genetically induced animal models, tumor allografts, and xenografts of human tumors in immunodeficient mice. Taking into consideration the high heterogeneity of cancer clones, experiments with patient-derived cancer tissue/cells are very desirable at this phase, to confirm the efficiency of CBD against specific cancer types. After the satisfactory completion of all these pre-clinical studies, double-blind, randomized, placebo-controlled clinical studies could be performed. The combined use of CBD as both an antitumor and a palliative agent is very attractive. Such an approach may be complicated by the fact that the effective concentrations, formulations, and administration routes are likely to differ for these two purposes. The observed synergism of CBD with conventional anticancer drugs can decrease the effective drug and CBD concentrations, thus optimizing the treatment (Section 3). Importantly, the quality of CBD products should be controlled and self-medication should be discouraged, to prevent inappropriate use.
Introducing site-specific cysteines into nanobodies for mercury labelling allows de novo phasing of their crystal structures

Nanobodies are used as crystallization chaperones, and here site-specific mercury labelling of nanobodies is shown to be a new tool for phasing.

Introduction

The production of well diffracting protein crystals is a major challenge in macromolecular X-ray crystallography. Large multi-domain proteins and membrane proteins are inherently difficult to crystallize owing to conformational heterogeneity and the lack of suitable surface chemistry that allows the formation of a crystal lattice. Crystallization chaperones are auxiliary proteins that increase the chance of crystallization by reducing conformational flexibility and providing well ordered surfaces to form crystal lattice contacts. Monoclonal antibody Fab fragments derived from IgG are the most widely used chaperones (Uysal et al., 2009) and may simultaneously provide phase information for structure determination by molecular replacement. Several alternative chaperones have been developed, including DARPins, single-chain variable fragments and nanobodies (Nbs; Pardon et al., 2014). Nbs are derived from natural llama heavy-chain antibodies that are devoid of a light chain and in which the heavy-chain variable domain (VHH) exclusively mediates the interaction with the antigen (Muyldermans, 2013). The VHH domain is structurally similar to the IgG VH domain, with three complementarity-determining regions (CDRs) that are responsible for antigen binding. In contrast to Fab fragments, which must be expressed in mammalian or insect cells, Nbs are easy to express and manipulate in Escherichia coli. As a result of their favourable characteristics, Nbs are also increasingly being used in imaging, where they can be labelled with GFP or fluorescent dyes (Chakravarty et al., 2014; Rothbauer et al., 2006). Covalent attachment of fluorescent dyes to Nbs has proven to be effective using NHS ester, isothiocyanate or maleimide functional groups, with maleimide labelling being superior to NHS ester dyes when comparing background staining in imaging of permeabilized cells (Pleiner et al., 2015; Röder et al., 2017). NHS esters and isothiocyanates react readily with N-terminal and lysine amines, while maleimide reacts specifically with cysteine thiols in the pH range 6.5-7.5. For crystallization purposes, the antigen can be screened in complex with each identified Nb or in combination with more than one Nb to increase the probability of crystallization (Zhang et al., 2015). However, owing to the small size of Nbs, the phase information obtained from a single Nb by molecular replacement is limited and may not be sufficient to provide initial phases for a Nb-complex structure. We rationalized that generating heavy-atom-labelled Nbs would allow one not only to utilize Nbs as crystallization chaperones but also to provide an easy approach for experimental phasing. Incorporation of selenomethionines is a successful route for introducing anomalous scatterers into proteins for subsequent experimental phasing, but this approach is very challenging for proteins expressed in eukaryotic cells and is not feasible for proteins purified from natural sources. Alternatively, denser atoms (for example mercury, gold and platinum) have successfully been used as phasing labels (Pike et al., 2016).
One strategy uses cysteines for site-specific mercury labelling and has helped to solve the structures of both soluble and membrane proteins (Doyle et al., 1996; Li et al., 2015). Hg is the most successful element when it comes to forming heavy-atom derivatives of proteins and their crystals, and cysteine side chains are also by far the most frequent binding or coordinating residues for Hg atoms (Sugahara et al., 2005). Hg has been extensively used for isomorphous replacement since the early days of protein crystallography, where it was used to determine the structure of haemoglobin (Green et al., 1954), and inclusion of the anomalous signal from Hg for SIRAS phasing was reported in 1977 (Wood et al., 1977). SAD phasing is now the standard approach for obtaining experimental phase information in macromolecular crystallography, since the majority of modern synchrotrons have beamlines with tunable wavelengths and the potential problem of non-isomorphism is avoided (Rose & Wang, 2016). The collection of anomalous data from a single well diffracting Hg-substituted crystal is a relatively simple task, whereas merging anomalous data from multiple small crystals is more challenging, and the anomalous signal is also even more sensitive to radiation damage than the overall reciprocal-space signal. Recently, SAD and SIRAS phasing based on data obtained by femtosecond serial crystallography was reported for the luciferin-regenerating enzyme, a protein of 308 residues with a single Hg-substituted cysteine (Yamashita et al., 2015). For the same protein, SAD phasing was also successful using serial synchrotron rotation crystallography (Hasegawa et al., 2017). Thus, very challenging structures for which only microcrystals are available may have their structures determined rapidly if Hg atoms are present in the crystals. Hence, a generic method for introducing Hg atoms into any crystal, independent of the presence of free cysteines in the target protein, could greatly facilitate the process of obtaining unbiased experimental phases. Disulfide bridges in extracellular proteins can, upon partial reduction, react with Hg²⁺ and thus have a Hg atom inserted between two cysteines connected by a disulfide bridge (Sperling et al., 1969), but this confers the risk of decreasing the stability of the protein and perturbing its structure. We describe the introduction of cysteine residues at conserved framework serine positions in a Nb specific for human complement component C5 and subsequent site-specific labelling with Hg derivatives and structure determination by SAD and SIRAS. Hg incorporation did not perturb the structure of the Nb or its antigen-binding capacity. We also show that the introduced cysteines can be labelled with Alexa Fluor 488, providing a generic method for generating fluorescent Nbs for molecular imaging.

Nanobody production

Human C5 and cobra venom factor (CVF) were purified as described previously (Laursen et al., 2010; Schatz-Jakobsen et al., 2016). One llama (Lama glama) was immunized with 500 µg human C5. Total RNA was isolated from peripheral blood lymphocytes using an RNase Plus Mini Kit (Qiagen) and cDNA was generated using a SuperScript III First-Strand Kit (Invitrogen) with random hexamer primers. Nb DNA sequences were amplified by PCR and inserted into a phagemid vector designed to express Nbs as pIII fusions. The M13 phage-display Nb library was generated using the VCSM13 helper phage.
For Nb selection, a microtitre plate well was coated with 1 µg C5 and was blocked after 12 h with PBS supplemented with 2% BSA. A total of 3 × 10¹² M13 phage particles were added and allowed to bind C5 for 1 h before 15 washing steps with PBS containing 0.1% Tween 20. The remaining phage particles binding to C5 at its CVF interface (Laursen et al., 2011) were eluted by adding 100 µl CVF at 1 mg ml⁻¹ for 1 h. The eluted phage particles were then added to E. coli ER2738 cells. The enriched library was amplified and used in a second round of phage display, but this time using only 0.1 µg C5 and 3 × 10¹² M13 phage particles. Phage particles were eluted at low pH by adding 100 µl 0.2 M glycine pH 2.2 for 15 min and then neutralized with 15 µl 1 M Tris pH 9.1 before being added to E. coli ER2738 cells. After two rounds of phage-display selection, single colonies were transferred to a 96-well plate format and grown for 6 h in LB medium before Nb expression was induced with 0.8 mM IPTG overnight at 30°C. The 96-well plate was centrifuged and 50 µl of the supernatant was transferred to an ELISA plate coated with 1 µg ml⁻¹ C5 in blocking solution (PBS with 0.1% Tween 20 and 2% BSA). The ELISA plate was then washed six times in PBS with 0.1% Tween 20 before anti-E-tag-HRP antibody (Bethyl) was added at a 1:10 000 dilution. The plate was washed and developed with 3,3′,5,5′-tetramethylbenzidine, the reaction was quenched with 1 M HCl and the plate was read at 450 nm. Phagemids from positive clones were isolated, sequenced and subcloned for bacterial expression. DNA encoding Nb36 was cloned into a pET-22b(+) expression vector and the cysteine mutants were generated using inverse PCR. The Nb constructs contained an N-terminal PelB signal for secretion into the periplasm and a C-terminal 6×His tag. All Nbs were expressed in E. coli LOBSTR cells (Andersen et al., 2013) grown to an optical density of ~0.6 before expression was induced with 0.2 mM IPTG at 18°C overnight. Cells were lysed in lysis buffer [PBS buffer supplemented with 400 mM NaCl and 20 mM imidazole, with an additional 5 mM β-mercaptoethanol (BME) for the cysteine mutants]. The cleared supernatant was loaded onto Ni Sepharose 6 FF affinity resin (GE Healthcare) and washed extensively before elution in lysis buffer supplemented with 400 mM imidazole. The Nbs were finally purified on a Superdex 75 10/300 gel-filtration column (GE Healthcare) in gel-filtration buffer [10 mM HEPES pH 7.6, 150 mM NaCl, with an additional 2 mM dithiothreitol (DTT) for the cysteine mutants] and concentrated before labelling and crystallization.

Site-specific mercury labelling

Immediately before labelling, the Nbs were transferred into nonreducing buffer (10 mM HEPES pH 7.6, 150 mM NaCl) using a PD-10 desalting column. Nbs were immediately mixed with a fivefold molar excess of para-chloromercuribenzoic acid (PCMB) or a tenfold molar excess of mercury(II) acetate and incubated on ice for 1 h. PCMB labelling was quenched after 1 h by the addition of a 100-fold molar excess of iodoacetamide. To quantify the PCMB labelling efficiency, the remaining free cysteines were reacted with a tenfold molar excess of MPEG-maleimide (Sigma-Aldrich, catalogue No. 99126-64-4) on ice for 1 h. The samples were analysed by nonreducing SDS-PAGE, and the ratio of Nb that had reacted with MPEG-maleimide to Nb that had reacted with PCMB was compared and quantified using ImageJ (Hartig, 2013).
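The band-intensity comparison can be summarized as follows; this is a hedged sketch of the calculation described above, with placeholder intensities standing in for ImageJ densitometry values.

```python
def pcmb_labelling_efficiency(i_pcmb_band: float, i_mpeg_band: float) -> float:
    """Fraction of Nb blocked by PCMB, from nonreducing SDS-PAGE band intensities.

    After PCMB treatment and an MPEG-maleimide chase, any unreacted cysteines
    shift ~10 kDa (MPEG band), while the PCMB-labelled pool stays at the
    monomer size; the unshifted fraction therefore reports PCMB labelling.
    """
    return i_pcmb_band / (i_pcmb_band + i_mpeg_band)

# e.g. an unshifted band of intensity 90 versus an MPEG-shifted band of 10
print(pcmb_labelling_efficiency(90.0, 10.0))  # 0.9, i.e. 90% labelled
```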
Site-specific fluorescent labelling

As for PCMB labelling, the Nbs were transferred into a nonreducing buffer (10 mM HEPES pH 7.6, 50 mM NaCl) using a PD-10 desalting column. Nbs were mixed with a 1.5-fold molar excess of Alexa Fluor 488 maleimide (Thermo Fisher) and incubated on ice for 1 h before quenching with a 100-fold molar excess of DTT. The samples were analysed by nonreducing SDS-PAGE and fluorescence detection of the resulting acrylamide gel on a Typhoon fluorescent scanner (GE Healthcare).

Crystallization

Before crystallization, PCMB-labelled monomeric Nbs were separated from a minor fraction of Nb dimers on a Mono S 5/50 cation-exchange column (GE) equilibrated in 20 mM sodium acetate pH 5.5. The pH and salt concentration of the sample were adjusted before loading by mixing the PCMB-labelled Nb with one volume of 40 mM sodium acetate pH 5.5. PCMB-labelled Nbs were eluted using a linear gradient from 250 to 500 mM NaCl. The buffer was exchanged into gel-filtration buffer on a spin filter (Vivaspin 500 centrifugal concentrators) and the protein was concentrated to 8-10 mg ml⁻¹. All crystals were grown in vapour-diffusion experiments by mixing equal volumes of protein and reservoir solutions. Crystals of nonmodified Nb36 (Nb36-Nat1 and Nb36-Nat2) were obtained by vapour diffusion against a reservoir solution consisting of 0.2 M ammonium sulfate, 0.1 M HEPES pH 7.5, 25% PEG 3350. Crystals of PCMB-derivatized Nb36-C85 were obtained by vapour diffusion against reservoirs containing either 0.1 M citric acid pH 3.5, 25% PEG 3350 (Nb36-C85-1) or 0.2 M sodium malonate pH 7.5, 20% PEG 3350 (Nb36-C85-2). Crystals were cryoprotected by transferring them stepwise into 35% PEG 3350 for Nb36-Nat1, Nb36-Nat2 and Nb36-C85-1, while Nb36-C85-2 crystals were transferred to mother liquor supplemented with 30% ethylene glycol, before being flash-cooled in liquid nitrogen.

Structure determination

The data were processed with XDS and XSCALE (Kabsch, 2010). SAD phasing of the two PCMB derivatives was performed independently with phenix.autosol (Terwilliger et al., 2009). Subsequent density modification was performed with phenix.autobuild (Terwilliger et al., 2008). Owing to the identification of multiple NCS operators in both cases, test-set reflections were selected in thin shells with phenix.refine (Afonine et al., 2012) prior to automated model building with either the buccaneer_pipeline (Cowtan, 2006) or phenix.autobuild. The resulting models were rebuilt in Coot (Emsley et al., 2010) in an iterative manner and refined with phenix.refine until convergence using NCS restraints. A distance restraint of 2.3 Å between the SG atom of Cys85 and a connected Hg atom was applied during refinement. A Nb molecule from the Nb36-C85-1 structure with the CDR regions deleted was used as a search model for molecular replacement with phenix.phaser (McCoy et al., 2007) into the Nb36-Nat1 data collected from an underivatized crystal on beamline I04 at Diamond Light Source (DLS). The model was iteratively rebuilt in Coot, refined with phenix.refine and then used for molecular replacement into the Nb36-Nat2 data collected from an underivatized crystal on BioMAX at MAX IV; it was completed by iterative rebuilding and refinement. The quality of all structures was analysed with MolProbity (Chen et al., 2010). Figures were prepared with PyMOL 1.8 (http://www.pymol.org).
Antigen-binding measurements

Binding of native Nb36 and Hg-labelled Nb36-C85 to C5 was measured on an Octet RED biolayer interferometer (Pall ForteBio) in PBS buffer. Histidine-tagged Nbs were immobilized on Anti-Penta-HIS biosensors (Pall ForteBio) at a concentration of 2.5 µg ml⁻¹, amounting to approximately 0.2 nm of saturation. Interaction with C5 was measured in a dilution series of antigen concentrations ranging from 62.5 to 2000 nM for 700 s. Subsequently, the dissociation was recorded for 1800 s. To account for baseline drift during the experiment, biosensors immobilized with Nbs and dipped into PBS without C5 were measured in parallel and subsequently subtracted. Sensorgrams were processed using ForteBio Data Analysis 7.0 (Pall ForteBio) and the data were globally fitted using nonlinear regression in GraphPad Prism 6 (GraphPad Software) with a goodness of fit (R²) of 0.99 for both native Nb36 and PCMB-labelled Nb36-C85.

Selecting Nb36 against complement component C5

To identify Nb36 targeting C5, we immunized a llama and performed two rounds of phage display on the derived library. Finally, we performed ELISA, and the identified clone was sequenced and subcloned into a bacterial expression vector. Next, by aligning 20 Nb-antigen structures from the PDB, we identified positions within the Nb framework for the introduction of a free cysteine that we predicted would not interfere with antigen binding. The chosen positions are also conserved among Nb families, ensuring the general applicability of our approach. Antigen binding can be mediated by the CDR loops, which protrude from the N-terminal side of the Nbs. Alternatively, the CDR loops can adopt a conformation that occludes the face made up of β-strands C″-C′-C-F-G. In all of the Nb complexes examined, the C-terminus and the A-B-E-D β-sheet face are freely available and do not engage in antigen binding (Fig. 1a). For Nb36 we chose to substitute conserved serine residues, generating four cysteine mutants modified at one of the positions 8, 71, 85 or 118 (corresponding to positions 7, 70, 82b and 112 in the Kabat nomenclature). The native Nb36 and all variants with a single free cysteine were successfully expressed in E. coli and purified to homogeneity in milligram quantities.

Mercury and fluorophore labelling

Purified Nb cysteine mutants (Nb36-C8, Nb36-C71, Nb36-C85 and Nb36-C118) were labelled with either mercury compounds or a fluorescent dye. For mercury labelling, Nbs were first exchanged into a nonreducing labelling buffer. This was necessary because standard reducing agents such as DTT and BME contain thiol groups that react with mercury compounds. When analysed by nonreducing SDS-PAGE, all of the Nbs ran predominantly as monomers around 13 kDa, with various degrees of cysteine-mediated dimers appearing at 25 kDa (Fig. 1b, lane 1). In particular, Nb36-C8 and Nb36-C118 had a higher tendency to form dimers compared with Nb36-C71 and Nb36-C85.

Figure 1. Site-specific labelling of Nbs. (a) Overview of a Nb structure (exemplified here by PDB entry 3p0g; Rasmussen et al., 2011) with the face in red observed to contribute to antigen binding (potential binding face); the face in blue has never been observed to interact directly during antigen binding (free face).

To test the accessibility of the free cysteines in these Nb mutants, we labelled them with an MPEG moiety using maleimide chemistry. We were able to label the monomeric Nbs for all of the mutants, as seen by an ~10 kDa shift in molecular weight (Fig. 1b, lane 2), indicating that the free cysteine is accessible in all Nbs.
We next examined whether it was possible to label the Nbs with either mercury(II) acetate or PCMB. Mercury(II) acetate treatment gave visibly smeary bands (Fig. 1b, lane 3), whereas PCMB modification of the Nbs was not clearly resolved by SDS-PAGE (Fig. 1b, lane 4). To visualize the PCMB-labelling efficiency, we post-treated with MPEG to detect the available free cysteines after PCMB labelling, and we clearly observed that most cysteines are inaccessible after PCMB treatment (Fig. 1b, lane 5). Finally, we semi-quantified the PCMB-labelling efficiency by comparing the band intensities between MPEG-labelled Nb monomers and PCMB-labelled monomer bands post-treated with MPEG, and found that the Nb mutants were labelled with an efficiency of between 78 and 94% (Fig. 1c). Since Nbs are often used in high-resolution imaging and other experiments requiring fluorescent signals, we tested whether we could fluorescently label our Nb cysteine mutants with the Alexa Fluor 488 maleimide fluorescent dye. We again performed a buffer transfer to avoid the reaction of reducing-agent thiols with the labelling reagents. The Coomassie Blue-stained SDS-PAGE gel showed single monomeric Nbs in all lanes (Fig. 1d, top), and comparison with fluorescence imaging of the same gel (Fig. 1d, bottom) revealed that fluorescence labelling was successful for all four Cys-mutant Nbs using a 1.5-fold molar excess of fluorescent dye. In summary, we show that it is possible to introduce free cysteines at four different positions within the Nb framework and to label these efficiently. Nb36-C85 showed an overall high efficiency for both mercury and fluorescence labelling and a low tendency to form cysteine-mediated dimers, and we decided to continue our structural analyses using this variant.

De novo phasing using incorporated mercury

We were able to crystallize both native Nb36 and Nb36 Ser85Cys labelled with PCMB. For the derivatized protein we obtained crystals at two very different pH values of 3.5 (Nb36-C85-1) and 7.5 (Nb36-C85-2). To investigate the potential of the Hg-derivatized Nbs for rapid and automated structure determination without prior phase information, we analysed two SAD data sets in space groups P2₁2₁2₁ (Nb36-C85-1) and P2₁ (Nb36-C85-2). For the P2₁2₁2₁ case, 720° of data were collected to a maximum resolution of 1.5 Å (see Table 1 for details). Analysis of the anomalous signal from the four consecutive 180° wedges indicated a significantly stronger signal in the first 180° of data compared with the subsequent 540° of data. The anomalous correlation between random half data sets according to XSCALE was 0.39 and 0.16 for the rotation ranges 1-180° and 180-720°, respectively. Importantly, even for the rotation range 540-720° the anomalous correlation was still 0.1 for the full resolution range, with a clear anomalous signal in the resolution shell 2.74-2.54 Å, which exhibited an anomalous correlation of 0.18. For comparison, the 1-180° data had an anomalous correlation of 0.19 in the 1.94-1.86 Å resolution shell. Remarkably, the Rmeas values for the 1-180° and the 540-720° data were 0.054 and 0.055, respectively. Hence, in line with previous studies of Hg-substituted cysteine side chains (Ramagopal et al., 2005), we observed a significant selective decay of the anomalous signal from Hg, whereas the overall data quality was preserved over the full rotation range.
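The anomalous half-data-set correlation quoted above can be pictured as the Pearson correlation between anomalous differences, dF = F(+) − F(−), measured independently in two random halves of the data. The toy simulation below (our illustration, not the XSCALE implementation) shows how a weak but real signal yields a CC_anom of around 0.2.

```python
import numpy as np

def anomalous_half_set_cc(dF_half1: np.ndarray, dF_half2: np.ndarray) -> float:
    """Pearson correlation of matched anomalous differences from two half data sets."""
    return float(np.corrcoef(dF_half1, dF_half2)[0, 1])

rng = np.random.default_rng(0)
signal = rng.normal(size=10000)                   # "true" anomalous differences
half1 = signal + rng.normal(scale=2, size=10000)  # noisy estimate from half 1
half2 = signal + rng.normal(scale=2, size=10000)  # noisy estimate from half 2
print(anomalous_half_set_cc(half1, half2))        # ~0.2 at this signal-to-noise ratio
```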
The pronounced decay of the anomalous signal inspired us to compare SAD phasing based on the 1-180° data with SIRAS phasing, in which the 180-720° data acted as the native data and the 1-180° data as a derivative with an anomalous signal. This strategy is also known as radiation-damage-induced phasing with anomalous scattering (RIPAS) and has previously been shown to strongly improve the resulting electron density obtained after phasing based on anomalous data with significant decay in the anomalous signal from Cys-Hg derivatives of the YjcF and YidA proteins (Ramagopal et al., 2005). For site identification and phasing we used phenix.autosol and obtained figures of merit (FOMs) of 0.38 and 0.40 for the SAD and SIRAS scenarios, respectively. In the SAD case, two major sites, each with two minor sites located within 1.4-1.6 Å, were modelled by phenix.autosol, whereas only the two major sites were modelled in the SIRAS phasing scenario. Despite the apparently modest differences in FOM between the scenarios, the quality of the SIRAS-based electron density prior to any density modification was clearly superior to that based on SAD phasing (Figs. 2a and 2c). Subsequent density modification in both cases resulted in easily interpretable maps (Figs. 2b and 2d), although the SIRAS-based map was still slightly superior. The SAD-based density-modified map from the 1-180° data was used for automated model building and refinement with the Buccaneer pipeline, which traced the four molecules in the asymmetric unit almost completely. Subsequent refinement of the two Hg atoms and a few cycles of iterative refinement and rebuilding resulted in a model with an Rfree of 0.215. As already indicated by the site identification in phenix.autosol, the structure did not contain a single Hg atom at each free Cys85 introduced into the Nb. Instead, two Hg atoms each bridge two cysteine side chains from two NCS-related Nbs, and no density could be assigned to the benzoate moiety. This implies that the benzoate was released at some point between the isolation of monomeric Hg-substituted Nb and cooling of the crystal. A water molecule appears to adopt a third coordination position at both sites, and another water molecule 3.3 Å from the Hg atom is also present at both sites, but is more likely to be present owing to a nearby main-chain carbonyl group. When contoured above 6σ, the Fourier map calculated with anomalous differences from the 1-180° data and phases calculated from the final model refined against the 1-180° data (with Hg atoms and water omitted) did not show much evidence of Hg subsites (Fig. 2e). In contrast, when the same phases were used in combination with the anomalous differences from the 180-720° data, there was very clear evidence of two Hg subsites around each major site (Fig. 2f). The strongest of these was separated from the major site by 3 Å and located at the position of a water molecule in the structure refined against the 1-180° data, whereas the anomalous density for the second minor site was continuous with that of the major site (Figs. 2e and 2f). The strongest subsite could in principle correspond to a Hg atom bound to only one cysteine, but there is no density supporting an alternative conformation of the nearest cysteine (Fig. 2g). The strong subsite is therefore more likely to correspond to a Hg²⁺ ion released from both cysteine side chains.
In contrast, the weakest subsite appears to stem from a fraction of molecules in which the Hg atom is bound to only one cysteine side chain, since density supporting an alternative conformation of the nearby cysteine is present. A movement of Hg atoms induced by radiation damage was also noticed for the YidA protein (Ramagopal et al., 2005). For the P2₁ crystal, 360° of data (Nb36-C85-2) were integrated to a resolution of 2.5 Å. The anomalous signal did not extend beyond the 3.5-3.7 Å resolution shell, which had an anomalous correlation between random half data sets of 11% according to XSCALE. Owing to the low symmetry, 180° of spindle rotation was required to obtain a complete set of anomalous differences. Since only a minor difference between the anomalous signal in the two segments 1-180° and 180-360° was observed, we did not pursue SIRAS phasing in this case, but instead performed SAD phasing based on all 360° of data. This resulted in a set of phases with an overall FOM of 0.34 based on 19 sites in eight groups. As also observed in the P2₁2₁2₁ case, some of the phenix.autosol minor sites were within 2 Å of the major sites, reflecting their role in modelling a nonspherical major site, whereas sites separated by 5-7 Å reflected more distinct sites that were also mirrored in the refined model (see below). Density modification with phenix.autobuild revealed clear electron density for multiple Nbs in the asymmetric unit (Fig. 3a). Upon automated model building and refinement with phenix.autobuild, the majority of the nine molecules were traced, and the resulting model, which was 81% complete compared with the final model, had an Rfree of 0.332. OMIT density for the tenth Nb molecule was clearly visible in a 2mFo − DFc map, but an MR search model derived from the P2₁2₁2₁ structure could not be placed correctly with phenix.phaser using either a standard or a phased translation function, and the tenth NCS copy was therefore docked manually. After iterative model building and refinement, a final model with an Rfree of 0.275 was obtained. Water molecules were not included in this model, as their inclusion did not decrease the Rfree. The Nb molecule that was placed manually appears to have a slight rotational freedom within the crystal packing, since the CDR pole of the Nb has very poor electron density, whereas the opposite end containing Cys85 has much better defined density. In the final P2₁ model (Fig. 3b), which includes nine Hg atoms, all ten Cys85 side chains are bound to Hg, but in four different manners. Two of the Hg atoms are each inserted between two side chains, as also observed in the P2₁2₁2₁ crystal form. Another pair of Hg atoms, with modelled occupancies of 0.6 and 0.4, are bonded to two alternative side-chain conformations of the same cysteine (Fig. 3c), which gives rise to a separation of 5.5 Å between the two Hg sites. This leaves five Hg atoms that are bound to only a single cysteine. Interestingly, amongst these there are two very similar arrangements in which two Hg atoms, each bound to a single cysteine side chain, are located 5.3 Å from each other (Figs. 3d and 3e). In both cases, one of the two Hg atoms appears to neighbour a nonbonded cysteine SG atom ~3.1 Å away, in addition to the SG atom to which it is directly bonded. As for the P2₁2₁2₁ Hg sites, there is little or no density that can readily be attributed to the benzoic acid moiety of PCMB.
Structure of Nb36

Nb36 shows the classical two-layered structure built of four-stranded and five-stranded antiparallel β-sheets connected by loops (Fig. 4a). The three CDR loops protrude from the N-terminal end of this compact Ig fold and form the antigen-binding surface. Comparing and overlaying the 16 different Nb36 molecules from the four different crystals reveals that the Nb framework is structurally similar and that the variations lie within the flexible CDRs, especially the longer CDR3 loop (Fig. 4a). Since we obtained crystal structures of both native Nb36 and PCMB-labelled Nb36-C85, we can compare their structures. The superposition shows that the non-CDR framework residues are not structurally affected by mutating Ser85 to Cys (Fig. 4b).

Figure 4. The structure and antigen binding are not perturbed by the Ser85Cys mutation. (a) Overlay of the 16 different Nb36 molecules present in the four different crystals, with CDR1, CDR2 and CDR3 coloured red, green and yellow, respectively. (b) Structural superposition of native Nb36-Nat1 (light blue) and PCMB-labelled Nb36-C85-1 (dark blue). (c, d) Binding of the C5 antigen to immobilized native Nb36 or PCMB-labelled Nb36-C85 measured by biolayer interferometry. The experimental association and dissociation curves are shown in black and the fitted curves are shown in blue.

Next, we tested whether Hg labelling compromises binding to the C5 antigen. We performed biolayer interferometry experiments in which we immobilized either native Nb36 or PCMB-labelled Nb36-C85 on Anti-HIS biosensors and measured binding at different concentrations of C5. These experiments suggested that native Nb36 binds C5 with an equilibrium dissociation constant (Kd) of ~86 nM (Fig. 4c) and that PCMB-labelled Nb36-C85 binds C5 with a Kd of ~37 nM (Fig. 4d).
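For orientation, Kd estimates like those above come from fitting the interferometry responses. The sketch below illustrates one minimal way to do such a fit, assuming a steady-state 1:1 binding isotherm rather than the full kinetic analysis the instrument software performs; the concentrations and responses are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Steady-state 1:1 binding isotherm: R = Rmax * C / (Kd + C).
def binding_isotherm(conc, rmax, kd):
    return rmax * conc / (kd + conc)

conc_nM = np.array([10, 25, 50, 100, 200, 400], dtype=float)
response = np.array([0.09, 0.20, 0.33, 0.47, 0.61, 0.71])  # hypothetical nm shifts

(rmax, kd), _ = curve_fit(binding_isotherm, conc_nM, response, p0=(1.0, 100.0))
print(f"Estimated Kd: {kd:.0f} nM (Rmax = {rmax:.2f} nm)")
```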
Overall, our studies show that it is possible to introduce free cysteines in the Nb framework and to express, purify and label these with a fluorescent group or Hg. For one selected variant of the C5-binding Nb we show that Hg derivatization does not compromise antigen binding or influence the framework structure. Importantly, our data provide a route to simple experimental phasing using Hg-derivatized Nbs.

Discussion

It was our expectation that the crystallization of a monomeric PCMB-derivatized Nb would reveal structures with a Hg atom inserted between the benzoate moiety and the side chain of Cys85. In the two PCMB structures we have modelled a total of 11 Hg atoms; four of these clearly bridge two cysteines, but the remaining seven did not display density that could be attributed to the benzoate fragment. This suggests a high tendency for benzoate release, possibly by a mechanism reminiscent of the protonolysis of organomercurial compounds catalysed by the MerB enzyme, which can accelerate protonolysis by up to 10^7 (Parks et al., 2009). Models of the MerB reaction mechanism suggest that the release of the organic substituent is catalysed through its protonation by an aspartic side chain that acts as a proton shuttle during transfer of the proton from one of the reacting cysteine -SH groups to the C atom bound to Hg (Parks et al., 2009), here the benzoate C4 atom. Whether this mechanism of benzoate release is relevant in our Nb36-C85 PCMB crystallization experiments cannot be decided, especially since we do not have a structure of the intermediate from which the benzoate is released. There are several deposited structures in the PDB containing intact PCMB (residue identifier MBO) in which the Cys-S-Hg-benzoate substructure is modelled. Intriguingly, there are also two entries, PDB entries 5ec5 (Podobnik et al., 2016) and 1naq (Arnesano et al., 2003), in which PCMB has been used and in which some of the Hg sites have the benzoate modelled while other sites have only the Hg bound to the cysteine. Hence, the release of benzoate observed in our structures appears to be a general feature of this reagent. Although unintentional, it may have given rise to new crystal forms, as our crystals of nonderivatized Nb36 exhibit C2 symmetry in contrast to P2₁ or P2₁2₁2₁ symmetry for the PCMB-derivatized Nb, and in both cases there are Hg-bridged Nb dimers. Methane and ethane are released very slowly from organomercurials by protonolysis (Begley et al., 1986), suggesting that the use of mercury compounds such as CH3HgCl and CH3CH2HgCl may minimize the observed unintentional liberation of the organic group during crystallogenesis and storage. It is interesting that in our P2₁ case, which has ten Hg-derivatized Nbs in the asymmetric unit, there are two different arrangements that both lead to a separation of two Hg sites by ~5.5 Å. This may be utilized in heavy-atom search procedures, where using a pair of Hg atoms separated by 5.5 Å as a model of such a super-Hg site may be beneficial, in the same manner as the known geometry of disulfide bridges is used by SHELXD (Sheldrick, 2008). Our results demonstrate that SAD-based structure determination of a 14 kDa Nb with either one or half a Hg atom bound is straightforward, but the potential of Hg-substituted Nbs in crystallography goes far beyond this. For any crystal containing a Nb-antigen complex, experimental phases can most likely be obtained by using a Hg-substituted Nb for co-crystallization. For very large antigens, the anomalous signal may be considerably enhanced by reacting the Nb with the four-mercury cluster tetrakis(acetoxymercuri)methane, which is a classical compound for the phasing of large structures by isomorphous substitution (O'Halloran et al., 1987; Andersen et al., 1995). The introduction of cysteines and subsequent conjugation with maleimides opens a range of possible applications beyond phasing in protein crystallography. Their high specificity and small size make Nbs especially useful for imaging. Nbs have previously been labelled with fluorophores using NHS esters (Ries et al., 2012) and by maleimides (Pleiner et al., 2015) that react with introduced cysteines, as described in the current study. Compared with nonspecific lysine NHS-ester labelling, site-specific labelling with maleimide fluorophores resulted in less background and better paratope preservation (Pleiner et al., 2015). In non-invasive in vivo imaging, Nbs have been applied in radionuclide-based techniques including positron emission tomography (Vosjan et al., 2011) and single-photon emission computed tomography (Huang et al., 2008). Here, the chelating agents that bind the radionuclide were conjugated to lysine residues or through a C-terminal His tag, and the introduction of free cysteines may allow alternative conjugation chemistries (George et al., 1995). The ability to site-specifically conjugate cytotoxic molecules to Nbs and antibodies is desired during the generation of antibody-drug conjugates.
Currently licensed antibody-drug conjugates are produced by nonspecific labelling of lysine residues, resulting in a mixture of heterogeneous molecules with different numbers of cytotoxic molecules conjugated to each antibody (Diamantis & Banerji, 2016). Precise control of the number of cytotoxic molecules conjugated, by labelling through introduced cysteine residues, would eliminate this heterogeneity and increase the therapeutic potential, as has already been shown for IgG antibodies (Panowski et al., 2014). Likewise, the introduction of cysteines may allow site-specific and controlled PEGylation to increase stability and extend the half-life of Nbs in a therapeutic setting. In conclusion, we show that by introducing Hg-reactive cysteines in the Nb framework we are able to routinely obtain high-quality experimental phases from anomalous data. Here, we have used this for structure determination of the derivatized Nb, but the approach will also provide rapid access to the structure determination of larger protein complexes containing Nbs. In a wider perspective, our results also offer a route to site-specific Nb modifications, which will be highly beneficial in imaging and drug-development applications.
Responses of two strawberry cultivars to NaCl-induced salt stress under the influence of ZnO nanoparticles

Salinity stress is one of the most serious impacts of climate change on agricultural production, especially in salt-sensitive crop plants such as strawberry. Currently, the utilization of nanomolecules in agriculture is thought to be a useful strategy to combat abiotic and biotic stresses. This study aimed to investigate the effect of zinc oxide nanoparticles (ZnO-NPs) on the in vitro growth, ion uptake, and biochemical and anatomical responses of two strawberry cvs (Camarosa and Sweet Charlie) under NaCl-induced salt stress. A 2×3×3 factorial experiment was conducted, with three levels of ZnO-NPs (0, 15 and 30 mg l−1) and three levels of NaCl-induced salt stress (0, 35 and 70 mM). The results showed that increased levels of NaCl in the medium led to decreases in shoot fresh weight and proliferative potential. The cv Camarosa was found to be relatively more tolerant to salt stress. Additionally, salt stress led to an accumulation of toxic ions (Na+ and Cl−), as well as a decrease in K+ uptake. However, application of ZnO-NPs at a concentration of 15 mg l−1 was found to alleviate these effects by increasing or stabilizing growth traits, decreasing the accumulation of toxic ions and the Na+/K+ ratio, and increasing K+ uptake. Additionally, this treatment led to elevated levels of catalase (CAT), peroxidase (POD) and proline content. The positive impacts of ZnO-NPs application were reflected in the leaf anatomical features, which were better adapted to salt stress. The study highlights the efficiency of utilizing tissue culture techniques for screening strawberry cultivars for salinity tolerance under the influence of NPs.

Introduction

Plant breeding is a fundamental discipline concerned with growing crops in a scientific manner to help end food insecurity (Lammerts van Bueren et al., 2018). The world population is growing at an alarming rate and is predicted to reach ten billion people within the next twenty years. Therefore, increasing agricultural production is every country's top economic goal (Kliem and Sievers-Glotzbach, 2022). Climate change is a huge problem that agricultural systems need to address. It puts pressure on both plants and animals, and threatens food security on a local and global scale (Urruty et al., 2016). Currently, plant breeding programs are concerned with improving the genetic traits of plant species and creating promising varieties in terms of productivity and quality to confront climate change (Kamenya et al., 2021). Soil salinization represents a major risk to the ecosystem and the economy: both plant productivity and germplasm exchange are typically negatively impacted by salt in semi-arid and arid climates, such as Saudi Arabia (Al-Taisan, 2022). One of the most widely consumed fruits worldwide is the strawberry (Fragaria × ananassa Duch.), which is well known for its flavor and health benefits; these advantages are due to the presence of potentially useful components such as phenolic compounds (Giamperi et al., 2012). In addition, strawberry cultivation and production are rapidly expanding in the Middle East and the Gulf states because of their socio-economic importance. However, the plants are classified as salt-sensitive (Crizel et al., 2020; D'anna et al., 2003; Grieve et al., 2012).
Salinity has been demonstrated to decrease the number and size of leaves, the weight of shoots, and the number of branch crowns in the strawberry plant, which results in low fruit production (Pirlak and Esitken, 2004). Although limited research has been done on the impacts of growing strawberry in salinized soil, particularly the implications for strawberry quality, it is as yet unclear what molecular and biochemical processes underlie the consequences of mild salt stress (Galli et al., 2016). Improving knowledge of the plant resources available in each country and assessing them in terms of valuable traits for adaptation to climate change is the most important first step in breeding programs, in order to provide and identify plant varieties with distinct traits that have the ability to grow and adapt to future production systems (FAO, 2015). Thus, it is imperative to start evaluating and selecting strawberry cultivars resistant to abiotic stresses (Abu Zeid et al., 2021). The most direct way to evaluate them is to look at how strawberry genotypes perform in the field when under salt stress; however, the results are sometimes inconclusive. During the growth season, temperature changes and variable moisture availability are typically connected with field trials. This approach also requires plenty of space, time and manpower, as well as the right equipment and planting materials (Arvin and Donnelly, 2010). However, contemporary agricultural biotechnology, such as tissue culture, may have a significant effect in enhancing resistance to abiotic stress (Wieczorek, 2003; Yosefi and Javadi, 2022), crop propagation (Mohamed et al., 2022), screening of plantlets, and studying various plant growth manifestations under well-defined environments (Shatnawi et al., 2004). In an early study, Zhang et al. (2004) found that the in vitro culture response to salinity stress was similar to what is seen in whole plants.

Other modern innovations include nanotechnology, which is anticipated to assist in addressing the issue of global salt stress and enhancing the efficiency, sustainability and resilience of agricultural systems (Aithal and Aithal, 2022; Hofmann et al., 2020; Nair et al., 2010). Nanotechnology is a promising field that develops from nature (Kim et al., 2017). Nature can be thought of as a source of various particles, from those formed from ash to the selenium and zinc nanoparticles (NPs) created by microbiota (Griffin et al., 2017). Nanoparticles, often known as intelligent materials, are substances with either internal or external morphology between one and one hundred nanometers (Rai et al., 2018). Nanoparticles are playing an increasingly important role in agriculture. They can be made from a variety of materials, including polymers, metals, metal oxides, nonmetals and carbon, and each type of nanoparticle has its own unique physical and chemical properties (Thakur et al., 2018; Wang et al., 2016). Plants under saline conditions may experience changes in their genetic makeup as well as in their physiological properties (Maurer-Jones et al., 2013). Nanoparticles are frequently used to enhance plant productivity and growth, enable plant genetic modification, boost the production of phytochemicals, and preserve plants (Ditta, 2012; El-Saadony et al., 2019; Masoomeh et al., 2021). Plants' genetic makeup can be changed using nanoparticles to make them resistant to salinity (Maurer-Jones et al., 2013; Zulfiqar and Ashraf, 2021).
Others have utilized NPs for biological purification during plant in vitro propagation (Mousavi Kouhi and Lahouti, 2018; Regni et al., 2022). Zinc oxide is a valuable NP for reducing plant stress. It can be deposited onto the cell wall, reinforcing the physical barrier and enhancing the plant's immune system and tolerance, which makes it an important tool for improving plant health (Raha and Ahmaruzzaman, 2022). Zinc and zinc nanoparticle treatments help to lower the oxidative stress response by triggering defense mechanisms in response to biotic and abiotic stress (Khan et al., 2021). Additionally, several studies have indicated that Zn-NPs improved seed germination, the induction of antioxidants, fresh and dried leaf weight, proline accumulation, and chlorophyll content in the presence of salt stress (Das and Das, 2019). High salt contents reduce water content and cause osmotic pressure across the plasma membrane, which puts plants under water deficit. Furthermore, most of the harmful consequences are caused by the presence of Na+ ions in the cytosol, which compete with K+ ions as a cofactor in vital enzyme reactions such as photosynthesis (Gupta and Huang, 2014). It is therefore important to study the effect of Zn nanomolecules on plant ion balance during salt stress. Earlier studies have shown that plants produce more reactive oxygen species when subjected to salt stress, which negatively impacts vegetative growth and production (Hasegawa et al., 2000; Parida and Das, 2005). However, plants have evolved several coping mechanisms to deal with stress circumstances through ROS scavenging (Kamal et al., 2010; Kerchev and Van Breusegem, 2022). Antioxidant enzymes are known for their high antioxidative properties, which enable plants to respond effectively to environmental stressors (Apel and Hirt, 2004; Kapoor et al., 2019; Kruck et al., 2005). The interrelationship between Zn-NPs application and changes in antioxidant capacity under salt stress in strawberry genotypes needs further study towards a better understanding of the role of NPs under stress conditions. Therefore, the objectives of the current study were to examine the effects of applying ZnO-NPs under salinity stress on the in vitro growth and shoot proliferation of two strawberry genotypes. The associated changes in proline content, antioxidant enzyme activities and ion concentrations in response to NaCl and nano zinc oxide treatments were monitored, as were changes in leaf anatomy.

Plant materials

The research experiment was conducted in the Biology Department's labs at King Abdulaziz University, Saudi Arabia. The two strawberry (Fragaria × ananassa Duch.) cultivars Sweet Charlie (SW) and Camarosa (CAM) were examined for salt stress tolerance under the influence of zinc oxide nanoparticle (ZnO-NPs) application in vitro.

Preparation of ZnO-NPs suspension

Zinc oxide nanoparticles of < 30 nm average particle size were obtained from Sigma-Aldrich (Lot # MKBS 3961B, Sigma-Aldrich Company, Switzerland) for use in the current study. To achieve the 15 and 30 mg l−1 concentrations, one litre of distilled water was used to prepare a stock suspension of ZnO-NPs (1.5 g l−1), immediately followed by 30 min of dispersion with a sonicator. During the experiment, and before any use of these particles in the culture media, the ZnO nanoparticle suspensions were centrifuged and filtered according to Helaly et al. (2014).
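As a minimal sketch of the dilution arithmetic behind these working concentrations (C1·V1 = C2·V2), assuming the 1.5 g l−1 sonicated stock is diluted directly into the medium; the final volume used below is an arbitrary example value, not one stated in the paper:

```python
# Volume of the 1.5 g/l (1500 mg/l) ZnO-NP stock needed per final volume of
# medium, from the dilution relation C1*V1 = C2*V2.
STOCK_MG_PER_L = 1500.0  # 1.5 g ZnO-NPs per litre of distilled water

def stock_volume_ml(target_mg_per_l: float, final_volume_ml: float) -> float:
    """Millilitres of stock suspension required to hit the target concentration."""
    return target_mg_per_l * final_volume_ml / STOCK_MG_PER_L

for target in (15.0, 30.0):
    vol = stock_volume_ml(target, final_volume_ml=1000.0)
    print(f"{target:>4.0f} mg/l medium: add {vol:.1f} ml stock per litre")
```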
In vitro propagation

To obtain proliferated shoots from the strawberry cvs SW and CAM, runner-tip explants were excised from the mother plants under sterile conditions and cultured on shoot initiation medium with controlled temperature and lighting according to Abu Zeid et al. (2021). After 6 weeks, explants were subcultured onto a shoot multiplication medium consisting of MS (Murashige and Skoog, 1962) basal salts and vitamins (Duchefa Biochemie, Holland), supplemented with the cytokinin 6-benzylaminopurine (0.3 mg l−1 BA) + 3% sucrose. The explants were incubated in vitro at 22 ± 2 °C under a 16 h photoperiod provided by cool white fluorescent lamps of 3 klux for 4-6 weeks.

Examination of strawberry response to in vitro NaCl-induced salt stress and ZnO-NPs

To assess the effects of salinity stress and ZnO-NPs concentration on strawberry plant growth, MS media amended with NaCl (0, 35 and 70 mM) and three doses of ZnO-NPs (0, 15 and 30 mg l−1) were used. The pH was adjusted to 5.7 and the media were solidified with 0.7% agar. The media were dispensed into glass jars (30 ml per jar) and three excised plantlets were placed per jar. The experiment was arranged as a 2×3×3 factorial in a completely randomized design (CRD) with five replicates per treatment. After six weeks, fully proliferated shoot clusters were obtained, and shoot cluster fresh weight (SFW) and number of shoots per cluster (NSC) were measured.

Proline determination

To estimate the amount of proline in shoot tissues under the experimental treatments, 0.1 g of leaves was weighed, ground in 2 ml of aqueous sulphosalicylic acid solution (3%) and filtered. Two ml of filtered solution was mixed with 2 ml of glacial acetic acid, followed by 2 ml of acid ninhydrin, and then heated in a water bath. After 60 min of boiling, the reaction was stopped by transferring the tubes to an ice bath. Finally, 4 ml of toluene was added and stirred for 20-40 s. The toluene layer was separated at room temperature (25 ± 2 °C) and the red color intensity was measured at 520 nm according to Sadasivam and Manickam (1991).

Measurement of antioxidant enzymes

In vitro shoots (0.1-0.4 g) were collected, frozen in liquid nitrogen and stored at −80 °C until analysis. To assess peroxidase (POD) activity quantitatively, every step was carried out on ice, as described by Hammerschmidt et al. (1982). Samples were taken from the freezer and left for 5 min at room temperature; a 0.1 g portion was then ground well in 0.1 M potassium phosphate buffer in a mortar. 100 µl of enzyme extract was collected in Falcon tubes and mixed with pyrogallol (0.05 M). A 500 µl aliquot was applied to a spectrophotometer cuvette and the reaction was started by adding 100 µl of hydrogen peroxide. The absorbance was assessed at 420 nm. The activity of catalase (CAT), on the other hand, was determined using Biodiagnostic kit No. CA 2517, which is based on the spectrophotometric method of Aebi (1984). The absorbance was assessed at 510 nm.

Determination of Na+, Cl− and K+ concentrations

Ion measurements in strawberry shoots were made on an FLM3 flame photometer (Radiometer, Copenhagen). The standard solution contained sodium chloride (14 ± 1.4 mmol l−1) and potassium chloride (5 ± 0.5 mmol l−1) stored at room temperature (25 °C). Zero adjustment was against a blank prepared by adding 5 ml of concentrated lithium chloride (300 ± 5 mmol l−1) to 500 ml of distilled water. Sodium and potassium were assessed following the method of Chapman and Pratt (1961). The Cl− level was assessed by the method of Ramsay et al. (1955), by titration with AgNO3 in the presence of K2CrO4.

Leaf anatomy

Anatomical studies were made to monitor cellular and tissue changes occurring in the in vitro microplants in response to ZnO-NPs treatments under salt stress. Leaflet blades of equal size were excised from the shoot clusters derived from the control, NaCl (70 mM), ZnO-NPs and NaCl + ZnO-NPs (15 mg l−1) treatments. These were fixed for 2 d in a solution of formalin/glacial acetic acid/ethanol (FAA), washed with dH2O, dehydrated in an ethanol series and embedded in paraffin wax. Transverse sections of 10 µm thickness were made from the middle of the leaflet using a rotary microtome (Olympus cut) and then stained with a 5% (w/v) solution of toluidine blue. The sections were analyzed under a light microscope (Leica ICC 50, Leica Microsystems, Germany) with the aid of an ocular micrometer.

Statistical analysis

Analysis of variance (ANOVA) was applied using the SPSS 14 for Windows statistical package (IBM Corp., New York, USA). The differences between treatment means were estimated using Duncan's multiple range test (Duncan, 1955).
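For readers reproducing this design outside SPSS, a minimal sketch of the 2×3×3 factorial ANOVA in Python is given below. The data are simulated and the column names are illustrative placeholders, assuming the measurements are laid out one row per experimental unit.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Sketch of the 2x3x3 factorial ANOVA (cultivar x NaCl x ZnO-NPs, CRD with
# five replicates). The response values below are simulated toy data.
rng = np.random.default_rng(2)
design = [(cv, na, zn, rep)
          for cv in ("SW", "CAM") for na in (0, 35, 70)
          for zn in (0, 15, 30) for rep in range(5)]
df = pd.DataFrame(design, columns=["cultivar", "nacl_mM", "zno_mg_l", "rep"])
df["SFW"] = 2.0 - 0.01 * df["nacl_mM"] + rng.normal(0, 0.2, len(df))

model = smf.ols("SFW ~ C(cultivar) * C(nacl_mM) * C(zno_mg_l)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects and all interaction terms
```

Note that statsmodels does not ship Duncan's multiple range test; its pairwise Tukey HSD is the closest built-in alternative for mean separation.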
Effects of NaCl and ZnO-NPs on the in vitro growth and shoot proliferation

Results for the main effects of strawberry cultivar, NaCl and ZnO-NPs treatments (Table 1) indicated that cv CAM had a higher mean SFW, but a lower NSC, than cv SW. Salt stress imposed by NaCl in the medium at 35 and 70 mM significantly (P < 0.05) decreased SFW by 11 and 50.2%, and NSC by 19.7 and 59.5%, respectively, relative to the control (0 NaCl). Addition of ZnO-NPs at 15 mg l−1 had no effect on, or slightly increased, SFW and NSC. However, at 30 mg l−1, ZnO-NPs decreased NSC by 32% relative to the control (0.0 ZnO-NPs). The two strawberry cultivars responded differently (P < 5% for SFW and < 1% for NSC) to NaCl-induced salinity stress in the medium (Table 2 and Fig. 1A&B). At a moderate salinity level (35 mM), SFW decreased in cv SW, while it was higher than the control (0 NaCl) in cv CAM. At the highest salinity level (70 mM NaCl), a significant decrease in SFW was detected in both cvs, but the decline was greater in SW (66%) than in CAM (33%), as indicated in Fig. 1A. Similarly, increasing the NaCl level in the shoot proliferation medium resulted in a significant reduction in NSC (60%) in cv SW, while no change was detected in cv CAM (Fig. 1B and Fig. 2), indicating that cv CAM is relatively more salt-stress tolerant than cv SW. Regarding cultivar growth in response to ZnO-NPs treatment, the results indicated a significant cv × ZnO-NPs interaction for NSC (P < 5%, Table 2): the use of ZnO-NPs at 15 mg l−1 increased NSC in cv SW, but had no effect on NSC of cv CAM (Fig. 1B). In both cvs, the higher concentration of ZnO-NPs (30 mg l−1) had a negative impact on NSC. As tested over the two cultivars (NaCl × ZnO-NPs interaction), the addition of ZnO-NPs under salinity stress had no significant effect on SFW (Table 2). However, the use of ZnO-NPs at 15 mg l−1 under moderate salinity stress (35 mM NaCl) resulted in a significant increase in NSC (Fig. 1B). Under the highest NaCl level (70 mM), no significant decrease in NSC was detected with the administration of ZnO-NPs at 15 or 30 mg l−1.

Effects of NaCl and ZnO-NPs on antioxidant enzymes and proline content

The results of this study (Table 1) revealed that, whereas proline concentrations were not significantly different between the two strawberry cvs, the shoots of cv CAM had higher CAT and POD activities. With each rise in the medium's NaCl level, salt stress significantly raised CAT, POD and proline levels.
Results also showed that, as compared to the control (0 ZnO-NPs), the administration of ZnO-NPs at 15 mg l−1 caused significantly higher levels of CAT and POD as well as proline content (Table 1).

Table 1. Effects of strawberry cultivars, NaCl and ZnO-NPs on the in vitro growth, enzyme activities, proline content, and ion uptake. Mean separation for main effects in each column by DMRT, P ≤ 5%. *, ** and ***: significant at the 5%, 1% and 0.1% levels, respectively; ns = not significant. SOV, source of variation; df, degrees of freedom; CV, coefficient of variation.

Results of the cv × NaCl interaction effects (Table 2) indicated that increasing the salinity level in the medium resulted in a significant increase (P < 0.1) in CAT activity in shoots of cvs SW and CAM (Fig. 1C). Under moderate and high NaCl levels, the activity of CAT was higher in cv CAM than in cv SW. Results also showed that the activity of POD was not affected by salinity stress in cv SW (Fig. 1D). However, under the highest NaCl level, POD activity in shoots of cv CAM recorded a > 60% increase over the control (0 NaCl). Similarly, proline increased with the increase in NaCl level in both strawberry cultivars. However, at the highest salinity level, the magnitude of the increase over the control was higher in shoots of cv CAM (57%) than in cv SW (48%), as estimated from Fig. 1E. The cv × ZnO-NPs interaction significantly affected CAT, but not POD, activity (Table 2). ZnO-NPs application significantly increased CAT activity over the control (0 ZnO-NPs) in both strawberry cultivars (Fig. 1C). However, at 15 mg l−1 ZnO-NPs, this increase was higher in cv SW (64%) than in CAM (22%). Meanwhile, shoots of cv CAM had more CAT activity than cv SW at 15 or 30 mg l−1 ZnO-NPs, while POD activities in the two strawberry cultivars were not affected by ZnO-NPs application (Fig. 1D). Results indicated that, at 15 mg l−1 ZnO-NPs, proline content was comparable to the control in shoots of cv SW, while it was significantly higher than the control in cv CAM. In contrast, at the higher level of ZnO-NPs, proline content was much higher than the control in cv SW, and lower than the control in cv CAM (Fig. 1E). The interaction of NaCl and ZnO-NPs (Table 2) had a considerable impact on the activities of CAT and POD as well as on proline accumulation (P < 0.01). In comparison to strawberry shoots grown on 70 mM NaCl alone, significant increases in CAT and POD activities were found in the proliferating shoots under the maximum amounts of NaCl and ZnO-NPs in the medium (Fig. 1C&D). Additionally, the results showed that the administration of ZnO-NPs under moderate salinity stress (35 mM) considerably enhanced proline content compared to the control (0.0 ZnO-NPs), but at the highest salinity stress level, ZnO-NPs application at either concentration had no effect (Fig. 1E). The cv × NaCl × ZnO-NPs interactions significantly affected enzyme activities and proline content, as indicated in Table 2 and Fig. 1 (C, D and E). In this respect, the highest CAT activity was recorded with the application of ZnO-NPs at 15 mg l−1 under the moderate (35 mM) level of salinity stress in cv CAM, and at 30 mg l−1 under the highest (70 mM) level of salinity in cv SW (Fig. 1C). Moreover, POD recorded the highest activity in cv CAM with the application of 15 mg l−1 ZnO-NPs under the high salinity stress level (Fig. 1D), while in cv SW, application of ZnO-NPs at 30 mg l−1 resulted in the highest POD activity under moderate or high salinity stress. Proline accumulation was the highest in shoots of cv SW derived from a medium supplemented with high levels of ZnO-NPs and NaCl, and in cv CAM under low levels of ZnO-NPs and NaCl (Fig. 1E).
Effects of NaCl and ZnO-NPs on ion uptake

Results in Table 1 showed that there was no significant difference in the uptake of Na+ between the two examined cvs; however, the shoot tissues of cv CAM had more Cl− and K+ than SW. Salinity stress increased the accumulation of Na+ and Cl− and the Na+/K+ ratio, but decreased the uptake of K+. On the other hand, ZnO-NPs application led to appreciable reductions in Na+ (−23.7%) and Cl− (−29.4%) accumulation as well as in the Na+/K+ ratio. Additionally, it was noticed that treatment with ZnO-NPs, particularly at 15 mg l−1, significantly (P < 0.05) increased K+ uptake by 13.8% compared to the control. The cv × NaCl interaction significantly (P < 0.1) influenced ion contents (Table 3). The results revealed that increasing the NaCl concentration in the medium resulted in a significant elevation in the uptake of Na+ and Cl− in shoots of the strawberry cultivars, but to varying degrees (Fig. 1F&G). In this respect, the increases in Na+ and Cl− were much higher in cv SW (47% Na+ and 45% Cl−) than in CAM (12% Na+ and 6.5% Cl−). Therefore, shoots of cv CAM seem to maintain almost stable levels of Na+ and Cl− under salinity stress. Moreover, a significant decrease in K+ concentration was detected with the increasing level of NaCl in the medium, and this decrease was significantly greater in cv SW (−40%) compared to CAM (−15%), as indicated in Fig. 1H. As a result, the Na+/K+ ratio was higher in cv SW than in CAM under the highest salinity stress (Fig. 1I). The administration of ZnO-NPs did not significantly affect the Na+ concentration in either examined cv. However, a significant decline in Cl− in strawberry shoots was found in response to ZnO-NPs application, especially in cv CAM, which exhibited a greater reduction in Cl− (−35%) than cv SW (−20%) (Fig. 1G). Additionally, ZnO-NPs treatment significantly increased K+ uptake in cv CAM, especially at 15 mg l−1, but had no effect on K+ uptake in shoots of cv SW (Fig. 1H). Similarly, the use of ZnO-NPs resulted in a dramatic decrease in the Na+/K+ ratio in cv CAM compared to cv SW (Fig. 1I). The data of this work showed that, as tested over the two strawberry cultivars, ZnO-NPs treatment at 15 or 30 mg l−1 resulted in a significant decrease in Na+ (Fig. 1F) and Cl− (Fig. 1G) under unstressed conditions or under NaCl-induced salt stress. Under moderate (35 mM NaCl) or high (70 mM NaCl) stress, Na+ concentrations were 33.7 and 22% lower, respectively, with the application of ZnO-NPs at 15 mg l−1 (Fig. 1F). A similar reduction trend in Cl− was detected with ZnO-NPs treatment under salt stress (Fig. 1G). Additionally, the administration of ZnO-NPs at a concentration of 15 mg l−1 under salt stress conditions resulted in a significant increase in K+ uptake (Fig. 1H) and a drop in the Na+/K+ ratio (Fig. 1I). Results of ANOVA indicated a significant cv × NaCl × ZnO-NPs interaction effect on the accumulation of Na+, Cl− and K+ in strawberry shoots (Table 3). Under moderate salinity stress, tissue Na+ and Cl− contents increased in the absence of ZnO-NPs in the medium (Fig. 1F&G), and this increase was greater in cv SW than in CAM. Under the highest salinity level, the application of ZnO-NPs resulted in a significant decrease in Na+ and Cl− uptake, and this decrease was greater in cv CAM than in SW.
Results also indicated a significant decrease in K+ uptake under 70 mM NaCl in both strawberry cultivars (Fig. 1H). However, in the presence of ZnO-NPs at 15 mg l−1 in a medium with a high level of NaCl, K+ uptake increased in shoots of cv CAM compared to a medium with NaCl alone. The Na+/K+ ratio was the highest in shoots of cv SW derived from the medium supplemented with high levels of NaCl and ZnO-NPs. However, this ratio decreased with the application of ZnO-NPs under moderate (35 mM) or high (70 mM) NaCl in cv CAM (Fig. 1I).

Table 3. Summary of the analysis of variance for Na+, Cl−, K+ and the Na+/K+ ratio in two strawberry cultivars under different treatments of NaCl and ZnO-NPs. *, ** and ***: significant at the 5%, 1% and 0.1% levels, respectively; ns = not significant.

Leaf anatomical features

Results for the leaf anatomy characteristics are presented in Table 4 and Fig. 3a-d. ZnO-NPs at 15 mg l−1 increased the thickness of the midvein and lamina by 17 and 45% over the control plants, respectively. The thickness of the upper epidermis, palisade and spongy mesophyll was also increased over the control, by 19.8, 87 and 110.7%, respectively, in addition to an increase in the number of vessels per midvein bundle by 39.5%. On the other hand, treatment with NaCl increased, but to a lesser extent, the lamina thickness and the palisade and spongy tissue thickness, by 12, 12.5 and 100% over the control, respectively, while it largely decreased the thickness of the upper and lower epidermis and the number of vessels per midvein bundle, by 59, 67 and 23.3%, respectively, compared to the control. ZnO-NPs application under NaCl-induced salt stress influenced strawberry leaf anatomy by increasing the thickness of the midvein, lamina, palisade and spongy tissues as well as the upper epidermis, by 38.2, 60, 100, 145 and 44.5% compared to the control, respectively. Moreover, the number of vessels per midvein bundle increased by 16.2% (Table 4). In general, the diameter of the xylem vessels was not affected.

Discussion

The present study's findings demonstrated that salt stress had a detrimental effect on shoot growth and differentiation in the two evaluated strawberry cultivars, and cv CAM was more salt tolerant than cv SW based on the relative reduction in SFW and NSC under high NaCl concentration. The relative salt stress tolerance of cv CAM was reported by Sun et al. (2015), Turhan et al. (2008) and Turhan and Eris (2004), and genotype differences in strawberry salt tolerance capacity have been shown in several previous reports. The application of ZnO-NPs statistically alleviated salt stress by increasing, or reducing the observed decline in, SFW and NSC under elevated levels of NaCl in the medium. Although the present data demonstrated that NSC had a greater relative decline under salinity stress than SFW, administration of ZnO-NPs at 15 mg l−1 alleviated this effect. It has long been recognized that shoot proliferation potential in strawberry, among other crops, is accelerated by the exogenous application of cytokinins, GA3 and auxins (Waithaka et al., 1980). However, abiotic stress is known to decrease the biosynthesis of cytokinins and increase ABA (Bano et al., 1994), subsequently retarding cell division, differentiation and plant growth. In this study, applying ZnO-NPs significantly increased NSC under salinity stress, perhaps by increasing the biosynthesis of auxins and/or GAs in the explant tissues.
In a recent report, it was shown that tomato plants treated with ZnO-NPs had increased levels of GA3 and decreased ABA (Faizan et al., 2021). Under in vitro conditions, Alizadeh and Dumanglu (2022) and Karak et al. (2019) documented the effectiveness of ZnO-NPs loaded with IAA and IBA in improving micro-plant growth and rooting. In strawberry, Moghadam et al. (2013) reported an increase in auxin and GA3 levels when plants were treated with Zn as a nutrient element under stress conditions. Recently, Regni et al. (2022) showed that ZnO-NPs have positive effects on the growth and shoot proliferation of in vitro produced olives. Results of the shoot analysis demonstrated a significant increase in the levels of CAT and POD under salinity stress, and this increase was more noticeable in the tolerant cv CAM than in cv SW, suggesting that genotypes with relatively higher salt tolerance have the ability to increase the biosynthesis of antioxidant enzymes as a possible mechanism for protecting the cell membrane from oxidative damage. In accordance with our results, it was reported by Turhan et al. (2008) that the salt tolerance characteristics of cv CAM were due to the higher activity of antioxidant enzymes in the plant organs. In another study, Ghaderi et al. (2018) evaluated two strawberry cultivars for salt tolerance after 20, 40 and 60 days of exposure to stress treatments. They found that the tolerant cv was characterized by increased antioxidant capacity and relative water content. Similarly, the study of Gao et al. (2015) indicated a significant increase in CAT and POD activities in shoots of in vitro plantlets of potato with increasing salt level (0-100 mM) in the medium; such increased antioxidant enzyme activity could lead to decreased levels of ROS. In the current investigation, significant increases in CAT and POD activities were detected with the application of ZnO-NPs under a high level of NaCl, and these increases were higher than those observed under 70 mM NaCl alone. The highest increase in the activities of the two antioxidant enzymes occurred in shoots of cv CAM exposed to ZnO-NPs at 15 mg l−1 under a moderate or high salinity level, indicating that the lower dose of ZnO-NPs alleviated salt stress via an increase in antioxidant capacity, as a result of increased CAT and POD, supporting the previous finding of Ghaderi et al. (2018). Using Se-NPs on strawberry, Zahedi et al. (2019) came to the same conclusion. In other plant species, Faizan et al. (2021) on tomato and Alabdalah and Alzahrani (2020) on okra also related the mitigating effect of ZnO-NPs application under salt stress to increased growth, the activity of SOD and CAT among other antioxidant enzymes, and photosynthetic pigment contents. Proline, a key osmolyte, was significantly increased in the proliferated strawberry shoots under increasing levels of NaCl, and higher proline contents were measured with the use of ZnO-NPs at 15 mg l−1 under the highest salinity level, especially in cv CAM. In several reports, proline levels could be changed by NPs to promote resistance to salt stress (Alabdalah and Alzahrani, 2020; Avestan et al., 2019; Farouk and Alamri, 2014), which reduces salt stress-induced osmotic shock due to Na+ and Cl− toxicity. The results of the tissue analysis revealed a significant increase in Na+ and Cl− concentrations as well as in the Na+/K+ ratio, while K+ decreased, in strawberry microplants under salt stress, especially in the relatively salt-sensitive cv (SW).
Similar findings were reported by Keutgen and Pawelezik (2009) for strawberry plants subjected to salt stress. In contrast, one of the greatest impacts of ZnO-NPs treatment occurred with Na+ and Cl− accumulation. In this respect, strawberry shoots exposed to 15 mg l−1 ZnO-NPs had lower Na+ and Cl− and higher K+ uptake compared to the control (0.0 ZnO-NPs). These findings align with those of Avestan (2019) on strawberry, Aktafi et al. (2006) on pepper, and El-Badri et al. (2021) on rapeseed administered with ZnO-NPs. Rajput et al. (2021) demonstrated that nanoparticles can aid in the regulation of ion balance, lowering the toxicity of Na+ and enhancing the uptake of K+ by plants. The build-up of harmful ions and the reduced uptake of K+ could have contributed to the observed decline in strawberry growth and shoot proliferation potential under increasing levels of NaCl in a medium devoid of ZnO-NPs. Therefore, one strategy by which ZnO-NPs overcome the negative impacts of salinity is to decrease Na+ and Cl− contents and their absorption by plant tissues while improving K+ uptake, subsequently decreasing the Na+/K+ ratio. However, the results of Ferreira et al. (2019) came to a different conclusion. They suggested that the negative impacts of salinity stress in strawberry were attributable more to Cl− than to Na+ toxicity, while K+ uptake was not affected (no competition between Na+ and K+). These disagreements could be due to differences in the genotypes examined, in the concentrations and methods of salt application, and in organ-specific variation in ion concentration. Results indicated a noticeable modification in leaf anatomical features in response to ZnO-NPs application under non-stressful conditions. The increase in lamina thickness (45%) may be due to the increased thickness of the palisade and spongy tissue layers, which contain the cells responsible for photosynthesis. ZnO-NPs also increased the development of the vascular system responsible for the translocation of nutrients and photosynthetic products. On the other hand, in NaCl-supplemented medium, lamina thickness was only slightly increased (12%), in contrast to the finding of Avestan et al. (2021), who used a higher salt level. Raafat et al. (1991) also found no significant increase in tomato leaf lamina thickness under salt stress. However, the development of conducting tissues and the thickness of the upper and lower epidermis were greatly decreased under salt stress, similar to the results of Saule et al. (2013) and Evlakova et al. (2019) on barley seedlings. This could have negative impacts on growth via limiting the translocation of minerals and photoassimilates to other plant organs. The observed further improvement in strawberry leaf anatomical characteristics with the use of ZnO-NPs under NaCl stress provides clear evidence of its impact on the mitigation of salt stress in strawberry, similar to other NPs (Avestan et al., 2021).

Conclusion

This study highlighted the effectiveness of ZnO-NPs application in alleviating the negative impacts of NaCl-induced salinity stress in strawberry plants cultured in vitro. Under elevated levels of NaCl in the tissue culture medium, shoot growth and proliferation potential were significantly decreased, and the decreases were greater in cv Sweet Charlie than in Camarosa, indicating that Camarosa was relatively more salt tolerant. ZnO-NPs treatment significantly boosted the antioxidant system by increasing the activity of key antioxidant enzymes (CAT and POD), in addition to increasing proline content.
Salt stress increased, while ZnO-NPs decreased, the build-up of the toxic ions Na+ and Cl− and the Na+/K+ ratio. Leaf anatomical features showed better adaptation to salt stress with the application of the low level of ZnO-NPs.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Effect of weight loss by a low-fat diet and a low-carbohydrate diet on peptide YY levels

Objective: To compare the effects of weight loss by an energy-restricted low-fat diet versus a low-carbohydrate diet on serum peptide YY (PYY) levels. Design: 8-week prospective study of 30 obese adults (mean age: 42.8 ± 2.0 years, mean BMI 35.5 ± 0.6 kg/m2). Results: After 8 weeks, subjects on the low-carbohydrate diet lost substantially more weight than those on the low-fat diet (5.8 kg vs. 0.99 kg, p<0.001). Weight loss by either diet resulted in a 9% reduction in both mean fasting serum PYY levels (baseline: 103.5 ± 8.8 pg/ml, after weight loss: 94.1 ± 6.5 pg/ml, p<0.01) and postprandial AUC PYY (baseline: (20.5 ± 1.5) × 10³ pg·hr·ml−1, after weight loss: (18.8 ± 1.4) × 10³ pg·hr·ml−1, p<0.001). There was a trend towards lower levels of PYY with greater degrees of weight loss. Conclusions: Reduced PYY levels after weight loss by an energy-restricted low-fat or low-carbohydrate diet likely represent a compensatory response to maintain energy homeostasis and contribute to the difficulty of losing weight on energy-restricted diets.

Introduction

Nearly 45% of women and 30% of men in the United States are attempting to lose weight at any given time (1). Although energy-restricted diets and low-fat diets are the most commonly recommended diets for weight loss, low-carbohydrate diets are a popular alternative (2,3). A recent meta-analysis reported that low-carbohydrate diets are as effective as low-fat diets in inducing weight loss for up to one year (4). Peptide YY (PYY) plays a role in appetite and food intake regulation. Enteroendocrine cells lining the distal small bowel and colon secrete PYY and the truncated isoform PYY3-36 in response to a meal (5). The effect of various dietary interventions on PYY remains unclear. In a previous study, we reported that one week of a low-carbohydrate, high-fat diet led to 55% higher postprandial serum PYY levels compared with a low-fat, high-carbohydrate diet (6). In this study, we aimed to compare the effects of weight loss by an energy-restricted low-fat versus low-carbohydrate diet on serum PYY levels. We hypothesized that weight loss by a low-carbohydrate diet would lead to higher PYY levels. To test this hypothesis, we compared fasting and postprandial PYY levels in obese individuals at baseline and after 8 weeks of weight loss with a low-fat diet (<30% of total calories derived from fat) or a low-carbohydrate diet (<30 g of carbohydrate/day, with <10% of total calories derived from carbohydrates).

Materials/Subjects and Methods

Subjects were recruited to this prospective study by advertisement in the Richmond, Virginia area. Inclusion criteria included body mass index (BMI) 30-40 kg/m2, age 18-60 years, and stable weight for ≥3 months. Exclusion criteria included clinically significant pulmonary, cardiac, renal, hepatic, or infectious disease; blood pressure > 170/100 mmHg; diabetes mellitus with HbA1c ≥7.9%; and pregnancy or lactation. Three diabetic persons participated, and all had HbA1c <7%. Subjects provided informed, signed consent. Procedures took place at the General Clinical Research Center (GCRC), and the protocol was approved by the Institutional Review Board of Virginia Commonwealth University. After enrollment, subjects presented for a screening visit and instruction on maintenance of a 3-day food diary. The subjects' energy requirements were estimated with the equation: total energy expenditure = fasting metabolic rate (calculated with the Harris-Benedict equation) × activity factor (sedentary = 1.3, some regular exercise = 1.5, and regular exercise = 1.7). From these estimates, the daily caloric intake needed to achieve an energy deficit of 500 kcal/day was estimated for each individual.
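The caloric prescription described above is simple arithmetic, sketched below. The classic 1919 Harris-Benedict coefficients are an assumption here (the paper does not print the equation itself), and the example subject is invented.

```python
# Resting energy expenditure from the Harris-Benedict equation, scaled by the
# activity factor, minus the 500 kcal/day deficit described in the protocol.
def harris_benedict_kcal(sex: str, weight_kg: float, height_cm: float, age_yr: float) -> float:
    if sex == "male":
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

ACTIVITY = {"sedentary": 1.3, "some_exercise": 1.5, "regular_exercise": 1.7}

def prescribed_intake_kcal(sex, weight_kg, height_cm, age_yr, activity, deficit=500.0):
    tee = harris_benedict_kcal(sex, weight_kg, height_cm, age_yr) * ACTIVITY[activity]
    return tee - deficit

# Invented example subject: 43-year-old sedentary woman, 95 kg, 165 cm.
print(round(prescribed_intake_kcal("female", 95.0, 165.0, 43.0, "sedentary")), "kcal/day")
```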
After randomization to an energy-restricted low-fat or low-carbohydrate diet, subjects presented to the GCRC at 0800 after a 10-hour overnight fast. Baseline serum PYY, glucose, insulin, leptin, and adiponectin levels were drawn at −15 min and 0 min. Subjects consumed a low-fat or low-carbohydrate test meal (mean 540 kcal), and serum samples were subsequently drawn at 30-minute intervals over the next 2.5 hours. Subjects received counseling from a study dietitian on maintenance of an energy-restricted low-fat diet or low-carbohydrate diet and were instructed to avoid modifying physical activity. Subjects were responsible for preparing their own meals. They presented to the GCRC weekly for a weight determination. Compliance was assessed by interview and degree of weight loss, with less than 1 kg of loss over a 3-week period defined as noncompliance. Noncompliant individuals were required to meet individually with a dietitian. After 8 weeks, subjects presented again to the GCRC and underwent procedures similar to those performed at baseline. Serum glucose concentrations were measured on a glucose analyzer using oxidative methodology, and serum insulin using a double-antibody RIA. For PYY determinations, aprotinin (Sigma-Aldrich, Inc., St. Louis, MO) at a concentration of 1 μg/ml and dipeptidyl peptidase IV (DPP-IV) inhibitor (Linco Research, Inc., St. Louis, MO) at a final concentration of 100 μM were added to the serum, and the samples were stored at −70 °C until assays were performed. Total PYY was measured using a sensitive and specific RIA (Linco Research, Inc., St. Louis, MO). The lower limit of detection was 10 pg/ml, and the coefficients of variation were 9.4% within and 8.5% between assays. Serum leptin and adiponectin were measured with ELISA kits (Diagnostic Systems Laboratories, Inc., Webster, TX). Areas under the curve (AUC) for insulin, glucose, and PYY were calculated with the trapezoidal method. The primary variable of interest was postprandial AUC PYY. In order to detect a 35% difference between the groups with a standard deviation of 154.5, based on a published study (7), 10 subjects per group were needed to achieve a power of 80% with α=0.05. Allowing for a potential 20% noncompliance rate, the sample size was increased to 16 subjects per group. All data are presented as means ± SEM. Baseline measurements were assessed with unpaired t-tests, and comparisons between groups were analyzed using repeated-measures ANOVA with time and diet as main effects. Linear relationships were tested by Pearson's correlation coefficient. The macronutrient composition of the diets was calculated using the Nutrition Data System for Research (version 4.04, Nutrition Coordinating Center, University of Minnesota). All statistical analyses were performed using JMP Version 8.0 (SAS Institute Inc., Cary, NC), with p<0.05 considered statistically significant.
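The trapezoidal AUC used for the postprandial curves can be sketched in a few lines. The sampling times follow the protocol above (0 to 2.5 h post-meal at 30 min intervals), while the PYY values are invented for illustration only.

```python
import numpy as np

# Trapezoidal rule: sum of the mean of adjacent points times the interval width.
time_hr = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
pyy_pg_ml = np.array([100.0, 138.0, 151.0, 144.0, 132.0, 121.0])  # hypothetical

auc = np.sum((pyy_pg_ml[1:] + pyy_pg_ml[:-1]) / 2.0 * np.diff(time_hr))
print(f"AUC PYY = {auc:.1f} pg*hr/ml")
```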
Results

Two subjects dropped out, one from each group. Baseline characteristics for the remaining 30 participants are summarized in Table 1. The mean age was 42.8 ± 2.0 years, and the mean BMI 35.5 ± 0.6 kg/m2. There were no significant differences in baseline characteristics. Furthermore, analysis with inclusion of the 2 eliminated subjects demonstrated no differences in any baseline variables between groups.

Discussion

Contrary to our hypothesis that weight loss with a low-carbohydrate diet would increase PYY levels, fasting serum PYY and postprandial AUC PYY decreased by nearly 10% after weight loss by either diet. Subjects randomized to the low-carbohydrate diet lost 6-fold more weight than those randomized to the low-fat diet. However, the change in AUC PYY occurred independently of the dietary intervention and the degree of weight loss. Few studies to date have evaluated the effects of dietary composition on PYY expression in humans. Unlike our previous study (6), this study showed no difference in postprandial PYY levels between the low-fat and low-carbohydrate diets, likely because subjects prepared their own meals and were not as adherent to their assigned diets as subjects in the previous study. The fact that some subjects did not achieve the weight loss expected with a 500 kcal/day deficit suggests noncompliance. Sloth et al. (8) reported no difference in PYY levels with a high-monounsaturated-fat, low-glycemic-index diet versus a low-fat diet. Likewise, Brownley et al. (9) reported that postprandial PYY levels were not affected by glycemic load in obese women. An interesting finding of this study is that postprandial PYY decreased significantly after weight loss regardless of whether a low-fat or low-carbohydrate diet was followed. Similarly, Sloth et al. (8) reported that 8 weeks of a low-energy diet resulted in lower PYY levels and increased appetite scores. In addition, Chan et al. (10) reported that short-term fasting over 48-72 hours reduced fasting PYY levels. Since lower PYY levels are associated with increased appetite (5,7), we speculate that the reduction in PYY levels following diet-induced weight loss represents a physiological homeostatic mechanism to preserve baseline body weight. Reduced PYY levels would indirectly stimulate hypothalamic neurons containing neuropeptide Y and agouti-related protein, which in turn would stimulate appetite and food intake. Limitations of this study include the measurement of total PYY rather than PYY3-36. However, total PYY levels correlate closely with PYY3-36 levels (11). Furthermore, subjects did not keep weekly food diaries; therefore, verification of compliance with the prescribed diets was not entirely achievable. Lastly, measures of hunger, appetite, and satiety were not evaluated. In summary, this study demonstrates that weight loss by either a low-fat or low-carbohydrate diet reduces postprandial serum PYY levels. This finding suggests that low PYY levels may contribute to the high recidivism and weight regain seen with energy-restricted diets. Further investigation is needed to determine whether diets comprised of various
The assessment of endosonographers in training

Endosonography (EUS) has an estimated long learning curve encompassing the acquisition of both technical and cognitive skills. Trainees in EUS must learn to master intraprocedural steps such as echoendoscope handling and ultrasonographic imaging, with interpretation of the normal anatomy and any pathology. In addition, there is a need to understand the periprocedural parts of the EUS examination, such as the indications and contraindications for EUS and the potential adverse events that could occur post-EUS. However, the learning process and progress vary widely among endosonographers in training. Consequently, the performance of a certain number of supervised procedures during training does not automatically guarantee adequate competence in EUS. Instead, the assessment of EUS competence should preferably be performed with an assessment tool developed specifically for the evaluation of endosonographers in training. Such a tool, covering all the different steps of the EUS procedure, would better depict the individual learning curve and better reflect the true competence of each trainee. This mini-review addresses the issue of clinical education in EUS with respect to the evaluation of endosonographers in training. The aim of the article is to provide an informative overview of the topic. The relevant literature of the field is reviewed and discussed, and the current knowledge on how to assess the skills and competence of endosonographers in training is presented in detail.

INTRODUCTION

Endosonography (EUS) has become an important diagnostic and therapeutic tool for medical gastroenterologists, surgeons, and oncologists worldwide. The learning of EUS is a rewarding but demanding task with an estimated long learning curve [1]. The long learning curve is partly explained by the fact that EUS has several different clinical indications [2,3]. Moreover, many of the lesions examined by EUS include a wide range of possible diagnostic entities [4,5]. Consequently, the competent endosonographer needs to master not only multiple maneuvers with the echoendoscope and its accessories, but also the endosonographic interpretation of the normal anatomy and any pathologic lesions (Figure 1). In the end, both cognitive and technical skills are essential to perform a safe EUS examination of high quality. In advanced endoscopy, the learning process and progress vary widely among trainees [1,6]. Therefore, the performance of a certain number of procedures during training does not automatically guarantee adequate competence in EUS. It is likely that an assessment tool that covers the different steps of the EUS procedure, and that is developed for the evaluation of endosonographers in training, would be more appropriate than a simple count of procedures for assessing competence. Such tools would likely better depict the learning curve of EUS and reflect the true competence of each individual trainee [6]. This mini-review addresses the issue of clinical education in EUS with respect to the evaluation of endosonographers training in basic, diagnostic EUS with or without fine-needle aspiration (EUS-FNA). The aim of this mini-review is to provide an informative, up-to-date overview of the topic. The relevant literature of the field is reviewed and discussed. The current knowledge on how to assess the skills and competence of endosonographers in training is presented in detail.
TRAINING IN EUS - FOR WHOM, WHERE, AND HOW?
It is recommended that the EUS-trainee should have completed a minimum of two years of training or practice in routine endoscopy before initiating training in EUS [7]. However, experience in advanced, therapeutic endoscopy might not be a prerequisite for successful, basic EUS-training [8]. Likewise, previous competence in transabdominal ultrasound is probably not vital for learning EUS [9].

There is limited data on the number of centers providing supervised training in EUS [10]. Although it is frequently reported [10], learning EUS by self-teaching without supervision is discouraged [7,9,11]. A large number of learning procedures are expected. Therefore, training in EUS should only be performed in centers that can provide a reasonably high volume of procedures along with experienced and motivated instructors [11]. This type of focused training is highlighted by a study published in 2005, which found that trainees in an advanced endoscopy fellowship in an academic center performed a larger number of supervised procedures compared with endosonographers trained in other types of practice [10]. Furthermore, it is important that the endosonographic findings of the trainee are co-evaluated by the supervisor in the early phase of training [11].

Ex vivo models used for training in EUS
Animal models can probably work as a facilitating tool for beginners or for trainees with little experience in EUS [12-14]. A live porcine model was evaluated by Bhutani et al [14] in a survey among 38 trainees with little experience in EUS, with these trainees participating in either of two EUS courses organized by the American Society of Gastrointestinal Endoscopy (ASGE) in 1997 and 2000. Over 90% of the respondents found the model helpful in enhancing their EUS skills, but there was no measurement of the effect on the learning curve of EUS. Similar models have also been evaluated and have been found to be useful for the purpose of learning EUS-FNA [15,16].

In vivo supervised training in EUS
Even though ex vivo models could be helpful tools in early EUS-training, they may not be available in all centers and cannot replace supervised training in real patients [7,11]. Regarding the equipment, the linear array echoendoscope can probably be introduced to trainees at the onset of training. A period of initial training with a radial echoendoscope was shown not to improve the performance of subsequent scanning with the linear array echoendoscope according to one study published in 2015 [17]. The recommended design of training programs in EUS can be further studied in the guidelines issued by the ASGE [11,18].

The decision as to when to introduce the trainee to EUS-FNA has been a matter of debate. Some authors advocate long previous experience with basic EUS, with a thorough knowledge of the normal and abnormal anatomy, before the introduction of EUS-FNA [19]. Others consider early trainee-performed EUS-FNA both appropriate and patient-safe [20,21]. In a study by Coté et al [20], supervisor-directed, trainee-performed EUS-FNA executed from the onset of training resulted in no recorded complications in a total of 305 patients. In addition, the performance characteristics of EUS-FNA, including the diagnostic accuracy, were found to be comparable (trainee vs supervisor). In another study by Mertz et al [21], the first 50 EUS-FNAs of pancreatic masses performed by a non-experienced endosonographer were found to be safe, with no adverse events detected.
However, in this study, the diagnostic sensitivity for cancer was significantly higher after the first 30 EUS-FNA procedures. Therefore, supervisors might consider introducing EUS-FNA already at the onset of training, at least from a patient safety point of view.

Continued learning after completed training
An important issue merits some attention: "How to ascertain that the obtained competence in EUS will be maintained after the completion of training in EUS?". One way of ascertaining the maintenance of competence is to follow the recommendations issued by the ASGE [7], which encourage the trained endosonographer to log the annual number of EUS-procedures and, like all other endosonographers, to regularly review the quality and outcome of the procedures. Educational activities, such as scientific meetings and hands-on workshops, should also be attended.

HOW MANY PROCEDURES TO BECOME COMPETENT IN EUS?
The simple answer to this difficult question is "we do not know". Therefore, the competence of an EUS-trainee can hardly be assessed only by the numeric count of performed procedures.

Basic EUS
According to guidelines published in 2001, a suggested minimum of 125 supervised procedures should be performed before acceptable competence in EUS can be expected [7]. For comprehensive competence in all aspects of EUS, the same guidelines recommend a minimum of 150 supervised, trainee-performed EUS-procedures. Out of these 150 procedures, at least 75 should have a focus on the pancreaticobiliary area and at least 50 should include EUS-guided sampling (EUS-FNA) [7]. These recommended numbers should be considered an absolute minimum and not a guarantee that the necessary skills will be acquired.

A few clinical studies [1,8,22] have investigated the number of training procedures required to become a competent endosonographer. These publications are summarized in Table 1. As discussed below, there is significant variation in the methodologies of the studies, in the variables measured, and in the criteria for competence when comparing the studies included in Table 1. This variation makes the results of these studies somewhat difficult to compare with each other.

In the early era of EUS, examinations were mainly performed for the purpose of tumor staging without sampling. Today, a majority of EUS-procedures include diagnostic sampling of lesions (EUS-FNA/B) or therapeutic interventions such as drainage of pancreatic pseudocysts. Therefore, to a large extent, radial echoendoscopes have been replaced by linear ones [11]. Consequently, the number of cases recommended many years ago for competence in EUS should be interpreted with some caution, since it might not be completely valid today.

EUS with EUS-FNA
Before the independent performance of EUS-FNA, the ESGE and the ASGE both recommend a minimum of 50 supervised, trainee-performed EUS-FNAs, of which 25-30 should be pancreatic EUS-FNAs [7,9]. No specific number of EUS-FNA procedures has been identified before full competence can be expected [9], but the learning curve most likely continues long after the initial period of supervised training [23]. In a retrospective study by Mertz et al [21], the sensitivity for the detection of pancreatic cancer by trainee-performed EUS-FNA was compared across quintiles of procedures. A significant increase in sensitivity after the third quintile was detected. Consequently, the authors concluded that the ASGE guideline of 25 supervised EUS-FNA procedures in solid pancreatic lesions seemed reasonable.
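To make the quintile comparison concrete, the short Python sketch below shows the arithmetic of splitting a trainee's consecutive procedures into quintiles and computing per-quintile sensitivity. The data and the improving success trend are synthetic, purely for illustration; nothing here reproduces the actual figures from Mertz et al.

```python
import numpy as np

def sensitivity_by_quintile(outcomes):
    """Diagnostic sensitivity per quintile of consecutive procedures.

    outcomes: sequence of booleans, one per trainee-performed EUS-FNA of a
    truly malignant lesion; True = cancer correctly detected.
    """
    outcomes = np.asarray(outcomes, dtype=float)
    quintiles = np.array_split(outcomes, 5)  # 5 chronological blocks
    return [q.mean() for q in quintiles]

# Hypothetical example: 50 consecutive procedures with an assumed
# learning trend (success probability rising from 0.5 to 0.9)
rng = np.random.default_rng(0)
p_success = np.linspace(0.5, 0.9, 50)
outcomes = rng.random(50) < p_success
print([f"{s:.2f}" for s in sensitivity_by_quintile(outcomes)])
```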
In a prospective Japanese study including only subepithelial lesions [24], the accuracy and safety of EUS-FNA performed by two trainees were compared with those of two experts. Before the study period, both trainees had performed 50 EUS-examinations without sampling and attended 20 EUS-FNAs performed by experts. In the study, a total of 51 cases were performed alternately by the trainee and the expert, and there was no difference in the acquisition of an adequate specimen. No major complications were recorded.

In a study by Wani et al [1], five EUS-trainees performing a total of 1412 examinations were assessed with regard to both basic EUS and EUS-FNA. The number of examinations required for acceptable competence varied significantly among the trainees. One trainee required 255 procedures, while another trainee was still in need of continued training after 402 procedures (Table 1). The authors concluded that, compared with the recommended minimum of 150 supervised cases [7], all five trainees needed a much larger number of training procedures to be competent. Consequently, it is likely that > 200 procedures are required for the majority of trainees. This estimation is supported by others, who argue that the number of recommended EUS procedures may be a significant underestimation of the true number of procedures needed [25].

WHAT IS GOOD QUALITY IN EUS?
Logically, the competence of the trainee is reflected by the quality of the EUS being performed. Consequently, in EUS, what is good quality, and what quality is good enough? One definition of adequate competence is suggested in the following guidelines by the ASGE: "The minimum level of skill, knowledge, and/or expertise derived through training and experience, required to safely and proficiently perform a task or procedure" [7]. Nevertheless, there is no consensus on the exact definition of competence in EUS, or with what tools, and on what scale, it should be measured [1]. It also remains to be agreed upon which specific indicators should be used as quality measures in EUS.

In 2006, the American College of Gastroenterology (ACG)/ASGE task force aimed to establish quality indicators in EUS to aid in the recognition of high-quality examinations [26]. An updated and extended version, including 23 quality indicators, was published in 2015 [27]. The 23 indicators were divided into three categories: preprocedure (n = 9), intraprocedure (n = 5), and postprocedure (n = 9). The three most prioritized indicators should be the frequency of adequate staging of GI malignancies, the diagnostic sensitivity of EUS-FNA in pancreatic masses, and the frequency of adverse events post-EUS-FNA [27]. However, these documents are basically intended for trained endosonographers working in clinical practice and not specifically for the situation of evaluating trainees in EUS. Naturally, the fully trained endosonographer should ultimately aim to meet these quality indicators. Interestingly, the authors stressed that a subject for future research is the amount of training required for obtaining "diagnostic FNA yields comparable to those of published literature".

The European Society of Gastrointestinal Endoscopy (ESGE) has published technical guidelines on EUS [28]; however, these guidelines do not include any quality indicators. As such, a recent initiative launched by the ESGE aims to address this specific issue.
A working group has been formed [29,30], but to date no report has been published.

WHAT TO MEASURE?
Thus, one way of assessing endosonographers in training would be to apply some of the quality indicators for EUS and to record the outcome on an arbitrary scale over time. However, the assessment of endoscopy trainees should not necessarily focus only on the quality indicators, but also on other parameters. The sensible approach would be to use predefined and validated assessment criteria as well as the direct observation of an expert [11].

There are several validated assessment tools for measuring the learning curve in endoscopy, such as the Mayo Colonoscopy Skills Assessment Tool (MCSAT) [31], the Assessment of Competency in Endoscopy (ACE) [32], the British Direct Observation of Procedural Skills (DOPS) [33], and the Global Assessment of Gastrointestinal Endoscopic Skills (GAGES) [34]. Technical skills such as scope navigation, tip control, and loop reduction, together with cognitive skills such as pathology identification and management of patient discomfort, are assessed and scored to a varying degree. However, the above tools were primarily designed for colonoscopy and not for EUS, which is why the ASGE has encouraged the development of objective criteria for the assessment of endosonographers in training [35].

The ASGE standards of practice committee has authored guidelines for credentialing and for granting privileges for EUS [7], with these guidelines stating that competence in EUS should be evaluated independently from other endoscopic procedures. As further specified in this publication, the competent endosonographer should acquire skills including, among others, safe intubation of the esophagus, appropriate sonographic visualization of various organs, recognition of abnormal findings, and appropriate documentation of the EUS-procedure [7].

ASSESSMENT TOOLS FOR ENDOSONOGRAPHERS IN TRAINING
Assessment tools for trainee performance in porcine EUS-models have been investigated [15]. However, it might be challenging to interpret trainee competence based on performance in an animal model, which is a quite different experience compared with everyday clinical EUS-practice. To date, there is no clear recommendation on what parameters to include in the assessment of endosonographers in training performing EUS in humans. Although there is a lack of a uniform standard, some assessment tools, developed for EUS-trainees and for use in real patients, have been proposed.

Table 1: Number of trainee-performed endosonography-procedures required for the adequate performance of the different steps of a diagnostic endosonography-examination, not including fine needle aspiration. Each range indicates the number of procedures required for the fastest learning trainee (low end) and the number of procedures required for the slowest learning trainee (high end). "Competency not reached" means that at least one trainee had not yet reached adequate competence by the end of the training period. In the study by Meenan et al [8] five trainees were assessed; in the study by Hoffman et al [22], twelve trainees were assessed.

Assessment tools that rate specific steps or maneuvers of the EUS-procedure have been investigated by some authors. As an example, in 2012 Konge et al [36] presented the EUS Assessment Tool (EUSAT), designed exclusively to measure EUS-FNA competence in the specific situation of mediastinal staging of non-small cell lung cancer.
Other examples include assessment tools for the accurate staging of esophageal cancer [37,38], for diagnostic EUS-FNA of pancreatic masses [21] or submucosal lesions [24], and for the adequate on-site trainee assessment of EUS-FNA specimens [39]. These studies are limited to a certain scenario, and they do not cover the complete examination including all organs and structures within reach of upper GI EUS.

Basic EUS without EUS-FNA
Only a handful of groups have presented tools aimed at assessing the complete EUS-procedure, including visualization of all the standard views. In an older study by Meenan et al [8], five EUS-trainees were evaluated in performing radial EUS, i.e., no EUS-FNA. In the beginning of training, the trainees observed supervisor-performed examinations (range: 55-170 cases). Afterwards, the trainees performed the examinations themselves (range: 25-124 cases). In this study, a study-unique data collection tool (Table 2) was designed to assess the ability of the trainees to use the ultrasound controls and to visualize a number of predetermined anatomic stations via the esophagus, the stomach, and the duodenum. Esophageal intubation with the echoendoscope was not assessed. Via the assessment tool and a point score system (maximum score: 40 points, Table 2), the trainees were evaluated for adequate competence. The authors concluded that the assessment tool was applicable in clinical practice and could identify trainees with a need for continued training. Difficult maneuvers could be identified, such as the dynamic visualization of the aortic outflow, of the splenic vein, and of the common bile duct. A drawback of the study, which limits its implications, is that linear EUS was not performed and that only five procedures per trainee were scored.

Table 2: Assessment form used by Meenan et al [8] to evaluate and assess endosonographers in training using a radial array echoendoscope. Points were awarded for the ability to produce "best views with certainty" from the three different positions of scanning. The minimum score for competence in each of the positions is provided in the rightmost column. Adapted from Meenan et al [8] and reprinted with permission from Georg Thieme Verlag KG.

In another older study, only published in abstract form, twelve EUS-trainees were evaluated and rated by an expert [22]. According to the text of the abstract, the trainee-performed EUS-examinations were assessed and rated with respect to the separate steps of the procedure (Table 1). Each step was scored and categorized as follows: 0 = failed; 1 = unsatisfactory; 2 = satisfactory; and 3 = excellent. Competency was defined as the consistent achievement of a score of 2. Unfortunately, any further details and comments on this assessment tool cannot be provided due to the lack of a full article publication.

Basic EUS including EUS-FNA
In a more recent study by Wani et al [1], five EUS-trainees performing a total of 1412 EUS-examinations were assessed by an EUS-expert. Beginning at the 25th examination, every 10th examination was assessed. Similar to the work by Meenan, the authors developed a standardized data collection tool including the different steps of the EUS-procedure (Figure 2). Each step was scored on a 5-grade scale. Then, the score of each step and the overall score were recorded. Finally, the assessment of competence was based on the trend and inclination of the score, and the learning curve was calculated by a so-called cumulative sum (CUSUM) analysis [1].
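Although the study does not publish its CUSUM code, the underlying arithmetic is standard and can be sketched as follows. The binary failure coding and the acceptable failure rate p0 are illustrative assumptions for this sketch, not parameters taken from Wani et al.

```python
def cusum_curve(failures, p0=0.10):
    """Running CUSUM score: S_i = S_{i-1} + (x_i - p0), where x_i is 1
    for an unsatisfactory procedure and 0 otherwise. A flattening or
    falling curve suggests the trainee is performing at, or better
    than, the acceptable failure rate p0 (an assumed value here)."""
    scores, s = [], 0.0
    for x in failures:
        s += x - p0
        scores.append(s)
    return scores

# Hypothetical trainee: frequent early failures, then improvement
example = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0]
print([round(v, 2) for v in cusum_curve(example)])
```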
The authors found the suggested assessment method to be both feasible and valuable for identifying trainees who needed continued training. The method also identified the anatomic stations, such as the pancreas and the ampulla, that were more difficult for trainees to master. A weak point of this study was that only every 10th examination was assessed.

The identical study methodology and assessment tool (Figure 2) was used in an enlarged study by Wani et al [40] published in 2015. This study included 17 trainees who performed a total of 4257 examinations in 15 tertiary centers. The results were similar to those presented in the first publication, with the learning curves showing a high degree of inter-trainee variation.

In 2017, another study was published by the same author [41], evaluating trainees in EUS and ERCP using the EUS and ERCP skills assessment tool (TEESAT). In every third trainee-performed EUS, a nearly identical assessment tool (Figure 2) as was used in the two previous studies [1,40] was applied to score the trainees. Twenty-two trainees participated in the study and 3786 examinations were graded. A centralized database was used and was found feasible for the collection of data. The authors concluded that TEESAT was a more time-consuming tool than any global rating scale, but that it had the clear advantage of monitoring the learning curve and providing precise feedback to trainees. TEESAT, therefore, could facilitate the improvement of certain steps or maneuvers. Finally, this study confirmed the fact that there was significant variability among the trainees concerning the time and number of procedures needed to achieve competence in EUS.

CONCLUSION
The safe and competent performance of advanced endoscopy procedures such as EUS is cognitively and technically demanding. Therefore, there is a definite need for the evaluation and assessment of EUS-trainees both during and at the completion of training. Some assessment tools have been evaluated in clinical studies, but only some of those tools cover all the steps and aspects of a complete, diagnostic EUS-procedure. Moreover, the few extensive assessment tools that have been studied thus far have not yet been fully validated by external and independent investigators. The small number of publications within the field is somewhat troublesome, meaning that today, there is no standardized measurement protocol and assessment tool regarding trainee performance in EUS. Consequently, no specific recommendation can be put forward on the most appropriate assessment tool to use for the evaluation of endosonographers learning basic, diagnostic EUS [6]. The assessment of endosonographers learning therapeutic EUS was not an aim of this article.

Nevertheless, EUS is a rapidly expanding field with a growing number of diagnostic and therapeutic indications [42-44]. Therefore, supervisors should be prepared to include new and additional parameters for assessment with respect to the type of EUS-procedure being trained. It may also be that trainee-performed EUS-FNA should be assessed more profoundly than previously attempted, including parameters such as diagnostic accuracy. Similar tools already exist for the purpose of assessing competence in polypectomy during colonoscopy [45].
Clinical research addressing the issue of assessing endosonographers in training should be encouraged. Studies presenting new assessment tools and studies validating suggested tools would be valuable. Such initiatives could be a great support in the education and training of future endosonographers. Although attempts are not lacking [27], there is an urgent need to establish an international consensus on the benchmarks for high-quality performance and competence in EUS.
2018-12-12T19:54:00.730Z
2018-11-26T00:00:00.000
{ "year": 2018, "sha1": "c7e7be5ed9c9bf20a8fe28bf7441ab470385dc19", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.12998/wjcc.v6.i14.735", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c7e7be5ed9c9bf20a8fe28bf7441ab470385dc19", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259040378
pes2o/s2orc
v3-fos-license
Exploring a Nuclear-Selective Radioisotope Delivery System for Efficient Targeted Alpha Therapy

Targeted alpha therapy (TAT) has garnered significant interest as an innovative cancer therapy. Owing to their high energy and short range, achieving selective α-particle accumulation in target tumor cells is crucial for obtaining high potency without adverse effects. To meet this demand, we fabricated an innovative radiolabeled antibody, specifically designed to selectively deliver 211 At (an α-particle emitter) to the nuclei of cancer cells. The developed 211 At-labeled antibody exhibited a superior effect compared to its conventional counterparts. This study paves the way for organelle-selective drug delivery.

Introduction
Drug delivery systems (DDSs) are a pivotal technology for achieving optimal drug efficacy and selectivity [1,2]. Alongside passive targeting using nanocarriers, including liposomes and polymers, active targeting, which involves the utilization of specific molecular interactions to achieve high specificity, has been widely used. Notably, antibodies are effective for achieving excellent specificity, as demonstrated by the numerous practical applications of antibody-drug conjugates (ADCs) [3,4]. Recently, higher-resolution drug delivery, that is, organelle-selective drug delivery, has garnered attention as a strategy for augmenting drug efficacy, based on its intensive accumulation at the target site [5]. In this study, we investigated a DDS designed to selectively target a specific organelle, namely the nuclei of target cells, and applied it in order to develop an efficient nuclear medicine.

Radiation therapy is a common cancer treatment modality, and in this regard, targeted radioisotope (RI) therapy has the advantage of being less burdensome than external beam radiation and can be applied to tumors that are difficult to irradiate externally, such as metastatic malignancies and brain tumors [6,7]. Several monoclonal antibodies armed with β-emitting radionuclides, including Zevalin (90 Y-labeled rituximab) [8] and Bexxar (131 I-labeled tositumomab) [9], have been developed as targeted RI medicines and have already been translated to practical use. Additionally, targeted alpha therapy (TAT) has garnered considerable interest in recent years [10,11]. Owing to the high energy and short range of α-rays, the selective accumulation of α-particles in target tumor cells can lead to remarkable therapeutic effects with reduced adverse effects (Figure 1a). Additionally, α-rays primarily induce double-strand breaks (DSB), resulting in highly potent cytotoxicity. Therefore, TATs are being extensively investigated worldwide. For example, Xofigo (223 RaCl2) has been approved for practical use in the treatment of bone metastatic prostate cancer [12]. We are also actively engaged in TAT development with a focus on 211 At as an α-particle source [13-17]. In particular, Na 211 At, which leverages the halogen-accumulating nature of the thyroid, is currently undergoing clinical trials for thyroid cancer therapy [13]. Furthermore, 211 At-labeled α-methyl-L-tyrosine, which targets the cancer-associated amino acid transporter LAT1, exhibits remarkable antitumor activity [14]. Antibodies armed with α-particles also present a promising avenue for TAT. Since the pioneering study by Wilbur et al., several 211 At-labeled antibodies have been reported [18,19], each demonstrating significant antitumor activity [20,21].
In this study, we developed a novel radiolabeled antibody designed to selectively deliver 211 At to the nuclei of target cells for efficient TAT (Figure 1b). To accomplish this, we employed a nuclear localization signal (NLS), which acts as a tag for protein transport to the nucleus [22]. Specifically, the anti-cancer antibody was conjugated with NLS-functionalized 211 At via a cleavable linker. This antibody conjugate was envisioned to behave as follows: (i) target cell recognition, followed by internalization via endocytosis; (ii) lysosomal cleavage of the linker to release an 211 At-functionalized fragment; and (iii) accumulation of the released NLS-functionalized 211 At in the nucleus of the target cell, and the subsequent induction of DNA damage (Figure 1c). Such high-resolution targeting of RIs was expected to result in improved selectivity and high efficacy, particularly in α-ray therapy, based on the high energy and short range of α-rays. The molecular design was validated using fluorescence imaging. Specifically, this imaging analysis underscored the importance of the membrane permeability of the payload with respect to lysosomal escape, which serves as an intermediate between steps (i) and (iii) stated above. Based on this discovery, we devised a design to facilitate this step by harnessing the dual function of decaborane ([B]10): as a carrier of 211 At and as a membrane permeabilizer. This is because [B]10 forms a stable complex with 211 At [18,19], and is also a potent membrane permeabilizer owing to its chaotropic effect [23,24]. As expected, the developed nucleus-targeting 211 At-labeled antibody showed superior efficacy. Therefore, in this study, we propose a high-resolution DDS with remarkable potency and selectivity as a novel drug development trend.

Results and Discussion
The molecules used in this study are shown in Figure 2. We used an anti-EpCAM antibody known for its selective binding to pancreatic cancer cells and cancer stem cells [25,26].
NLS(PKKKRKV)-functionalized TMR/211 At was conjugated to the antibody via a valine-citrulline (Val-Cit) linker [27], which can be readily cleaved by lysosomal cathepsin. For the fluorescent probe, we designed and synthesized a doubly fluorescent-labeled antibody, NLS(TMR)-Ab(AF488), in which an Alexa Fluor 488 (AF488)-labeled antibody was loaded with NLS-functionalized TMR (Scheme S1, Figure S1); the AF488 was used to track antibody dynamics, while the TMR served as an indicator of the intracellular dynamics of the payload. We also synthesized NLS(211 At)-Ab as a radiolabeled antibody to deliver 211 At into the nuclei of cancer cells (Scheme S2, Figure S2). These antibody conjugates were readily obtained via Fmoc solid-phase peptide synthesis (Fmoc SPPS) and maleimide-thiol ligation; after preparing the Val-Cit linker-conjugated NLS doubly functionalized with Cys by SPPS, the introduction of TMR or [B]10 at the C-terminal Cys was followed by coupling with the antibody at the N-terminal Cys, yielding NLS(TMR)-Ab(AF488) or NLS(211 At)-Ab, respectively. The TMR-labeled NLS (NLS(TMR)) was also prepared to trace the dynamics of the NLS-functionalized payload (Scheme S3), while 211 At-Ab, a conventional 211 At-labeled antibody, was prepared as the control (Scheme S4, Figure S5). Notably, 211 At was successfully introduced into [B]10, as reported by Wilbur et al. [18,19], during the preparation of both NLS(211 At)-Ab and 211 At-Ab (Figures S3, S4, S6 and S7).

To verify the molecular design of our nucleus-selective DDS, live cell imaging was performed using PANC-1, a pancreatic cancer cell line. We first analyzed the intracellular dynamics of NLS(TMR) by introducing it into the cytosol via electroporation.
NLS(TMR) was distributed throughout the cytosol and localized to the nucleus at a relatively high concentration, confirming the function of the NLS in ensuring delivery to the nucleus (Figure 3a). Next, we analyzed the dynamics of NLS(TMR)-Ab(AF488) (Figures 3b, S8 and S9). NLS(TMR)-Ab(AF488) was smoothly internalized into PANC-1 cells, and both AF488 and TMR fluorescence were observed in the cells. Importantly, after 1 h of incubation, the observed AF488 and TMR fluorescence partially unmerged, indicating that the Val-Cit linker was cleaved in lysosomes, allowing for the successful release of the payload. However, TMR fluorescence was observed as dots in the cells, suggesting that the NLS-functionalized TMR remained in the lysosomes and did not escape into the cytosol due to its low membrane permeability.

Overall, fluorescence imaging demonstrated both the validity of the nucleus-selective DDS proposed in this study and its limitations (Figure 3c). The above imaging analysis confirmed the following three steps: (i) endocytosis into the target cells; (ii) lysosomal cleavage of the linker; and (iii) transport to the nucleus. However, nuclear TMR fluorescence was not observed when NLS(TMR)-Ab(AF488) was used, indicating that another critical step in the present method is lysosomal escape into the cytosol. Namely, the efficacy of the present nucleus-specific DDS depends on the physical properties (mainly the membrane permeability) of the payload. Based on these observations, we employed [B]10 as an 211 At carrier, in consideration of its high membrane permeability due to its chaotropic effect [23,24]. DSB induction by the 211 At-labeled antibodies was then evaluated by γH2A.X immunostaining (Figures 4a and S10-S13).
Furthermore, the cell viability observed after the 4-day incubation period indicated that NLS(211 At)-Ab exhibited stronger cytotoxicity than 211 At-Ab (Figure 4b). These findings suggested that the accumulation of 211 At resulted in potent cytotoxicity, thereby indicating the efficacy of the present nucleus-targeting strategy.

Materials and Methods

Synthesis of Compounds
The details of the synthetic procedures and the characterization data are given in the Supplementary Materials.

Fluorescent Imaging of NLS(TMR) Using Electroporation
PANC-1 cells were cultured in RPMI containing 10% FBS and 1% penicillin-streptomycin. PANC-1 cells were harvested by treatment with trypsin-EDTA solution, and cells in RPMI (1.5 × 10^6 cells/mL, 390 µL) were transferred to 0.4 cm cuvettes. NLS(TMR) (6.03 µg) in RPMI (10 µL, final concentration: 10 µM) was added to the cuvette, and the cells were exposed to the electric field (voltage: 200 V, capacitance: 900 µF). The cells were transferred to a 35 mm dish and incubated for 8 h at 37 °C. After washing with RPMI three times, the cells were treated with Hoechst33342 (10 µg/mL) in RPMI (100 µL) for 10 min at room temperature. After washing with RPMI three times, the cells were observed using confocal laser scanning microscopy (A1R, Nikon, Tokyo, Japan).

Fluorescent Imaging of NLS(TMR)-Ab(AF488)
PANC-1 cells were cultured in RPMI containing 10% FBS and 1% penicillin-streptomycin, and incubated for 2 days on a 35 mm glass-bottom dish. After suction of the medium, Hoechst33342 (10 µg/mL) in RPMI (100 µL) was added to the dish, and the cells were incubated for 10 min at 37 °C. After washing with RPMI three times, NLS(TMR)-Ab(AF488) (PBS solution, 50 µg/mL) in RPMI (100 µL) was added to the dish. After the cells were incubated for 1 h at 37 °C, they were observed using confocal laser scanning microscopy (A1R, Nikon, Tokyo, Japan).
Protocol for Evaluation of DSB Induction
PANC-1 cells (2 × 10^4 cells/well, 96-well microplate) in RPMI (200 µL) were incubated for 1 day at 37 °C. After suctioning the medium, PBS, 211 At-Ab, or NLS(211 At)-Ab in PBS (100 µL, final concentration: 1 MBq/mL) was added to the plate, and the cells were incubated for 4 h at 37 °C. After suctioning the medium, the cells were fixed with 4% PFA at room temperature for 30 min. After washing with PBS three times, the cells were treated with 0.1% Triton X-100 in PBS (100 µL) for 5 min. After washing with PBS three times, an AF488-labeled anti-γH2A.X antibody in PBS (100 µL, 2 µg/mL) was added, and the cells were incubated overnight at 4 °C. After washing with PBS three times, the cells were treated with Hoechst33342 in PBS (100 µL, 10 µg/mL) for 10 min at room temperature. After washing with PBS three times, the cells were observed using an All-in-One Fluorescence Microscope (KEYENCE CORPORATION, Osaka, Japan). The obtained images are shown in the Supplementary Materials (Figure S8). The images were analyzed with Fiji (NIH). DSB induction was quantified as the total area stained with the AF488-labeled anti-γH2A.X antibody divided by the total area stained with Hoechst33342. The parameters were set as follows: brightness: 90-255 (Hoechst) and 25-255 (AF488) for the color threshold; size (micron^2): 0-infinity; and circularity: 0.00 for "analyze particles". Three images were analyzed for all entries, and the mean and standard deviation were calculated.

Protocol for Evaluation of Cell Viability
PANC-1 cells (1 × 10^3 cells/well, 96-well microplate) in RPMI (200 µL) were incubated for 1 day at 37 °C. After suctioning the medium, PBS, 211 At-Ab, or NLS(211 At)-Ab in PBS (100 µL, final concentration: 1 MBq/mL) was added, and the cells were incubated for 3.5 h at 37 °C. After washing with PBS three times, RPMI containing 1% FBS (200 µL) was added, and the cells were incubated at 37 °C for 4 days. After incubation, Cell Counting Kit-8 solution (Dojindo, 20 µL, final concentration: 10%) was added to the plate, and the cells were incubated for 3 h at 37 °C. The absorbance of formazan (450 nm) was measured with an Infinite F50 reader (TECAN, Männedorf, Switzerland) in order to evaluate cell viability. The survival rate of each entry was normalized by setting the survival rate of the untreated cells to 100%. Three trials were carried out for all entries, and the mean and standard deviation were calculated.
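For readers who prefer a script over the interactive Fiji workflow, the two quantifications above can be sketched in Python. The file names, the single-channel 8-bit image assumption, and the example absorbance values are hypothetical; only the intensity thresholds (90-255 and 25-255) come from the protocol.

```python
import numpy as np
from skimage import io  # scikit-image, for reading TIFF images

def stained_area(img, lo, hi):
    """Number of pixels whose intensity falls inside [lo, hi],
    mimicking Fiji's color-threshold step on an 8-bit channel."""
    return int(((img >= lo) & (img <= hi)).sum())

def dsb_index(hoechst_img, af488_img):
    """gamma-H2A.X-positive area normalized to the nuclear (Hoechst)
    area, using the thresholds stated in the protocol."""
    nuclei = stained_area(hoechst_img, 90, 255)
    foci = stained_area(af488_img, 25, 255)
    return foci / nuclei if nuclei else float("nan")

# Hypothetical usage with three image pairs for one condition
pairs = [("hoechst_1.tif", "af488_1.tif"),
         ("hoechst_2.tif", "af488_2.tif"),
         ("hoechst_3.tif", "af488_3.tif")]
vals = [dsb_index(io.imread(h), io.imread(a)) for h, a in pairs]
print(f"DSB index: {np.mean(vals):.3f} +/- {np.std(vals):.3f}")

# Viability: normalize absorbance to the untreated control (= 100%)
a450 = np.array([0.82, 0.79, 0.85])  # hypothetical treated wells
a450_ctrl = 1.10                     # hypothetical untreated mean
survival = 100 * a450 / a450_ctrl
print(f"survival: {survival.mean():.1f}% +/- {survival.std():.1f}%")
```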
Conclusions
In summary, in order to develop an efficient TAT, a novel radiolabeled antibody was designed and synthesized to enable nucleus-selective RI transport, resulting in increased potency. This high-resolution drug delivery was achieved by incorporating a signal peptide (NLS) and a cleavable Val-Cit linker, whose functions were confirmed via fluorescence imaging. The imaging analysis also highlighted the necessity for the payload to be membrane-permeable in order to enable its escape from lysosomes. To overcome this challenge, we employed [B]10, which exhibits a dual function as an 211 At carrier and a membrane permeabilizer. To the best of our knowledge, this is the first report on the fabrication of an antibody conjugate oriented toward the organelle-selective delivery of payloads. Organelle-selective drug delivery is a state-of-the-art drug delivery technology, and this study demonstrates its feasibility and clarifies design guidelines.
2023-06-03T15:17:02.431Z
2023-05-31T00:00:00.000
{ "year": 2023, "sha1": "596a6477b1c0b17150e357f7e4489fc95a0b6ae7", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/ijms24119593", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "d3d372b4cb788c68bfc572ffbdbf7b5c58f10a33", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
225636703
pes2o/s2orc
v3-fos-license
Multiband Dual-Meander Line Antenna for Body Centric Networks Biomedical Applications by Using UMC 180 nm

This paper presents a compact on-chip antenna architecture for 5G body centric networks (BCNs) applications. The dual meander line (DML) integrated antenna consists of two stacked layers of two turns of meander lines and a ground metal layer. The DML structure decreases the resonant frequency, increases the number of tuning bands, and broadens the operating bandwidth, making it a suitable choice for high data rate biomedical applications. The antenna's performance is evaluated in both scenarios, inside and outside the human body. The proposed antenna is fabricated using UMC 180 nm CMOS technology with a total area of 1150 µm × 200 µm, and operates at the 22 GHz, 34 GHz, 44 GHz and 58 GHz bands with an operating bandwidth of up to 2 GHz at an impedance bandwidth of ≤ −7.5 dB (VSWR ≤ 2.5). The proposed antenna is simulated using the high frequency structure simulator (HFSS), and good agreement is shown between the measured and simulated results.

Introduction
5G technologies have the potential to make significant contributions to providing secure healthcare-oriented wireless networks with improved energy efficiency. Ultra-wideband (UWB) communication has become the solution to the demand for higher capacity for multiple devices in 5G technology. UWB is a key component in many applications, such as radar imaging and smart biomedical sensors, which can be worn or implanted in the human body [1-3]. Recently, these biosensors have created closed UWB wireless body centric networks (BCNs), as shown in Fig. 1 [4]. Body-centric wireless communications refer to human-self and human-to-human networking with the use of wearable and implantable wireless sensors. BCN is a subject area combining wireless body-area networks (WBANs), wireless sensor networks (WSNs) and wireless personal area networks (WPANs). Body-centric wireless communications technology has numerous applications in healthcare, smart homes, personal entertainment, identification systems, space exploration and the military [5-8]. Battery lifetime in surgically implanted devices is still a great challenge, as the network has to last for years. Recently, implantable antennas have been largely studied for many sensing and wireless communication applications, with a tremendous need for integrated wireless powering techniques, such as an energy harvesting/wireless power transfer module, to reduce the need for regular battery replacement [9]. Multiband antennas are a potential solution for the simultaneous wireless information and power transfer (SWIPT) technique, addressing both the data rate and the long standby time in fifth generation (5G) mobile communication systems [10-12]. In order to achieve fully integrated sensors, CMOS technology is employed to design implantable antennas. In such a case, not only can the size of the antenna itself be minimized, but a wireless powering module can also be integrated on the same chip. In this paper, a multiband on-chip antenna is designed and fabricated in UMC 180 nm CMOS technology. The proposed antenna structure is shown in Fig. 2. The fabricated antenna and other antenna parameters are presented in Section 5. Finally, a conclusion of the paper is given in Section 6.

Antenna Configuration and Design
In order to provide good wireless communication from outside the body to inside, factors such as high tissue conductivity, biocompatibility and small antenna size must be taken into consideration.
Furthermore, simulation models and physical models are also important to predict the behavior of the antenna in the presence of the human body. The most commonly used models are one-layer models and three-layer models, which were compared in [13]; the results showed that the measured input parameters can be quite different from the real ones if the antenna is to be implanted in the fat layer, while otherwise no significant differences were observed. The dual meander line (DML) antenna is fabricated using the UMC 180 nm CMOS process. The technology layers consist of a low-resistivity silicon substrate and six metal layers embedded in inter-dielectric layers, with the upper metal layer M6 being covered with a dielectric passivation layer.

Antenna Performance Results
The design and performance of the DML antenna are evaluated using the high frequency structure simulator (HFSS). The reflection coefficient is shown in Fig. 4(a), together with the resulting input impedance, both real and imaginary. The results show that when the human body effect is present, the antenna impedance matching is more significant for implantation of the antenna inside the human body. The resonant frequency is shifted down and the operating bandwidth is increased. The other parameter studied is the reflection coefficient phase, which changes only slightly when the antenna is placed outside the human body, and changes abruptly when the antenna is implanted in the human body.

Measurement of the Proposed Antenna
The reflection coefficients of the proposed DML antenna were measured in the Microstrip Lab at the Electronics Research Institute. The reflection coefficient was obtained by on-wafer probing; the setup consisted of one GSG 67 GHz PicoProbe RF probe (pitch: 150 µm) and a Rohde & Schwarz ZVA67 vector network analyzer covering 10 MHz to 67 GHz, as shown in Fig. 8(a). The fabricated UMC 180 nm die (mini-ASIC, 1525 µm × 1525 µm) was fixed on a Karl Suss PM5 manual probe station. A photo of the fabricated antenna is shown in Fig. 8(b), with four ground pads connected to the ground metal layer M1. The measured reflection coefficient is shown in Fig. 9(a), and the antenna parameters are summarized in Table 1 at the different resonant frequencies.

Table 2 shows the performance comparison between the proposed antenna and other previously reported integrated antennas operating within the same frequency range. In the 60 GHz band, the proposed DML antenna shows a gain comparable to [18] and [20], with a larger bandwidth and multiband operation. References [21] and [22] report lower gain and single-band operation at frequencies higher than those of the proposed antenna; both antennas have the same efficiency (35%) in the 65 GHz range. In references [17] and [18], the antennas resonate at 24 GHz but with narrow bandwidth and a larger area than the proposed antenna, while [19] and [20] have the largest antenna areas, with high gain. The proposed antenna has low gain at the lower frequencies; this can be explained by the small chip area, which reduces the effective aperture of the antenna. The proposed antenna is compact when compared to other structures, and it offers a large number of operational bands. The multiband operation with wide bandwidth makes the proposed antenna a good choice for BCNs covering a large number of sensors for many patients. Essentially, the antenna provides a wireless power transfer capability that can be integrated for battery-less applications.
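As a sanity check on the matching criterion used throughout the paper, the standard conversion between a reflection coefficient in dB and VSWR can be computed directly. The snippet below uses only the textbook formulas and no measured data; it confirms that the quoted −7.5 dB bound corresponds to a VSWR of roughly 2.5.

```python
import math

def s11_db_to_vswr(s11_db):
    """Convert a reflection coefficient magnitude in dB to VSWR."""
    gamma = 10 ** (s11_db / 20)        # linear |Gamma|
    return (1 + gamma) / (1 - gamma)

def vswr_to_s11_db(vswr):
    """Convert a VSWR value back to a reflection coefficient in dB."""
    gamma = (vswr - 1) / (vswr + 1)
    return 20 * math.log10(gamma)

print(f"VSWR at -7.5 dB: {s11_db_to_vswr(-7.5):.2f}")    # ~2.46
print(f"S11 at VSWR 2.5: {vswr_to_s11_db(2.5):.2f} dB")  # ~-7.36
```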
2020-08-27T09:06:21.245Z
2020-07-14T00:00:00.000
{ "year": 2020, "sha1": "674828b4358d00f28a6b67509baad2c3bf330994", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-9292/9/9/1350/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "db25a9dc4598d80bf91989f1640eca8a83661cd9", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Materials Science" ] }